Biology
Exploring the diversity of cell gatekeepers could be the key to better crops
Annamaria De Rosa et al. Genome-wide identification and characterisation of Aquaporins in Nicotiana tabacum and their relationships with other Solanaceae species, BMC Plant Biology (2020). DOI: 10.1186/s12870-020-02412-5 Journal information: BMC Plant Biology
http://dx.doi.org/10.1186/s12870-020-02412-5
https://phys.org/news/2020-06-exploring-diversity-cell-gatekeepers-key.html
Abstract Background Cellular membranes are dynamic structures, continuously adjusting their composition to allow plants to respond to developmental signals, stresses, and changing environments. To facilitate transmembrane transport of substrates, plant membranes are embedded with both active and passive transporters. Aquaporins (AQPs) constitute a major family of membrane-spanning channel proteins that selectively facilitate the passive, bidirectional passage of substrates across biological membranes at an astonishing 10^8 molecules per second. AQPs are most diversified in the plant kingdom, comprising five major subfamilies that differ in temporal and spatial gene expression, subcellular protein localisation, substrate specificity, and post-translational regulatory mechanisms, collectively providing a dynamic transport network spanning the entire plant. Plant AQPs can transport a range of solutes essential for numerous plant processes including water relations, growth and development, stress responses, root nutrient uptake, and photosynthesis. The ability to manipulate AQPs towards improving plant productivity relies on expanding our insight into the diversity and functional roles of AQPs. Results We characterised the AQP family from Nicotiana tabacum (NtAQPs; tobacco), a popular model system capable of scaling from the laboratory to the field. Tobacco is closely related to major economic crops (e.g. tomato, potato, eggplant and peppers) and itself has new commercial applications. Tobacco harbours 76 AQPs, making it the second largest characterised AQP family. These fall into five distinct subfamilies, for which we characterised phylogenetic relationships, gene structures, protein sequences, selectivity filter compositions, sub-cellular localisation, and tissue-specific expression. We also identified the AQPs from tobacco's parental genomes (N. sylvestris and N.
tomentosiformis), allowing us to characterise the evolutionary history of the NtAQP family. Assigning orthology to tomato and potato AQPs allowed for cross-species comparisons of conservation in protein structures, gene expression, and potential physiological roles. Conclusions This study provides a comprehensive characterisation of the tobacco AQP family and strengthens the current knowledge of AQP biology. The refined gene/protein models, tissue-specific expression analysis, and cross-species comparisons provide valuable insight into the evolutionary history and likely physiological roles of NtAQPs and their Solanaceae orthologs. Collectively, these results will support future functional studies and help transfer basic research to applied agriculture. Background Cellular membranes are dynamic structures, continuously adjusting their composition to allow plants to respond to developmental signals, stresses, and changing environments [1]. The biological function of cell membranes is conferred by their protein composition, with the lipid bilayer providing a basic structure and permeability barrier, and integral transmembrane proteins facilitating diffusion of selected substrates [1]. Cell membrane diffusion is a fundamental process of plant biology and one of the oldest subjects studied in plant physiology [2]. Diffusional events at the cellular level culminate in the coordinated transport of substrates throughout the plant to support development and growth. Plant membranes contain three major classes of transport proteins: ATP-powered pumps, transporters, and channel proteins [3]. Pumps are active transporters that use the energy of ATP hydrolysis to move substrates across the membrane against a concentration gradient or electrical potential. Transporters move a variety of molecules across a membrane along or against a gradient at rates of 10^2 to 10^4 molecules per second.
Unlike the first two classes, channel proteins are bidirectional and increase membrane permeability to a particular molecule. Channel proteins are permeable to a wide range of substrates and can pass up to 10^8 molecules per second. In plants, aquaporins (AQPs) constitute a major family of such channel proteins that facilitate selective transport of substrates for numerous biological processes including water relations, plant development, stress responses, and photosynthesis [4, 5]. The AQP monomer forms a characteristic hour-glass membrane-spanning pore that assembles as tetrameric complexes in cell membranes. The union of the four monomers creates a fifth pore at the centre of the tetramer, which may provide an additional diffusional path [6]. The substrate specificity of a given AQP is conferred by the complement of pore-lining residues, which achieve specificity through a combination of size exclusion and biochemical interactions with substrates [7]. Key identified specificity residues include the dual Asn-Pro-Ala (NPA) motifs, the aromatic/Arginine filter (ar/R filter) and Froger's positions (P1-P5) [8, 9, 10]. However, other pore-lining residues and the lengths of the various transmembrane and loop domains of the AQP monomer are also known to influence substrate specificity through conformational changes of the pore size and accessibility [7, 11]. It is likely that other residues that determine specificity and transport efficiency remain to be elucidated. Aquaporins, which are members of the major intrinsic protein (MIP) superfamily, are found across all taxonomic kingdoms [12]. While mammals typically have around 15 isoforms, plants have vastly larger AQP families, commonly ranging from 30 to 121 members [5, 13, 14, 15]. This impressive diversification has been facilitated by the propensity for gene duplication events, especially prevalent in the angiosperms, and likely by the adaptive potential provided by AQPs.
Based on sequence homology and subcellular localisation, up to thirteen AQP subfamilies are now recognised in the plant kingdom [13, 16, 17, 18, 19]. Eight of these AQP subfamilies occur in more ancestral plant lineages and include the GlpF-like Intrinsic Proteins (GIPs) and Hybrid Intrinsic Proteins (HIPs) in mosses, the MIPs A to E of green algae, and the Large Intrinsic Proteins (LIPs) in diatoms. The remaining five subfamilies are prevalent across higher plants, have extensively diversified into sub-groups, and include the Plasma membrane Intrinsic Proteins (PIPs; subgroups PIP1 and PIP2), Tonoplast Intrinsic Proteins (TIPs; subgroups TIP1 to TIP5), Small basic Intrinsic Proteins (SIPs; subgroups SIP1 and SIP2), Nodulin 26-like Intrinsic Proteins (NIPs; subgroups NIP1 to NIP5), and X Intrinsic Proteins (XIPs; subgroups XIP1 to XIP3). The XIPs are present in many eudicot species, but are absent in the Brassicaceae and monocots [17]. The AQP subfamilies differ to some degree in substrate specificity and integrate into different cellular membranes, providing plants with a versatile system for both sub-cellular compartmentalisation and intercellular transport. In plants, AQPs are by far the most extensively diversified, capable of transporting a wide variety of substrates including water, ammonia, urea, carbon dioxide, hydrogen peroxide, boron, silicon and other metalloids [7, 20, 21]. More recently, lactic acid, oxygen, and cations have been identified as permeating substrates [22, 23, 24, 25], with RNA molecules also implicated as a possible transported substrate [26]. Further versatility is achieved through tightly regulated spatial and temporal tissue-specific expression of different AQP genes, as well as post-translational modification of AQP proteins (e.g. phosphorylation) that controls membrane trafficking and channel activity [27, 28].
Given their diverse complement of transported substrates and growing involvement in many developmental and stress-responsive physiological roles, AQPs are targets for engineering more resilient and productive plants [5, 29]. For example, CO2-permeable AQPs are being targeted to enhance photosynthetic efficiency and increase yield [5, 30, 31], AQPs responsive to drought stress are being used to improve tolerance to water-limited conditions [32, 33], and manipulations of boron-permeable AQPs are being pursued to improve crop tolerance to soils with either toxic or sub-optimal levels of boron [15, 34, 35]. The genomic era of plant biology has provided unprecedented opportunity to query AQP biology by exploring sequence conservation and diversity between isoforms in many species. This is reflected in the increasing number of plant AQP family studies reported in recent years. Almost exclusively, these studies focus on the species of interest with no direct comparison against AQPs from other plant species. However, extending an AQP family characterisation to closely related species (e.g. within the same taxonomic family) can be especially informative, with comparisons of close orthologous AQPs helping to better elucidate the evolutionary history and physiological roles of different AQPs. Comparisons between closely related species can also improve the translation of basic AQP research to applied agriculture, especially if the analysis involves crop species. To improve our current knowledge of AQP biology and aid in the potential use of AQPs towards improving plant resilience and productivity, we have characterised the AQP family from Nicotiana tabacum (NtAQPs; tobacco). Tobacco is a fitting candidate species to explore unknowns of AQP biology, as it is a popular model system for studying fundamental physiological processes that is capable of scaling from the laboratory to the field.
Tobacco is part of the large Solanaceae family, which includes species of major economic importance such as tomato, potato, eggplant and peppers [36], and itself has renewed commercial applications in the biofuel and plant-based pharmaceutical sectors [37, 38, 39]. We found that tobacco harbours 76 AQPs, making it the second largest family characterised to date. Tobacco is a recent allotetraploid, which accounts for its large AQP family size. Phylogenetic relationships, gene structures, protein sequences, selectivity filter compositions, sub-cellular localisation, and tissue-specific expression profiles were used to characterise NtAQP family members. We also identified the AQPs of the tobacco parental genomes (Nicotiana sylvestris and Nicotiana tomentosiformis), allowing us to characterise the recent evolutionary history of the NtAQP family. Furthermore, using the already defined AQP families of tomato (Solanum lycopersicum) and potato (Solanum tuberosum) [40, 41], we made cross-species comparisons of gene structures, protein sequences and expression profiles, to provide insight into conservation and diversification of protein function and physiological roles for future studies and engineering efforts. Results Identification and classification of NtAQP genes A homology search, using tomato and potato AQP protein sequences as queries, identified 85 loci putatively encoding AQP-like genes in the genome of the TN90 tobacco cultivar [42]. Nine of these genes encode severely truncated proteins and were classified as pseudogenes (Additional file 1: Table S1). The remaining 76 genes had a sufficient level of homology to tomato and potato AQPs to be considered 'bona fide' tobacco AQPs (NtAQPs; Table 1). Seventy-three of these 76 tobacco AQP genes were also identified in the genome of the more recently sequenced K326 cultivar (Nitab4.5v) [43] (Table 1).
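The locus-triage step described above can be sketched in a few lines: candidates recovered by the homology search are retained as bona fide AQPs only if their predicted protein is not severely truncated. The locus names, lengths, and the 50% cutoff below are all illustrative assumptions, not values from the study.

```python
# Hypothetical sketch of pseudogene filtering after a homology search.
# A typical full-length plant AQP is roughly 250-290 aa; candidates whose
# predicted protein falls far short of this are flagged as pseudogenes.
# The threshold and locus data are invented for illustration.

TYPICAL_AQP_LENGTH = 270  # approximate full-length plant AQP size (aa)

def classify_locus(protein_length, min_fraction=0.5):
    """Label a candidate locus as a bona fide AQP or a pseudogene."""
    if protein_length < TYPICAL_AQP_LENGTH * min_fraction:
        return "pseudogene"
    return "bona fide AQP"

# Toy candidate loci: predicted protein lengths in amino acids.
candidates = {"locus_A": 281, "locus_B": 92, "locus_C": 264}
labels = {name: classify_locus(length) for name, length in candidates.items()}
```

In this sketch, locus_B (92 aa) would be set aside as a pseudogene while the other two pass, mirroring the 85-loci-to-76-genes reduction reported above.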
To determine the precise protein sequences and gene structures of the tobacco AQPs, the genomic regions surrounding the identified coding sequences were examined in all forward translated frames. The likely protein products and associated intron/exon structures were curated through alignments with respective Solanaceae homologues. Our gene models were then independently validated and supported by alignments against tobacco whole-transcriptome mRNA-seq data (obtained from Edwards et al., 2017), which also aided in defining the 5′ and 3′ UTRs. A comparison of our manually curated AQP protein and gene models against the computational predictions for the TN90 and K326 cultivars [42, 43] revealed that 15% of TN90 and 50% of K326 computed AQP models were incorrectly annotated (Table 1). Errors in the computed gene models were encountered across all NtAQP subfamilies and consisted of missing or truncated 5′ and 3′ UTRs, absent exons, truncated exons (ranging from 4 to 87 amino acids), and exon insertions (16-57 amino acids) due to inclusion of adjacent intron sequence (Fig. 1, Additional file 2: Figure S1). A summary of our NtAQP gene models, identifiers and genomic locations for the TN90 and K326 cultivars is available in Additional file 1: Table S2. FASTA sequence files of coding DNA sequence (CDS), protein, and genomic sequence can be found in Additional file 3. Sequences of these high-confidence NtAQP protein and gene models have been submitted to NCBI (Table 1). Table 1 List of the 76 tobacco aquaporin genes identified in this study Full size table Fig. 1 Representative examples of our curated gene models validated with RNA-seq data. Our curated models were aligned to those computed in Edwards et al. (2017). The examples depicted in the figure have high (NtTIP2;3t), medium (NtPIP2;9t) and low expression levels (NtNIP2;1s). Mapped genomic reads locate to mRNA-encoding regions and as such denote exon boundaries and UTRs.
Red boxes in the Edwards predicted gene models denote missing coding regions as indicated by deviations from the RNA-seq localisation Full size image Through the process of curating the tobacco AQP gene and protein sequences, we have made corrections to several previously mis-annotated AQP genes of tomato and potato, namely StXIP3;1, StXIP4;1, SlXIP1;6, SlPIP2;1, and SlTIP2;2 (Additional file 1; Table S3). Through our tobacco genome sequence analysis, we also identified an erroneous non-synonymous single nucleotide mutation (C > T, CDS position 619) in the reported mRNA sequence of the frequently studied tobacco AQP1 gene (NtAQP1; assigned as NtPIP1;5s in this study). The mutation results in a Histidine (H) to Tyrosine (Y) substitution at amino acid position 207 being incorrectly reported in the initial cloning of this gene and subsequent use ([44]; NCBI AF024511 and AJ001416). This substitution is notable since His207, which corresponds to the His193 position of the well-studied crystal structures of spinach PIP2;1 [6, 45, 46], is highly conserved across all angiosperm PIP AQPs and is a key regulator of the gating, and therefore the transport capacity, of the AQP channel [6, 45, 47]. The inadvertent use of this H207Y NtAQP1 mutant in functional characterisation studies may have implications for the conclusions drawn for this frequently studied plant AQP. In support of His207 being the correct residue in NtAQP1, we found that independently generated gDNA-seq assemblies as well as RNA-seq mapped reads from both the TN90 and K326 cultivars had the His207 residue (Additional file 2: Figure S2). Furthermore, closely related NtAQP1 orthologues across several Solanaceae species, including 3 additional Nicotiana species, all had the His207 residue (Additional file 2: Figure S2).
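The codon arithmetic behind the H207Y call can be verified directly: a change at CDS position 619 (1-based) lands on the first base of codon 207, and a C > T there converts a histidine codon to tyrosine. The CDS fragment below is invented padding; only the codon arithmetic and the His/Tyr identities come from the text.

```python
# Minimal check of the reported C>T mutation at CDS position 619.
# Only the four codons relevant to His/Tyr are included in this toy table.
CODON_TABLE = {"CAC": "H", "CAT": "H", "TAC": "Y", "TAT": "Y"}

def residue_at(cds, position_1based):
    """Return (residue number, amino acid) for the codon containing a CDS position."""
    codon_index = (position_1based - 1) // 3          # 0-based codon number
    codon = cds[codon_index * 3 : codon_index * 3 + 3]
    return codon_index + 1, CODON_TABLE.get(codon, "X")

# Toy CDS: 206 placeholder codons followed by the codon of interest.
wild_type = "GCT" * 206 + "CAC"   # His at residue 207 (positions 619-621)
mutant    = "GCT" * 206 + "TAC"   # C>T at CDS position 619 gives Tyr

assert residue_at(wild_type, 619) == (207, "H")
assert residue_at(mutant, 619) == (207, "Y")
```

This confirms that a single C > T at base 619 is exactly what an H207Y substitution requires, consistent with the gDNA-seq and RNA-seq evidence cited above.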
Gene structures and phylogenetic analysis of tobacco AQPs To place the 76 curated NtAQP protein sequences into their respective subfamilies, we used phylogenetic analyses incorporating characterised AQP isoforms from a diverse set of angiosperms: Arabidopsis (Arabidopsis thaliana, Brassicales), tomato (Solanum lycopersicum, Solanales), rubber tree (Hevea brasiliensis, Malpighiales), rice (Oryza sativa, Poales) and soybean (Glycine max, Fabales) (Additional file 4: Figure S3). The NtAQPs segregated into the five distinct subfamilies that commonly occur in higher plants, namely the NIPs (16 members), SIPs (5), XIPs (4), PIPs (29) and TIPs (22) (Fig. 2, Additional file 4: Figure S3). An emerging problem among the increasing number of studies characterising plant AQP families across species is confusion in nomenclature that either misses or incorrectly assigns orthology between AQP genes. Such confusion is seen in the nomenclature between tomato and potato AQPs. At least in this case, the naming inconsistency is predominantly a result of the two family characterisations being published concurrently by different groups [40, 41]. To contribute to a more consistent naming structure for AQPs between species, especially within a single family of angiosperms, we aligned our NtAQP naming convention with that of the tomato AQPs, given its more consistent correspondence to likely Arabidopsis AQP orthologues. Additional file 1: Table S2 lists the tobacco AQPs with their corresponding tomato and potato orthologous genes. Fig. 2 Phylogeny and gene structures of 76 tobacco aquaporins. The phylogenetic tree was generated using the neighbour-joining method (via MEGA7) from MUSCLE-aligned protein sequences. Confidence levels (%) of branch points were generated through bootstrap analysis (n = 1000).
Gene structures are located adjacent to their respective positions on the phylogenetic tree; blue rectangles correspond to the exons; green rectangles and arrows to the 5′ and 3′ UTRs, respectively. The scale bar at the top of the gene structures indicates nucleotide length. The last letter in the NtAQP names denotes the likely origin of the gene (s = N. sylvestris, t = N. tomentosiformis, x = unknown) Full size image Sixty-five of the 76 NtAQP genes had clear orthologs in tomato, which directed their naming (Additional file 2: Figure S4 and Additional file 1: Table S2). The 11 tobacco AQPs with no apparent tomato or potato ortholog were allocated designations unique to tobacco (denoted by black stars in Additional file 2: Figure S4). Gene lengths varied between NtAQPs from 1091 bp to 6627 bp, with a single extreme instance of 17,278 bp (NtPIP2;11s) due to a large intron insertion (Fig. 2). The exon-intron patterning of NtAQP genes was highly conserved with that of their tomato and potato orthologs (Additional file 1: Table S2) [40, 41]. Individual AQPs within the PIP, TIP, NIP and SIP subfamilies were well conserved across the three Solanaceae species (Additional file 2: Figure S4). The XIPs were an exception, as they predominantly clustered phylogenetically within each species, pointing to a high degree of intra-species XIP diversification within the Solanaceae (Additional file 2: Figure S4). A distinctive feature of the phylogeny was that most NtAQPs reside as pairs, supported by high bootstrap values (Fig. 2). The high homology in protein sequences between members of these phylogenetic pairs also extended to highly similar nucleotide sequences and gene structures (Fig. 2).
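The pairing observation above rests on pairwise sequence identity over aligned proteins. A minimal sketch of that measure, using invented toy sequences rather than real NtAQP data, shows how an s/t homeologue pair stands out:

```python
# Percent identity over pre-aligned protein sequences; columns where both
# sequences are gapped are ignored. Sequences and names are toy stand-ins.

def percent_identity(a, b):
    """Identity (%) over aligned positions, skipping all-gap columns."""
    assert len(a) == len(b), "sequences must be aligned to equal length"
    compared = [(x, y) for x, y in zip(a, b) if not (x == "-" and y == "-")]
    matches = sum(1 for x, y in compared if x == y and x != "-")
    return 100.0 * matches / len(compared)

aligned = {
    "AQP1;1s": "MGKEVDV-NPAQLV",
    "AQP1;1t": "MGKEVDI-NPAQLV",   # near-identical hypothetical homeologue
    "AQP5;2s": "MSTDLRFGNPASIV",   # a more distant hypothetical isoform
}
pid = percent_identity(aligned["AQP1;1s"], aligned["AQP1;1t"])  # ~92.3%
```

In a real analysis the same statistic would be computed over a MUSCLE alignment of all 76 proteins, with the highest off-diagonal values picking out the homeologous pairs seen in the tree.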
Tobacco AQP protein sequence comparisons General structural features of NtAQP proteins Topological analysis using TOPCONS (see materials and methods) predicted that all NtAQP proteins consist of six transmembrane helical domains, five intervening loop regions and cytoplasmically localised N- and C-terminal tails, which is consistent with the typical structure of AQPs (Fig. 3). The sizes of the transmembrane helical domains appear to be an integral property of the AQP structure, given their remarkably conserved lengths across the subfamilies (Fig. 3a). Conversely, the lengths of the loop regions showed substantial variability between subfamilies (Fig. 3a). The most pronounced was Loop A, which is prominently longer and apoplastically exposed in the PIP2s (18 aa) and shorter in the NIPs (8 aa) compared to the average length in TIPs, SIPs, and XIPs (14 aa). The cytoplasmic Loop B is shorter in XIPs (20 aa vs. 24 aa). Loop C is nearly double the length in the XIPs (38 aa) compared to the other subfamilies (20 aa). Loop D is slightly longer in the PIPs (12 aa) and shorter in the SIPs (7 aa), while Loop E is substantially longer in the XIPs (32 aa) and shorter in the NIPs (20 aa) (Fig. 3a). The cytoplasmically localised N- and C-terminal tails are the most varied in size of any of the AQP domains (Fig. 3a). The N-terminal tail ranges from 59 aa in the NIPs to just 7 aa in the SIPs, and the C-terminal tail from 30 aa in the NIPs to 14 aa in the PIPs. Fig. 3 Protein sequence comparisons of NtAQP subfamilies. a Diagrammatic illustration of an AQP depicting protein topology and lengths of the N-terminal tail (N-term), TransMembrane domains (TM) 1-6, Loops A-E, and C-terminal tail (C-term). The average amino acid (aa) lengths of each structural feature are listed for the different NtAQP subfamilies. The common length of a domain is represented in grey, while deviations from the common length are in colour; PIPs (orange), NIPs (purple), SIPs (green), TIPs (blue) and XIPs (yellow).
b Overall and intra-domain sequence similarities for each NtAQP sub-family. A schematic representation of the AQP domains is illustrated at the top, with aligned columns showing protein sequence identical sites (black) and the BLOSUM62 similarity score (grey) between members of the given NtAQP subfamily Full size image Examining sequence conservation of the different protein domains across the subfamilies revealed that the transmembrane helices are generally the most highly conserved features of the AQP (Fig. 3b). Loops B and E are also highly conserved relative to the other domains, likely owing to their direct role in forming the transmembrane pore. Conversely, Loops A and C, along with the two terminal tails, were found to be the least conserved domains within each NtAQP sub-family (Fig. 3b). To learn more about the putative functional characteristics of the different NtAQPs, we used multiple protein sequence alignments to report residue compositions at key positions in the protein known to regulate AQP function (Table 2). Included are the dual Asn-Pro-Ala (NPA) motifs, the five Froger's position residues (P1-P5), and the residues of the aromatic/Arginine filter (ar/R filter), all of which are specific pore-lining residues that contribute to determining which substrates permeate through the AQP pore. We also report several other sites known to be post-translationally modified, which influence channel activity and membrane localisation (Table 2). Table 2 Amino acid composition of NtAQPs at known functionally important positions Full size table NtPIP subfamily The NtPIPs represent the largest NtAQP subfamily, with 29 members that are phylogenetically divided into PIP1 and PIP2 subgroups. Despite being the largest subfamily, the NtPIPs were among the most conserved in protein sequence (> 50%; Fig. 3b). The apoplastically exposed Loops A and C were the exceptions, having only ~20% sequence identity and varying in size between PIP1 and PIP2 proteins (Fig.
3). This sequence diversification could be of functional importance, given that Loop A is involved in PIP-PIP dimerization mediated primarily through a conserved cysteine residue, which is present in all NtPIPs [48, 49]. The generally high sequence similarity across most of the PIP protein domains was also reflected in both PIP1s and PIP2s having identical configurations of residues across the NPA and ar/R motifs, which were predominantly hydrophilic residues (Table 2). Only Froger's position 2 showed variation, with amino acids of different properties (G, M or Q) occupying this position (Table 2). The NtPIP1s are predominantly distinguished from NtPIP2s by having longer N-terminal and shorter C-terminal tail sequences. The N-terminal tail is involved in calcium-dependent gating of the pore, which occurs through interactions involving two acidic residues (Asp28 and Glu31, Table 2) [45]. Pore gating is also triggered by pH, involving protonation of a Loop D histidine (His193, Table 2) and phosphorylation of a Loop B serine (Ser115, Table 2) [45, 47]. These four residues were identified in each NtPIP, indicating the entire subfamily retains these modes of regulation (Table 2). The Loop B serine (Ser115), or a phosphorylatable threonine, was also conserved in members of the XIPs, TIPs and SIPs (but not NIPs), suggesting a shared mechanism of gating regulation between different NtAQPs (Table 2). Two commonly phosphorylated serine sites were found conserved in the longer C-terminal tail of NtPIP2s (Ser274 and Ser277; Table 2, Additional file 2: Figure S5). The phosphorylation status of these serine residues is known to facilitate protein-protein interactions, influence trafficking to and from the plasma membrane, and alter the transport capacity of the pore [5, 50]. NtPIP1 proteins have the second of these serine residues (Ser277), but it is not predicted to be phosphorylated (Table 2; Additional file 2: Figure S5).
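A Table 2-style tabulation of selectivity-filter residues amounts to pulling fixed alignment columns out of each sequence. The sketch below does this for the dual NPA motifs and the four ar/R sites; the coordinates and the toy sequence are invented for illustration and do not correspond to the real NtAQP alignment.

```python
# Extract selectivity-filter residues at fixed (hypothetical) alignment
# coordinates, mimicking how Table 2 columns would be built per isoform.

NPA1 = slice(76, 79)       # first NPA motif (0-based, illustrative)
NPA2 = slice(192, 195)     # second NPA motif
AR_R_SITES = {"H2": 60, "H5": 180, "LE1": 199, "LE2": 205}  # ar/R filter

def filter_residues(seq):
    """Return the NPA motifs and ar/R filter residues of one aligned sequence."""
    return {
        "NPA1": seq[NPA1],
        "NPA2": seq[NPA2],
        "ar/R": "".join(seq[i] for i in AR_R_SITES.values()),
    }

# Build a toy aligned sequence carrying a canonical PIP-like filter (F, H, T, R).
toy = list("G" * 210)
toy[76:79] = "NPA"
toy[192:195] = "NPA"
toy[60], toy[180], toy[199], toy[205] = "F", "H", "T", "R"
toy_seq = "".join(toy)
```

Run over every isoform in a real alignment, this yields exactly the kind of residue table used above to spot NPA variants (e.g. NPS, NPV) and ar/R substitutions between subfamilies.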
A strongly conserved positively charged lysine or arginine directly preceding the second phosphorylated serine is found across all NtPIPs, and also more broadly across PIPs from other plant species (data not shown), with the exception of NtPIP1;5 and PIP2;11, which have a histidine (Additional file 2: Figure S5). Histidine can achieve a positive charge through protonation, indicating a possible pH-regulated functional state of the C-terminal tail in these NtPIPs. NtNIP subfamily NIPs were found to have the lowest overall proportion of identical sites (~10%), suggesting a highly divergent subfamily at the sequence level (Fig. 3b). The sequence variation was evenly distributed across all AQP domains, with only Loop B and Loop E retaining modest conservation with > 30% identical residues per site. This comparatively higher conservation likely reflects these two loops being directly involved in forming the main pore structure and controlling substrate selectivity. Loops B and E each contain an NPA motif, and Loop E also contains ar/R and Froger's residues (Table 2). Across the NtNIPs, there was substantial variation in the residues constituting the dual NPA motifs (NP A/S/V) and across all 5 Froger's positions (Table 2). All but LE2 of the ar/R residues were variable, although the residues present tended to be more hydrophobic (Table 2). Also notable in the NtNIPs were their distinctively longer N- and C-terminal tails (~57 and ~30 aa, respectively) compared to those in other subfamilies (Fig. 3a). The extended C-terminal tail contains numerous serine residues, many of which were predicted to be phosphorylated (Additional file 2: Figure S5). Included were serine residues at positions homologous to the confirmed phosphorylated sites of Ser262 in GmNOD26 (a soybean NIP) and Ser277 in PIPs (Table 2).
The Ser115 phosphorylation site that controls aspects of pore gating in PIPs was conserved and predicted to be phosphorylated only in NtNIP4;3s, with all other NtNIPs having a structurally rigid proline residue at this position (Table 2). NtTIP subfamily Conservation among the NtTIPs was ~22% sequence identity (Fig. 3b). Similar to the NIPs, the highest sequence conservation occurred in Loops B and E (> 40%). The dual NPA motif, ar/R H2 and Froger's P3 to P5 are well conserved among the different TIP subgroups. The exceptions are NtTIP2;1s, which has an NPD configuration of the first NPA motif, and the NtTIP5;1 proteins, which have an H > N substitution at ar/R H2 (Table 2). The other ar/R and Froger's sites are rather variable among the NtTIPs, especially ar/R LE2, which varies between amino acids of quite differing properties (V, R or Y; Table 2). A histidine, as opposed to a phenylalanine, at ar/R LC of NtTIP2s, TIP4s and TIP5s (Table 2) suggests an enhanced capacity to transport ammonia [51]. The Ser115 phosphorylation site that controls pore gating in PIPs was identified in 5 of the 22 NtTIPs, with the remaining NtTIPs possessing a threonine, which is also a potentially phosphorylatable residue. NtTIP2 and NtTIP5 proteins have a conserved histidine (His131) in Loop C that is involved in a pH-regulated gating of the pore similar to that of His193 in Loop D of PIPs and NIPs [52, 53]. The C-terminal tail of NtTIPs contained on average fewer than two serine residues, none of which were predicted to be phosphorylation targets (data not shown). NtSIP subfamily While comprising only 5 genes, the NtSIP subfamily had low sequence conservation, with Loop A the least conserved (Fig. 3b). The first NPA motif varied with NP A/T/L combinations (Table 2). Substantial variation was also found in other key residues, with completely different configurations of residues in the ar/R and Froger's P1-P2 between NtSIP1 and NtSIP2 proteins (Table 2).
The N-terminal tails of NtSIPs were distinctly shorter than those of other subfamilies (~7 aa) (Fig. 3a). NtXIP subfamily The XIPs are a small sub-family with high sequence identity (~75%). The first NPA motif is replaced by an NPV motif in all four NtXIP proteins (Table 2). There is a strong consensus in the residues residing in the Froger's and dual NPA motifs, with the only variation being I/A at ar/R H2 (Table 2). Concordant with other studies of XIPs, Loop C of NtXIPs is substantially longer (~38 aa) than that of other subfamilies [54]. NtXIPs retain the conserved Ser115, although it was not a predicted phosphorylation target (Table 2). The C-terminal tail of NtXIPs contained a single serine residue, which was not predicted to be phosphorylated (data not shown). Subcellular localisation of tobacco AQPs in planta AQPs can facilitate diffusion of a range of substrates across various plant membranes, and the specific membrane localisation can vary between the different subfamilies, which ultimately influences sub-cellular flow and compartmentalisation of solutes. Computational prediction programs can be used as an initial inference of subcellular localisation to further help elucidate putative biological activities and physiological functions of candidate proteins [55]. We conducted subcellular prediction analyses using three commonly used software programs, Plant-mPLoc, WoLF PSORT and YLoc (see materials and methods). Consistency in prediction across the three programs was found for 35 (46%) of the NtAQPs (Table 2). Consensus in predicted localisation was mainly observed for the PIP2s and the NIPs, which were generally predicted to be plasma membrane (PM) localised.
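The three-tool agreement figure above (35 of 76, 46%) corresponds to a simple consensus rule: a prediction counts as consistent only when all tools return the same compartment. A minimal sketch, with made-up predictions rather than real Plant-mPLoc, WoLF PSORT or YLoc outputs:

```python
# Consensus across subcellular-localisation predictors: a protein's call is
# 'consistent' only when every tool agrees. All prediction data below are
# invented examples for illustration.

def consensus(predictions):
    """Return the shared compartment if all predictors agree, else None."""
    calls = set(predictions.values())
    return calls.pop() if len(calls) == 1 else None

proteins = {
    "PIP2;5": {"tool_a": "PM", "tool_b": "PM", "tool_c": "PM"},
    "TIP1;1": {"tool_a": "tonoplast", "tool_b": "PM", "tool_c": "extracellular"},
}
agreed = {name: consensus(p) for name, p in proteins.items()}
fraction_consistent = sum(v is not None for v in agreed.values()) / len(agreed)
```

Applied to the toy data, only the PIP entry is consistent (fraction 0.5); over the full family the same tally yields the 46% reported above.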
The TIPs and SIPs had the most contrasting subcellular localisation predictions, with TIP localisations ranging between tonoplast, PM, peroxisome, cytoplasmic and extracellular, and SIPs having PM, tonoplast, chloroplast, ER and extracellular localisations across the three prediction tools (Table 2). To complement the predictions, representative tobacco AQPs from the larger PIP, TIP and NIP subfamilies were visualised in planta using GFP:NtAQP fusions. NtSIPs were not included in this analysis as they are a smaller AQP subfamily, while NtXIPs are already established as localising to the PM [56]. Plant AQPs retain their capacity for faithful subcellular localisation between tissues, even when expressed in other plant species, as evident from numerous studies examining subcellular localisation or physiological manipulation using transgenic AQPs foreign to the host species [5, 57, 58, 59, 60, 61, 62]. As such, we introduced our tobacco GFP:AQP transgenes into Arabidopsis, allowing us to utilise established GFP marker lines that delineate specific subcellular compartments [63]. Such marker lines are crucial in guiding the correct interpretation of subcellular locations, given the close proximity of certain subcellular structures occupied by AQPs. For example, both the PM and ER are possible locations, but parts of the ER network lie immediately adjacent to the PM, making it difficult to discern between ER, PM, or co-localisation. Interpretations are further compounded by the large vacuoles of most plant cells, which occupy much of the internal volume, pushing the cytoplasm and its contents to the periphery. This can give the illusion of PM localisation even for cytosolic proteins such as 'free' GFP, especially if only examined as a 2D optical slice at the whole-cell level (Fig. 4ai). Fig. 4 In planta sub-cellular localisation of PIP, TIP and NIP aquaporins.
Confocal images of root cortical cells of transgenic 8-day-old Arabidopsis seedlings. a, b, d, f GFP marker lines; false coloured purple. c, e, g NtAQP:GFP lines; false coloured green. Subpanels (i-iv) are; (i.) Optical cross-section midway through a root cortical cell. (ii.) GFP signal associated with nucleus; confocal image (left) DIC image (right). (iii.) Close-up of cell peripheral margin. (iv.) Maximum intensity projections compiled from serial z-stack images. a GFP-only localisation. b Plasma membrane (PM:GFP) marker. c NtPIP2;5 t (PIP:GFP). d Endoplasmic reticulum (ER:GFP) marker. The ER is known not to be uniformly present around the cell periphery, which is reflected by regions of bright GFP signal (solid arrowhead) interspersed with regions of no GFP signal (open arrowhead). e NtNIP2;1 s (NIP:GFP). f Tonoplast (Tono:GFP) marker showing characteristic features of the tonoplast membrane including, transvacuolar strands (v) and general undulating appearance (arrow). g NtTIP1;1 s (TIP:GFP). Notable sub-cellular features are marked by: asterisks for the nucleus, ‘V’ for transvacuolar strands, arrowheads indicate instances of varied brightness (solid = high signal, empty = no signal) in GFP fluorescence in d (iii) and e (iii), or undulations of the tonoplast in f (iv) and g (iv). Scale bar 5 μm Full size image We used confocal microscopy to visualise the subcellular localisation of GFP:NtAQP and GFP marker lines using both 2-D slices and 3-D optical stacks. To avoid signal contamination from chlorophyll auto-fluorescence, which excites and emits at wavelengths close to GFP, we examined root cells. GFP marker lines localising to the cytoplasm, plasma membrane (PM), ER, and tonoplast (tono) were used as these are the expected possible locations of the PIPs, TIPs, or NIPs (Fig. 4 ).
Key differences between the four sub-cellular features were clearly discernible in the vicinity of the nucleus, the topography of the signal, and 3D renders of serial Z-stack images of the cells (Fig. 4 b-g). The PM:GFP marker localised exclusively to the periphery of the cell when adjacent to the nucleus (Fig. 4 bii), the ER:GFP wrapped around the nucleus (Fig. 4 dii), and Tono:GFP localised internally to the nucleus leaving a signal void on the side adjacent to the PM (Fig. 4 fii). PM:GFP produced a sharp defined integration with the cell margin (Fig. 4 biii), featuring as an outer shell in the 3D render (Fig. 4 biv). The ER:GFP peripheral signal was mottled in appearance (Fig. 4 di), consisting of localised bright specks with distinct regions of no signal (Fig. 4 diii), that appeared as a ‘web’ in the 3D render (Fig. 4 div). Tono:GFP was present as large undulating ‘sheets’ of signal associated with the trans-vacuolar strands (tonoplast-delimited cytoplasmic tunnels) and folds of vacuole membrane (tonoplast) (Fig. 4 fi-iv), which had a distinct ‘wavy’ topography (Fig. 4 fiii). Having established the defining features of the marker lines, we moved to examining the representative NtAQPs. Distinct in planta subcellular localisation patterns were observed for the PIP, TIP and NIP NtAQPs, consistent with the known membrane targeting properties of these different AQP subfamilies (Fig. 4 c,e,g). The GFP signal of the representative PIP (NtPIP2;5 t) appeared sharp and uniform around the cell periphery, with signal running external to the nucleus and forming a smooth outer shell in the 3D render with no discernible signal in any internal structures (Fig. 4 c). This pattern was concordant with the PM:GFP marker (Fig. 4 b), indicating a strong integration of NtPIP2;5 t into the PM. The representative NtNIP (NtNIP2;1 s) had features indicating it co-localises to the PM and ER.
The peripheral localised NtNIP GFP signal was mottled in appearance with distinct specks of intense bright signal similar to the ER marker. However, unlike the ER marker, these specks were dispersed along a consistent basal signal continuous around the cell periphery, indicative of PM localisation (Fig. 4 ei-iii). The 3D render further demonstrated the shared shell-like PM signal overlapping the mottled web-like ER patterned signal (Fig. 4 eiv). The localisation of the representative NtTIP (NtTIP1;1 s) is consistent with integration into the tonoplast. The NtTIP GFP signal showed a uniform yet diffuse localisation within the cell consistent with tonoplast labelling. The NtTIP GFP signal surrounded the nucleus on the cytosolic but not plasma membrane side (Fig. 4 gi-ii), and the labelled membrane had a wavy topography with the occurrence of internal membranes resembling transvacuolar strands (Fig. 4 giii-iv). The PM integration of NtPIP2;5 t was predicted by all three software programs, whereas the tonoplast localisation of NtTIP1;1 s was only predicted by Plant-mPLoc. Lastly, the PM localisation of NtNIP2;1 s was predicted by all three programs, but none predicted its co-localisation with the ER (Table 2 ). Parental association and recent evolutionary history of tobacco AQPs The distinctive phylogenetic pairing of most NtAQPs in our initial phylogenetic characterisation is likely characteristic of the recent evolutionary origin of tobacco, which arose from an allotetraploid hybridisation event between N. sylvestris and N. tomentosiformis only ~ 0.2 M years ago [ 42 , 43 ]. To explore the evolution of the tobacco AQP family, we identified the AQP gene families in the two parental lines using NtAQP nucleotide coding sequences as queries in BLAST searches. Initially, 40 and 41 AQPs were identified in N. sylvestris and N. tomentosiformis , respectively, which is comparable to the number of AQP genes found in the related diploid species of tomato and potato (Table 3 ).
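Assigning each NtAQP to its parental lineage reduces to scoring it against candidate parental genes and taking the best match. A toy sketch of that logic, using naive percent identity on pre-aligned fragments in place of the actual BLAST searches; sequences and gene names here are illustrative, not the real data:

```python
def percent_identity(a, b):
    """Naive percent identity between two pre-aligned sequences of equal length."""
    matches = sum(x == y for x, y in zip(a, b))
    return 100.0 * matches / max(len(a), len(b))

def assign_parent(nt_seq, parental_seqs):
    """Assign an NtAQP to the parental gene with the highest identity.

    `parental_seqs` maps parental gene name -> aligned coding sequence.
    """
    return max(parental_seqs, key=lambda g: percent_identity(nt_seq, parental_seqs[g]))

# Toy aligned coding-sequence fragments (hypothetical).
nt = "ATGGCTAAGGAT"
parents = {"N.sylPIP1;1": "ATGGCTAAGGAC",   # 11/12 positions match
           "N.tomPIP1;1": "ATGACTGAGGAT"}   # 10/12 positions match
print(assign_parent(nt, parents))  # N.sylPIP1;1
```

In practice the same best-hit idea underlies the 's'/'t' suffix assignments, but on full coding sequences with proper alignment scoring rather than this toy identity measure.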
As shown in this work, tobacco has 76 AQPs, almost a full set from each parental species (40 from N. sylvestris and 42 from N. tomentosiformis ), consistent with a recent allotetraploid hybridisation event. The introduction of the parental N. sylvestris and N. tomentosiformis AQPs into the NtAQP phylogeny transformed the majority of the distinct NtAQP phylogenetic pairs into small clades of four genes where each of the paired NtAQPs was now clearly associated with an AQP from one of the two parents (e.g. NtPIP1;1 sub-clade, Fig. 5 ). This phylogenetic relationship confirmed that the distinctive phylogenetic pairing of NtAQPs corresponds to orthologous ‘sister’ genes arising from hybridisation, with both parental genomes having contributed one AQP gene to each tobacco sister pair (Fig. 5 ). Initially, 30 sister gene pairs were identified that had a clear match to an orthologous gene from both N. sylvestris and N. tomentosiformis (Fig. 5 ). The ancestral origin of the NtAQP genes was denoted in the nomenclature by the addition of a suffix ‘s’ or ‘t’ (e.g. NtPIP1;1 s and NtPIP1;1 t ), to indicate a N. sylvestris or N. tomentosiformis lineage, respectively. Table 3 Summary of total AQPs currently identified within Solanaceae Full size table Fig. 5 Phylogenetic relationship of tobacco, N. sylvestris and N. tomentosiformis AQPs. Phylogenetic trees for each AQP sub-family were generated using the neighbour-joining method from MUSCLE alignments of nucleotide coding sequences. Confidence levels (%) of branch points were generated through bootstrapping analysis ( n = 1000). N. sylvestris (N. syl) and N. tomentosiformis (N. tom) AQPs are colour coded in blue and orange, respectively. Green stars indicate a loss of a parental gene in tobacco post-hybridisation; Blue and Red stars indicate gene loss events in N. sylvestris and N. tomentosiformis , respectively. Purple and Yellow stars indicate pre-hybridisation gene gain events in N.
sylvestris and N. tomentosiformis , respectively Full size image One NtAQP gene had no resolved match to a N. sylvestris or N. tomentosiformis parental AQP and was assigned a suffix ‘x’ ( NtPIP2;1x ). The lack of a clear parental match to NtPIP2;1x likely means that the orthologous gene has been lost in the parental genome post tobacco emergence, or the orthologous parental AQP was not identified due to incomplete coverage of sequencing data. Either way, the presence of this gene in the tobacco genome allows us to infer its presence in a parental genome at the time of hybridisation. We predict that NtPIP2;1x was inherited from N. tomentosiformis , as it occurs in a distinct clade with a tobacco sister gene ( NtPIP2;1 s ) and an orthologous N. sylvestris AQP ( N.sylPIP2;1 ), but lacks a N. tomentosiformis progenitor ortholog (orange box, Fig. 5 ). As such, assigning NtPIP2;1x as a N. tomentosiformis descendant brings the total number of AQPs in the parental genomes to 40 in N. sylvestris and 42 in N. tomentosiformis , with the total number of genes within the PIP, NIP and TIP subfamilies being very similar to those of tomato and potato (Table 3 ). The phylogenetic analysis also revealed recent evolutionary events in the tobacco, N. sylvestris and N. tomentosiformis AQP families. These events were recognised by deviations from the conventional four-gene small sub-clade groupings comprised of the tobacco sister genes and their respective parental orthologs. Seven AQP gene loss events were recognised in N. sylvestris , six of which occurred prior to the tobacco hybridisation event as the given AQP was absent in both N. sylvestris and tobacco (blue stars, Fig. 5 ). In several cases, the remnants of the eroding N. sylvestris pseudogene were also inherited and identifiable in the tobacco genome (e.g. SIP1;1 and PIP2;7 ; Fig. 5 ). Two gene loss events were recognised in N. tomentosiformis , with no representative NIP1;1 or NIP2;1 orthologs identified in either N.
tomentosiformis or tobacco (red star, Fig. 5 ). Five parental AQP genes have been lost in tobacco, three from N. tomentosiformis and two from N. sylvestris origins (green stars, Fig. 5 ). Instances of gene gains were also evident in both parental species prior to the tobacco hybridisation event (purple and orange stars, Fig. 5 ). These gained genes were distinct in the phylogenies as they did not uniquely match a specific Solanaceae gene ortholog, appearing instead as a duplicate copy of an existing AQP gene within the tobacco parental species (Additional file 2 ; Figure S4). Four AQP gene gain events occurred in N. sylvestris , two of which ( N.sylNIP3;1 and N.sylPIP1;2 ) began redundant gene erosion prior to tobacco hybridisation (purple stars, Fig. 5 ). The third, N.sylPIP2;11b , is retained as a functional unit in N. sylvestris but has eroded in tobacco; hence the designation ‘b’ as opposed to a unique numerical identifier. The fourth gene, N.sylPIP1;8 , has been retained in both N. sylvestris and tobacco as a functional gene (purple star, Fig. 5 ). A single gene duplication event was recognised in N. tomentosiformis , giving rise to PIP2;2 and PIP2;3 orthologs which were both inherited and subsequently retained as functional genes in tobacco (orange star, Fig. 5 ). Tobacco AQP gene expression The NtAQP transcriptome dataset To provide insight into possible physiological roles of the various NtAQP isoforms, publicly available whole transcriptome RNA-seq datasets [ 42 , 43 ] were processed and analysed to compare organ-specific expression patterns of the 76 tobacco AQPs. Although all datasets had great read depth (100–200 million paired reads per tissue), the Sierro et al. (2014) transcriptome of the TN90 cultivar was chosen for analysis, as it provided the most extensive sampling of different tissues at various developmental stages (young leaf, mature leaf, senescent leaf, stem, root, young flower, mature flower, senescent flower and dry capsules).
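Processing mapped RNA-seq reads into the transcripts-per-million (TPM) values used for the expression comparisons follows a standard two-step normalisation: divide counts by transcript length, then rescale so the values sum to one million. A minimal sketch with illustrative (not measured) counts and lengths:

```python
def tpm(counts, lengths_kb):
    """Convert raw read counts to transcripts per million (TPM).

    counts: gene -> mapped read count; lengths_kb: gene -> transcript length in kb.
    """
    # Length-normalised read rates (reads per kilobase).
    rates = {g: counts[g] / lengths_kb[g] for g in counts}
    # Rescale so the rates sum to one million across all genes.
    scale = 1e6 / sum(rates.values())
    return {g: r * scale for g, r in rates.items()}

# Hypothetical counts for three NtAQPs in one tissue.
counts = {"NtPIP1;5s": 9000, "NtTIP1;1s": 3000, "NtNIP4;1s": 60}
lengths = {"NtPIP1;5s": 1.5, "NtTIP1;1s": 1.0, "NtNIP4;1s": 1.2}
expr = tpm(counts, lengths)
print({g: round(v) for g, v in expr.items()})  # TPM values; sum to 1e6
```

Because TPM values sum to a fixed total within each sample, they are directly comparable across tissues with different sequencing depths, which is what makes the cross-organ heatmaps meaningful.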
Although the NtAQP sister genes are highly homologous in their nucleotide coding sequences (~ 96.5%), the SNPs that are present occur at a frequency and distribution enabling unique mapping of reads to differentiate between sister genes. In the TN90 dataset, we detected expression from 75 out of 76 NtAQPs, with only NtXIP1;4 t having no mapped mRNA reads. However, NtXIP1;4 t is an expressed gene, albeit at very low levels, as indicated by the low transcript abundance detected in the K326 cultivar (data not shown). To validate the accuracy of the NtAQP expression profiles, we compared it to RNA-seq data from N. sylvestris and N. tomentosiformis ; with the assumption that the majority of AQP orthologs will have retained similar expression profiles between these closely related species. The parental datasets are independently derived from those of the tobacco dataset, and sampled root, leaf and floral tissues at substantial read depths (~ 265 million paired reads per tissue) [ 64 ]. Relative transcript abundances were compared in two dimensions: (i) between AQPs within a given tissue and (ii) for a given AQP across tissues (Additional file 2 : Figure S6). Within equivalent tissues, the relative transcript abundance of N.sylAQP vs. NtAQPs and N.tomAQP vs. NtAQPt genes correlated well (R2 for root, leaf, flower: 0.91, 0.74, 0.98 and 0.65, 0.74, 0.80, respectively). Across tissues, the majority (> 80%) of NtAQPs and NtAQPt genes showed matching expression profiles to their respective parental orthologs (Additional file 2 : Figure S6). As expected, the relative transcript abundance between AQP sister genes within tobacco (i.e. NtAQPs vs. NtAQPt ) correlated better than orthologs between parental lines (i.e. N.sylAQP vs. N.tomAQP ) (Additional file 2 : Figure S6). Overall, the largely conserved patterns indicate that the tobacco transcriptome data provides a suitably accurate representation of the NtAQP transcriptome.
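The R2 values quoted above are squared Pearson correlation coefficients between expression profiles. A self-contained sketch of that computation, on hypothetical relative abundances rather than the study's actual data:

```python
from math import sqrt

def r_squared(xs, ys):
    """Coefficient of determination (squared Pearson r) between two profiles."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return (cov / (sx * sy)) ** 2

# Hypothetical relative abundances of four AQP orthologs in root tissue:
# N. sylvestris genes vs. the corresponding NtAQP 's' genes.
nsyl_root = [0.90, 0.40, 0.05, 0.70]
ntaqp_s_root = [0.85, 0.50, 0.10, 0.65]
print(round(r_squared(nsyl_root, ntaqp_s_root), 2))
```

Applying this per tissue (dimension i) and per gene across tissues (dimension ii) reproduces the two-dimensional comparison described in the text.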
Profiling the NtAQP transcriptome Among the NtAQP subfamilies, gene expression of PIPs and TIPs was generally greater than for SIPs, XIPs and NIPs (Fig. 6 a). Among the most highly expressed NtAQPs , PIP1;5 s , PIP1;5 t, PIP1;3 s and PIP1;3 t stood out as being constitutively expressed in all major plant organs, while TIP1;1 s and TIP1;1 t were present in all tissues except for the dry capsule (Fig. 6 a). Some highly expressed genes also showed a level of tissue specificity, with NIP4;1 s and NIP4;1 t expressed only in flowers, and TIP3;1 s, TIP3;1 t and TIP3;2 t predominantly in the flower capsule (Fig. 6 a). Fig. 6 Expression patterns of NtAQP genes in different tissues. a Absolute NtAQP gene expression. Heatmap of gene expression (transcripts per million) of NtAQPs across different tissues. Green shading represents higher expression, graduating to a light blue for lower expression, as per key. Included in the last column is the average gene expression across all tissues examined; red shading for high expression moving towards yellow for low expression, as per key. b Relative expression compared to the highest expressing tissue for the given NtAQP. Heatmap of tissue-specific gene expression with values standardised to the tissue showing the highest expression for that given NtAQP. Yellow indicates high expression graduating towards blue for low expressing tissue. c Comparison of expression patterns between AQP sister genes. Heatmap of significant fold change differences in expression ( p < 0.05) between sister genes across the different examined tissues. Blue indicates higher expression of the ‘s’ gene and orange higher expression of the ‘t’ gene Full size image To examine differential expression between plant organs, the expression levels of a given AQP were standardised relative to its highest expressing tissue (Fig. 6 b). AQPs with a broad expression distribution throughout the plant could be readily identified (e.g. SIP1;2 and PIP1;5 sister pairs, Fig.
6 b). Other AQPs show tissue-specific expression: young flowers ( PIP2;11 s & PIP2;11 t ; NIP2;1 s ), leaves ( PIP2;5 s & PIP2;5 t ; XIP1;6 s ; PIP2;1x ) or roots ( TIP1;2 , TIP1;3 , TIP2;2 , and TIP2;3 genes). At the sub-family scale, NtNIPs and NtTIPs are found to be preferentially expressed in roots, stems and flowers, with a low tendency for expression in leaves (Fig. 6 b). NtPIPs and NtSIPs are more broadly expressed, while there is no expression of NtXIPs in either the stem or dry capsule (Fig. 6 b). Within subfamilies we see gene members with specialised or preferential tissue expression. For example, some NtPIPs are preferentially expressed in the roots ( PIP1;1 s & PIP1;1 t ; PIP2;4 s & PIP2;4 t ), others express preferentially in leaves (e.g. PIP2;5 t & PIP2;1x ), while PIP2;11 s & PIP2;11 t have become specialised in young flowers (Fig. 6 b). Discrete tissue-specific specialisation was also observed for members of the other families, for instance, TIP3;1 and TIP3;2 genes express only in the dry capsule (seeds), and expression of NIP4;1 and NIP4;2 was only detected in flowers (Fig. 6 b). Next we compared differences in expression between sister genes to explore possible functional divergence. In general, sister gene pairs showed matching patterns of tissue-specific expression (Fig. 6 b). However, of the 31 proposed sister gene pairs, 18 showed notable differential expression levels in at least one tissue (Fig. 6 c). In the majority of these instances a single sister gene of the pair was more highly expressed in several plant organs. Examples include, NIP5;1 s, SIP2;1 t, SIP1;2 t, PIP2;6 t, PIP2;4 s, PIP1;3 t and PIP1;1 s . There were also several instances of contrasting expression where sister genes show distinctions in preferential expression between plant organs. For example, TIP3;1 s showed 4-fold higher expression in the capsule than its sister pair TIP3;1 t , which was expressed > 10-fold higher in roots (Fig. 6 c).
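The per-tissue sister-gene contrasts of this kind are naturally expressed as log2 fold changes between the 's' and 't' copies. A minimal sketch, with TPM values that loosely echo the TIP3;1 contrast just described but are illustrative only:

```python
from math import log2

def sister_log2fc(expr_s, expr_t, pseudocount=1.0):
    """Per-tissue log2 fold change between 's' and 't' sister genes.

    Positive values -> higher 's' expression; negative -> higher 't'.
    A pseudocount avoids division by zero for unexpressed tissues.
    """
    return {tissue: log2((expr_s[tissue] + pseudocount) /
                         (expr_t[tissue] + pseudocount))
            for tissue in expr_s}

# Hypothetical TPM values for a TIP3;1-like sister pair.
tip3_1_s = {"capsule": 400.0, "root": 3.0}
tip3_1_t = {"capsule": 100.0, "root": 40.0}
fc = sister_log2fc(tip3_1_s, tip3_1_t)
print({t: round(v, 2) for t, v in fc.items()})
```

Thresholding these values (together with a significance test, as in Fig. 6 c) flags the tissues where one sister gene dominates.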
Further examples of contrasting expression include, NtPIP2;5 t (leaves) against NtPIP2;5 s (roots) and NtNIP6;1 s (leaves and dry capsule) against NtNIP6;1 t (roots) (Fig. 6 c). Conservation with other Solanaceae species As a means of exploring conservation in biological activities and physiological functions between AQP orthologs of different species, we compared tissue-specific expression levels of NtAQPs with their orthologs from the closely related tomato and potato species. This was done by comparing the relative gene expression across root, leaf and floral tissues of AQP genes we have identified as being orthologs between the Solanaceae species (e.g. NtPIP1;1 s & NtPIP1;1 t in tobacco, SlPIP1;1 in tomato and StPIP1;2 in potato; listed Additional file 1 : Table S2). We were able to perform this analysis on the PIPs, TIPs, NIPs and SIPs but not the XIPs given the previously mentioned difficulty of assigning orthology between the species. Even randomised pairwise comparisons of expression patterns between NtXIPs and those of tomato and potato could not find consensus patterns, hinting further towards the unique intra-species diversification of XIPs within the Solanaceae (Additional file 2 : Figure S7). In the majority of instances (25 of 36 Solanaceae AQP ortholog sets), the tobacco sister genes had patterns of relative expression across the three organs similar to their orthologs from both tomato and potato, implying conserved physiological roles for the orthologs across the Solanaceae family (e.g. NIP1;1 , NIP3;1 , NIP4;2 , PIP2;6 , PIP2;9 , PIP2;11 , TIP5;1 , and SIP1;1 ; Fig. 7 ). Some deviations in tissue-specific expression patterns were observed between orthologs, suggesting possible species-specific functional diversification. The predominant observed deviations were instances where either the tobacco, tomato or potato AQP differed in their tissue-specific expression pattern compared to the orthologs from the other Solanaceae.
Examples include; the tobacco NtNIP5;1 , NtPIP1;2, TIP1;1 ; the tomato SlPIP2;8 , SlTIP2;1 , SlTIP3;1, and SlTIP3;2 genes; and the potato StPIP1;2 ( NtPIP1;1 ortholog), StTIP1;2, StTIP1;1 ( NtTIP1;3 ortholog) and StTIP2;4 ( NtTIP2;3 ortholog) genes (Fig. 7 ). Additionally, we observed one case where a NtAQP sister gene ( NtPIP2;5 s ) differed in expression from the tomato, potato and its NtAQP “t” sister gene; suggesting a potential diversification in gene function within tobacco. More complex deviations were also observed involving tobacco sister genes having contrasting expression to each other that matched a similar contrast in expression between the tomato and potato orthologs (e.g. NtPIP2;1 and NtNIP6;1 sister genes; Fig. 7 ). Fig. 7 Tissue-specific gene expression patterns of AQP isoforms in tobacco, tomato and potato. Graphs contain relative gene expression (standardised to highest expressing tissue) across root, leaf and flower tissues for tobacco sister genes (light and dark blue) and their corresponding tomato (red) and potato (brown) orthologs as listed in Additional File 1 : Table S2 Full size image Discussion The growing amount of research into AQPs is greatly advancing our understanding of their diversity and functional roles, towards manipulating them to potentially enhance plant performance and resilience to environmental stresses [ 5 , 29 , 31 , 65 , 66 ]. The establishment of the tobacco AQP gene family allowed us to efficiently contribute to the current knowledge of AQP biology by comparing regions of homology within and across closely related species, analysing pore-lining residues, identifying key structural characteristics, and providing necessary information and candidates for future functional screens.
Furthermore, elucidating orthology between the already characterised tomato [ 40 ] and potato [ 41 ] AQPs enables comparisons between isoforms across these Solanaceae species, which will facilitate the translation of knowledge from tobacco into its closely related and horticulturally important crop species. NtAQP protein sequence analysis and associations with AQP function We found that the tobacco AQP family comprises 76 members, making it one of the largest AQP families characterised to date; second only to the polyploid canola ( Brassica napus ) with 121 members [ 14 , 15 ]. The 76 NtAQPs include members of each of the five major AQP subfamilies common to angiosperms (i.e. NIPs, PIPs, TIPs, SIPs, and XIPs). Correctly defining and analysing NtAQP protein structures, sequence homology, and functionally relevant residues helps towards predicting potential permeating substrates, post-translational regulation, and subcellular localisations. AQP monomers have a highly conserved structure, with transmembrane (TM) segments providing a structural scaffold and defining the channel environment, with the connecting loops also having significant roles in channel function [ 45 ]. We found a high conservation in length and sequence identity of the NtAQP TM domains; their variability likely constrained to maintain structural integrity of the AQP monomer [ 67 ]. Additionally, conservation of critical residues in TM domains is essential for tetramer formation, with modifications leading to aberrant AQP oligomerisation [ 68 ]. NtAQP loops and termini had notable differences in lengths and lower sequence conservation across subfamilies; such variation has implications for AQP monomer interactions, pore accessibility and cellular membrane destinations [ 54 , 69 ]. AQP solute selectivity is conferred through specific structural features of the AQP monomer’s pore, and substrate interactions with pore-lining residues.
We surveyed known specificity-determining residues across the NtAQPs, including the aromatic arginine (ar/R) filter, NPA domains, and Froger’s positions [ 7 , 8 , 11 , 70 ]. We observed an increased sub-family conservation in the loops harbouring these specificity-determining residues, in particular Loops B and E, which have a direct role in forming the transmembrane pore. Each subfamily had its own characteristic combination of amino acids at these locations, concordant with known subfamily substrate specificities. For example, NtPIPs have more polar residues in their ar/R filter which is consistent with PIPs in general having the propensity to permeate water, whereas the NtNIPs have more hydrophobic amino acids in their ar/R filter, consistent with their poorer water permeability and preference for substrates such as ammonia, urea and metalloids instead [ 7 , 11 ]. Additional to the specificity-determining pore lining residues, post-translational modification of specific residues (e.g. through protonation or phosphorylation) also directly or indirectly determines the transport mechanics of the AQP monomer [ 71 ]. Plants rely on these secondary mechanisms to ensure tight regulation of AQPs, especially in response to stresses. Gating of the monomeric pore in response to external stimuli is a key control over AQP function. Among currently characterised residues involved in gating (listed in Table 2 ), we found subfamily-specific conservation across the NtAQPs. For example, all NtPIPs had the Loop D Histidine (His193) which is highly conserved across all plant PIPs, and can be protonated in response to changes in cytosolic pH (e.g. flooding-induced hypoxia), leading to the closure of the PIP pores [ 47 ]. pH-regulated responses are important for AQP function, as is the C-terminal tail of the PIP proteins [ 71 ].
These facts drew our attention to the identified Lysine/Arginine > Histidine substitution in the C-terminal tails of NtPIP1;5 and NtPIP2;11 (Additional file 2 : Figure S5). The normally positively charged Lysine/Arginine residue present in all other NtPIPs, and highly conserved across plant PIPs in general, directly precedes a functionally important phosphorylated serine. Together this suggests a likely functional relevance of a positively charged residue at this position in PIP regulation. The Histidine present at the equivalent position in NtPIP1;5 and NtPIP2;11 can still obtain the conserved positive charge upon protonation, implying a possible novel pH control over the regulatory influences normally imposed by the PIP C-terminal tail. Some sharing of gating mechanisms between NtAQPs from different subfamilies can be inferred from our analysis. For example, the Loop B serine (Ser155) which in PIPs is involved in phosphorylation dependent disruption of N-terminal tail gating [ 45 , 46 ], is conserved in some members of the other NtAQP subfamilies. NtPIPs and NtNIPs both seem to be regulated by phosphorylation in their C-terminal tails, given the abundance of serine residues. The phosphorylation state of the C-terminal tail is known to regulate channel activity and also control trafficking to the plasma membrane [ 46 , 72 ]. Interestingly, the NtTIPs had a dearth of serine residues in the C-terminal tail, suggesting a lack of a C-terminal phosphorylation-dependent regulation mechanism. This is perhaps due to differing functional requirements for integration into the vacuole membrane, versus the plasma membrane integration of PIPs and NIPs.
Consistent with differing regulatory requirements, we found that NtTIP2 and NtTIP5 proteins possessed a conserved histidine (His131) in loop C that is involved in a similar pH-regulated gating of the pore to that of His193 in Loop D of PIPs and NIPs [ 52 , 53 ] (erroneously reported as located in loop D of VvTnTIP2;1 in Leitão et al., 2012). However, unlike the cytosolic PIP/NIP Loop D His193, the TIP Loop C His131 is likely orientated into the vacuole and thus responding to the vacuole contents and environment. Other NtAQP structural features of note include: the longer Loop D of PIPs compared to the other subfamilies which aids in its ability to cap the pore entrance [ 45 ]; the substantially longer Loop A of PIPs compared to the other NtAQPs, known to play a role in tetramer formation by mediating disulphide bonds between PIP1 and PIP2 isoforms [ 48 ]; the long N- and C-terminal tails of NtNIPs, important for protein regulation, trafficking, and protein-protein interactions [ 73 ]; the distinctly short N-terminal tail of SIPs associated with their intracellular destination into the ER [ 74 ]; the long Loop C of NtXIPs, characteristically enriched with flexible glycine residues allowing it to tuck into the channel opening and interact with selectivity filter residues and permeating solutes [ 54 , 75 ]. NtAQP subcellular localisation Determining AQP subcellular localisations can help elucidate physiological roles within the plant. For instance, integration into the plasma membrane indicates solute transport in and out of the cell; localisation to the tonoplast implies a role in vacuole storage; and retention in the ER membranes suggests coordination of substrate and nutrient shuttling between plant membranes [ 28 , 56 , 74 , 76 , 77 ]. We utilised sub-cellular localisation prediction software commonly used for fast in silico predictions of AQP isoform membrane integration.
These programs incorporate known sorting signals, amino acid composition and functional domains to generate results [ 55 ]. The three prediction tools (Plant-mPLoc, WoLF PSORT and YLoc) generally agreed that PIPs, NIPs and XIPs predominantly localise to the PM; all of the Plant-mPLoc and some of the WoLF PSORT outputs predicted tonoplast localisation for the TIPs; and the SIP localisations were quite varied. Although these predictions are a useful beginning, it should be noted that we only found a 46% consensus in the predicted AQP subcellular localisations between the three software tools. The discrepancies highlight the complexity of AQP membrane integration processes and our current limited understanding of AQP trafficking motifs [ 69 ]. GFP:NtAQP fusions and crucially a set of established subcellular GFP marker lines, allowed us to directly visualise and confidently determine in planta sub-cellular localisation of representative NtAQPs. The representative PIP ( NtPIP2;5 t ), NIP ( NtNIP2;1 s ) and TIP ( NtTIP1;1 s ) NtAQPs had distinct sub-cellular localisations, consistent with what is known about these AQP subfamilies in other plants [ 28 ]. Concordant with studies of these subfamilies in other species [ 22 , 78 ], we found that the NtPIP and NtTIP localised to the plasma membrane and tonoplast, respectively. NtNIP2;1 co-localised to the PM and ER; this was not captured by the prediction software, which reported only PM integration. This sub-optimal PM targeting could limit the functional capacity of NtNIP2;1 and its subsequent physiological role (see discussion). Nicotiana AQP gene evolution Tobacco recently descended from an allotetraploid hybridisation event between N. sylvestris and N. tomentosiformis , which are distantly related within the Nicotiana genus [ 79 ]. Genome downsizing is a widespread biological response to polyploidization, eventually leading to diploidization [ 80 ].
However, due to the short evolutionary time frame since its inception (0.2 M years), tobacco has undergone a limited amount of genome downsizing. As a result, the NtAQP family is characteristically comprised of sister gene pairs, which we could assign to their given parental origins. Tobacco has lost only around 10% of its duplicated genes with no observed preferential gene loss from either parent [ 43 ]. Concordant with this estimation, 7 gene loss events (~ 8.6% of total inherited parental AQPs) were identified in tobacco, with 3 and 4 of these being redundant ortholog losses from the N. sylvestris and N. tomentosiformis genomes, respectively. According to our expression analysis, the NtAQP gene copies inherited from both N. sylvestris and N. tomentosiformis (‘s’ and ‘t’ genes, respectively), were overall equally expressed, which agrees with broader genomic studies on tobacco [ 43 ]. The redundancy of the homeologs presumably would allow for one of the sister genes to accumulate mutations without an immediate effect on fitness, most often leading to non-functionalisation (gene-loss), or in some instances sub-functionalisation or even neo-functionalisation. To this end, we observed instances where one AQP gene of a sister pair was consistently preferentially expressed throughout several plant organs (e.g. PIP1;1 s, PIP1;3 t, SIP2;1 t and NIP5;1 s ); suggesting that the redundant lower-expressing sister gene could become non-functional over time. Alternatively, some sister genes showed distinct tissue-specific diversification, such as the NtPIP2;5 gene pair, where the s- and t-genes were more highly expressed in the roots and leaves, respectively, and which may be candidates for sub- or even neo-functionalisation. We were able to identify several AQP gene gain and loss events between the parents since their divergence within the Nicotiana genus, ~ 15 Ma [ 64 ]. Both the N.
sylvestris and N. tomentosiformis genomes are rich in repeat expansions (accumulated transposable elements), making them nearly 3 times the size of those of tomato and potato (2.6 Gb vs. 0.9 Gb) [ 64 , 81 , 82 ]. Regardless of the discrepancy in genome size, there was close conservation of AQP ortholog numbers within these diploid Solanaceae species, with the PIPs and TIPs consistently being the larger subfamilies. We saw considerable diversity in the XIPs of the Solanum (tomato and potato) and Nicotiana species. This diversity manifested as discrepancies in isoform numbers between the species and as lower sequence identity, depicted in the phylogeny as a separation of tomato, potato and Nicotiana isoforms into distinct groups. XIPs are a more recently characterised AQP subfamily, with isoforms lacking in monocots and in Brassicaceae, and having a lower overall sequence identity compared to other AQP subfamilies [ 17 ]. The tomato and potato XIPs are predominantly found clustered on a single chromosome, indicating that recent segmental gene duplications within tomato and potato likely explain the lack of direct gene orthology to tobacco XIPs [ 83 ]. Gene expression analysis The NtAQP transcriptome was found to be largely conserved with those of its parental species, consistent with its recent evolutionary emergence. We also noted that the expression profiles of AQP sister genes within tobacco correlated better than the expression patterns of the orthologous AQPs between the parental lines. Such improved homogeneity in expression patterns is a common outcome of hybridisation events, as both genomes (e.g. the ‘s’ and ‘t’ AQP genes) become subject to the same regulatory network [ 84 , 85 ]. Within tobacco, our NtAQP gene expression analysis revealed a wide range of patterns across tissue types, consistent with the known diversity of AQP functions [ 4 ]. It revealed that some AQPs had high levels across numerous tissues throughout the plant (e.g. 
PIP1;3 t and the PIP1;5 and TIP1;1 sister pairs), implicating involvement in broad-spanning processes (e.g. substrate transport from roots to shoots to flowers), while others had highly organ-specific expression (e.g. TIP1;3 , NIP4;1 and TIP3;1 sister genes, in roots, flowers and seed capsules, respectively). In general, the XIPs and the majority of NIPs had lower overall expression levels, although it is possible that their expression changes in response to a specific stimulus, or that they are expressed at similar levels but in very specific cell types making up a small proportion of the total tissue sampled for RNA-seq. Tissue-specific expression patterns can help towards assigning physiological roles for the NtAQPs. We observed general trends between the AQP subfamilies. The NtXIPs were observed to have low but ubiquitous expression throughout the plant, and have previously been reported to permeate bulky solutes such as urea and boric acid, but not water. Little is known about XIP physiological roles, but their unique transport capacity and rapid evolutionary diversification, even just within the Solanaceae, implies a role in environmental adaptive responses. The tobacco PIPs appeared to have more isoforms with leaf-specific expression compared to the other subfamilies. These are likely to be involved in roles typically reported for PIPs across plant species, including leaf cell expansion, leaf movement, mediating water exiting the xylem, control of stomatal aperture and gas transport (e.g. CO 2 ) for photosynthesis [ 86 , 87 , 88 ]. Several PIPs have targeted expression in flowers ( PIP1;7 t , PIP1;8 s , PIP2;2 t , PIP2;3 t , and the PIP2;8 , PIP2;9 , PIP2;11 and PIP2;13 sister pairs), some of which would be involved in mediating water supply during stigma, anther and petal development [ 89 , 90 ]. Much like the PIPs, several isoforms within the NIPs ( NIP4;3 s and the NIP4;1 and NIP4;2 sister genes) and TIPs ( TIP5;1 sister genes) had targeted expression in the flower. 
The tissue-specificity of these NtNIPs and NtTIPs is consistent with the floral tissue localisation of Arabidopsis NIP4;1 , NIP4;2 and TIP5;1 , which have known roles in pollen development and pollen germination [ 53 , 91 ]. Additionally, we identified NtTIP3;1 and NtTIP3;2 as being exclusively expressed in the seed capsule. This is consistent with the seed-specific expression of their orthologs in other species [ 92 , 93 , 94 ], where they accumulate in mature embryos and later function in water uptake during seed imbibition and germination [ 94 , 95 , 96 ]. The consistent expression pattern between species implies functional conservation, meaning that NtNIP4;1 , NtNIP4;2 and NtTIP5;1 likely fulfil roles in different aspects of tobacco pollen biology, and NtTIP3;1 and NtTIP3;2 are expected to aid tobacco seed germination. Several PIP and TIP isoforms were found with exclusive or preferential expression in the roots (e.g. PIP1;1 , PIP2;4 , PIP2;5 s , PIP2;6 , TIP1;2 , TIP1;3 , TIP2;5 and TIP2;2 s), where they could function in lateral root emergence [ 97 , 98 ], regulation of cell water uptake and homeostasis [ 33 ], or nutrient absorption through ammonium loading into vacuoles [ 99 , 100 ]. The latter possible role of ammonium loading is especially pertinent to the two NtTIP2 proteins listed, which have a histidine residue in the ar/R LC position characteristic of ammonia-transporting TIPs [ 51 ]. The putative roles put forward for the various NtAQPs above could equally apply to many of the tomato and potato AQPs, and vice versa, given the general family-wide conservation in tissue-specific expression patterns between these three Solanaceae species. The generally high conservation in expression patterns between Solanaceae AQP orthologs supports the accuracy of our NtAQP orthology assignments, which were based on protein sequence homology. 
The similarity at both the protein and transcript levels strongly implies functional conservation for many of the AQP orthologs across these Solanaceae species. Knowledge of the extent of such conservation is valuable, as it can help facilitate translation of findings across Solanaceae species for traits of agronomic importance and help direct engineering efforts. Deviations are also interesting (of which we observed several), as they hint at potential novel species-specific functions, or help explain physiological differences between species. For example, NIP2;1 is a unique NIP with a distinct GSGR ar/R filter motif and a precise loop C spacing between NPA motifs, allowing it to permeate silicon and aid its transport from root to shoot in a number of high silicon-accumulating species [ 101 , 102 ]. However, Solanaceae species are considered poor silicon accumulators [ 101 , 102 ], which matches an apparent deterioration of the NIP2;1 lineage in Solanaceae seen in our cross-species comparisons: NIP2;1 was lost in N. tomentosiformis prior to tobacco hybridisation, with a consequent absence of a NtNIP2;1 t gene; both N.sylNIP2;1 and NtNIP2;1 s have an unfavourable loop C length for silicon transport, as does SlNIP2;1; potato does not possess a NIP2;1 ; the different expression patterns of NtNIP2;1 and SlNIP2;1 hint at diverging roles; NtNIP2;1:GFP is poorly localised to the PM, likely limiting functional capacity; and no other NtNIP has a GSGR ar/R filter configuration to provide redundancy. Conclusions We determined that the tobacco AQP family consists of 76 members divided into five subfamilies, each with subtle characteristic variations in protein structures, pore-lining residues, and post-translational regulatory mechanisms. Characterisation of key residues and regions broadens our knowledge of AQP biology by guiding future functional studies to help identify substrate specificity residue combinations. 
The annotation of putative post-translational regulatory sites supports current knowledge of AQP regulation not only within the more widely studied PIP subfamily, but also across the TIP, NIP, SIP and XIP sub-groups. Members of the different NtAQP subfamilies were found to localise to specific sub-cellular membranes, which collectively contribute to a dynamic and extensive transport system. These subcellular profiles help towards elucidating physiological roles, with, for example, PM-localising NtAQPs likely facilitating diffusion of solutes into and out of cells, and tonoplast-localising isoforms helping with intracellular distribution of solutes. Tobacco is a recent allotetraploid, which accounts for its large AQP family size and the characteristic phylogenetic pairing of sister genes inherited and retained from its parents, Nicotiana sylvestris and Nicotiana tomentosiformis. By establishing the heritage of NtAQP sister genes we were able to reconstruct the recent evolutionary history of the NtAQP family, which contributes to establishing potential functional homology of candidate genes. Expression analysis of the NtAQPs revealed diverse tissue-specificities, consistent with the broad-spanning physiological functions of AQPs. Some NtAQPs were expressed widely, while others showed specialised or strong preferential expression within a single tissue. We found that the expression specificity of a number of NtAQPs resembled that of orthologous AQPs with established physiological roles in other species, allowing us to assign putative homologous functions in tobacco. The conservation in AQP protein structure and gene expression patterns with other Solanaceae species was high, which will facilitate the translation of knowledge from tobacco into closely related and horticulturally important crops. 
Methods Identification of tobacco, N. sylvestris and N. tomentosiformis AQPs The tobacco genome and the protein sequences for the TN90 [ 42 ] and K326-Nitab4.5v [ 43 ] cultivars were obtained from the Solanaceae Genomics Network [ 103 ] and imported into the Geneious (V9.1.5) software [ 104 ]. To comprehensively identify putative aquaporin genes in tobacco, multiple BLASTP searches were performed against the TN90 tobacco predicted proteome, using each of the potato ( Solanum tuberosum ) and tomato ( Solanum lycopersicum ) aquaporin protein sequences as queries. From each individual homology search, the top 3–5 matches were compiled as putative NtAQPs, with the list being consolidated at the end of the search routine. A similar process was used to identify AQPs in N. sylvestris and N. tomentosiformis (the tobacco parental genomes); however, tobacco aquaporin coding sequences were used in BLASTN queries. Sequence alignments were conducted using MUSCLE [ 105 ]. Whole-family and sub-family sequence alignments were used to flag aberrant AQP protein sequences for closer inspection. Phylogenetic analysis and classification of tobacco, N. sylvestris and N. tomentosiformis AQPs MUSCLE-aligned nucleotide or protein sequences were used to construct phylogenetic trees using the neighbour-joining (NJ) method (pair-wise deletion; bootstrap = 1000) in MEGA7 software [ 106 ]. The tobacco AQP naming convention was based on homology to that of the tomato AQPs. N. sylvestris and N. tomentosiformis AQP gene names were assigned based on homology to tobacco AQPs. Structural features of tobacco AQPs The tobacco aquaporin intron/exon structures were identified by aligning CDS and genomic sequences. Comparisons of gene sequences (computed and our curations) and RNA-seq data were visualised through JBrowse. The topologies of the curated NtAQPs were defined using TOPCONS [ 107 ]. The complement of known functionally relevant residues was collected from MUSCLE-aligned NtAQP protein sequences. 
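The iterative top-hit compilation described in the identification step can be sketched as follows, assuming the BLASTP searches have already been run and parsed into (query, subject, bitscore) rows; all sequence identifiers below are hypothetical.

```python
# Sketch of the putative-AQP compilation step: for each query AQP,
# keep the top-k proteome hits by bitscore, then consolidate into a
# non-redundant candidate list. Input mimics BLAST tabular output
# (qseqid, sseqid, bitscore); identifiers are hypothetical.

from collections import defaultdict

def top_hits(blast_rows, k=5):
    by_query = defaultdict(list)
    for qseqid, sseqid, bitscore in blast_rows:
        by_query[qseqid].append((float(bitscore), sseqid))
    candidates = set()
    for hits in by_query.values():
        hits.sort(reverse=True)            # best bitscore first
        candidates.update(s for _, s in hits[:k])
    return sorted(candidates)              # consolidated, non-redundant list

rows = [
    ("SlPIP1;1", "Nitab_gene_A", 410.0),
    ("SlPIP1;1", "Nitab_gene_B", 395.0),
    ("StPIP1;2", "Nitab_gene_A", 402.0),   # shared hit, kept once
    ("StPIP1;2", "Nitab_gene_C", 120.0),
]
print(top_hits(rows, k=2))  # -> ['Nitab_gene_A', 'Nitab_gene_B', 'Nitab_gene_C']
```

Consolidating via a set mirrors the paper's end-of-routine deduplication, since many query AQPs recover the same tobacco genes.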
Alignment statistics (e.g. % sequence identity and similarity using the BLOSUM62 matrix) were collected from MUSCLE-aligned sequences of individual subfamilies. Phosphorylation sites were predicted using NetPhos 3.1 (prediction score ≥ 0.8) [ 108 ]. Subcellular localisation predictions were obtained using YLoc [ 55 ], Wolf PSort [ 109 ] and Plant-mPloc [ 110 ]. Subcellular localisation in planta (Arabidopsis) Tobacco AQP GFP fusion constructs were generated via Gateway cloning of TIP ( NtTIP1;1 s ), PIP ( NtPIP2;5 t ) and NIP ( NtNIP5;1 t ) coding sequences from pZeo entry vectors into the pMDC43 destination vector [ 111 ], which produced N-terminal GFP:NtAQP fusion proteins driven by the constitutive 2x35S CaMV promoter. Arabidopsis transgenic lines were generated via the Agrobacterium (GV3101) floral dip plant transformation method (Clough and Bent 1998). The GFP marker line (MG0100.15) used as a cytosolic localisation marker was generated in our lab via Gateway cloning of the mGFP6 variant of GFP, contained as a pZeo entry clone, into the pMDC32 destination vector [ 111 ], which drives constitutive expression of the mGFP6 transgene via the 2x35S CaMV promoter. The PM:GFP line was also generated in our lab, built in the pMDC83 Gateway destination vector and consisting of the Arabidopsis PIP2;1 (an already established PM marker [ 63 ]) with a mGFP6 C-terminal fusion, all driven by the 2x35S CaMV promoter. Arabidopsis seeds were liquid-sterilised using hypochlorite, washed several times and sown on Gamborg’s B5 medium containing 0.8% agar and the antibiotic hygromycin for selection of transformants. After 8 days of growth, Arabidopsis seedlings were gently removed from the agar, mounted in phosphate buffer (100 mM NaPO 4 , pH 7.2) on a standard slide, covered with a coverslip, and visualised with a Zeiss LSM 780 confocal microscope using a 40x water-immersion objective (1.2 NA). 
Light micrographs of cortical cells in the root elongation zone were visualised using Differential Interference Contrast (DIC), with GFP fluorescence captured using excitation at 488 nm and emission detection across the 490–526 nm range. Autofluorescence was detected in the 570–674 nm range and excluded from the GFP detection channel. Images were processed using the Fiji (ImageJ) program [ 112 ]. AQP gene expression analysis Transcript expression of the identified aquaporins was extracted from published, publicly available datasets via two avenues: (1) mining of processed transcript expression matrices, and (2) analyses of raw RNA-Seq reads uploaded to the GenBank Sequence Read Archive (SRA). Processed transcript expression of N. tabacum K326 [ 43 ] was extracted from The Sol Genomics Network [ 103 ]. Data was extracted as transcripts per million (TPM) and so was mined without further processing. This dataset contained tissue-specific expression of the leaf and root. Raw RNA-Seq reads from both N. tabacum K326 and TN90 [ 42 ] were downloaded from the GenBank SRA (TN90: SRP029183; K326: SRP029184) via command line into paired-end fastq files. Read libraries were tissue-specific, from either the leaf, root, young leaf, young flower, mature leaf, mature flower, senescent leaf, senescent flower or dry capsule. On average, each tissue was represented by an RNA-seq library of ~ 110 million paired reads. The raw reads were processed using Trimmomatic [ 113 ] to remove adapter sequences. Processed reads were aligned to the N. tabacum genome, either K326 [ 43 ] or TN90 [ 42 ], using the quasi-mapping mode within Salmon [ 114 ] invoking a k-mer length of 31, with relative abundance reported as transcripts per million (TPM). Mapping rates for the K326 and TN90 transcriptomes were between 73 and 78%, and 89–94%, respectively. Raw RNA-seq reads for the parental genomes of N. sylvestris and N. tomentosiformis were obtained from [ 64 ]. 
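Salmon reports abundances in TPM; as a reminder of what that unit means, here is a minimal sketch of the TPM normalisation itself, using hypothetical read counts and effective transcript lengths (not the study's data).

```python
# Minimal sketch of transcripts-per-million (TPM) normalisation, the
# abundance unit reported by Salmon: reads-per-kilobase values are
# rescaled so they sum to one million per library.

def tpm(counts, lengths_kb):
    rpk = [c / l for c, l in zip(counts, lengths_kb)]  # reads per kilobase
    scale = sum(rpk) / 1e6                             # per-million scaling
    return [r / scale for r in rpk]

counts = [1000, 500, 1500]       # hypothetical mapped reads per transcript
lengths = [2.0, 1.0, 3.0]        # hypothetical effective lengths in kb
vals = tpm(counts, lengths)
print([round(v) for v in vals])  # -> [333333, 333333, 333333]; sums to 1e6
```

Because TPM values sum to a fixed total within each library, they are directly comparable across the tissue-specific libraries mined here.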
RNA-seq libraries were derived from root, leaf, and flower tissues, with an average of 265 million paired reads for each tissue type. Reads were processed as above and mapped to the N. sylvestris and N. tomentosiformis genomes obtained from [ 64 ]. Tomato and potato root, leaf and flower expression data was retrieved through the EMBL-EBI Expression Atlas, and originally published by [ 115 ] and [ 116 ]. Availability of data and materials We declare that the dataset(s) supporting the conclusions of this article are included within the article and its additional file(s). All of our curated aquaporin CDS nucleotide and protein sequence data reported for Nicotiana tabacum , N. sylvestris and N. tomentosiformis is available in the Third-Party Annotation Section of the DDBJ/ENA/GenBank databases under the accession numbers TPA: BK011376-BK011532; BankIt2254789. Abbreviations AQP: Aquaporin ar/R: Aromatic/Arginine CDS: Coding DNA sequence ER: Endoplasmic reticulum MIP: Major intrinsic protein NIP: Nodulin26-like intrinsic protein NPA: Asparagine-proline-alanine N. syl: Nicotiana sylvestris N. tom: Nicotiana tomentosiformis Nt: Nicotiana tabacum PIP: Plasma membrane intrinsic protein PM: Plasma membrane Sl: Solanum lycopersicum St: Solanum tuberosum TIP: Tonoplast intrinsic protein TM: Transmembrane domain Tono: Tonoplast XIP: X intrinsic protein
Scientists have shed new light on how the network of gatekeepers that controls the traffic in and out of plant cells works, which researchers believe is key to developing food crops with bigger yields and greater ability to cope with extreme environments. Everything that a plant needs to grow first needs to pass through its cells' membranes, which are guarded by a sieve of microscopic pores called aquaporins. "Aquaporins (AQPs) are ancient channel proteins that are found in most organisms, from bacteria to humans. In plants, they are vital for numerous plant processes including water transport, growth and development, stress responses, root nutrient uptake, and photosynthesis," says former Ph.D. student Annamaria De Rosa from the ARC Centre of Excellence for Translational Photosynthesis (CoETP) at The Australian National University (ANU). "We know that if we are able to manipulate aquaporins, it will open numerous useful applications for agriculture, including improving crop productivity, but first we need to know more about their diversity, evolutionary history and the many functional roles they have inside the plant," Ms De Rosa says. Their research, published this week in the journal BMC Plant Biology, did just that. They identified all the different types of aquaporins found in tobacco (Nicotiana tabacum), a model plant species closely related to major economic crops such as tomato, potato, eggplant and capsicum. "We described 76 types of these microscopic hour-glass-shaped channels based on their gene structures, protein composition, location in the plant cell and in the different organs of the plant, and their evolutionary origin. These results are extremely important as they will help us to transfer basic research to applied agriculture," says Ms De Rosa, whose Ph.D. project focused on aquaporins. 
"The Centre (CoETP) is really interested in understanding aquaporins because we believe they are a key player in energy conversion through photosynthesis and also control how a plant uses water. That is why we think we can use aquaporins to enhance plant performance and crop resilience to environmental changes," says lead researcher Dr. Michael Groszmann from the Research School of Biology and the CoETP at ANU. Aquaporins are found everywhere in the plant, from the roots to flowers, transporting very different molecules in each location, at an astonishing 100 million molecules per second. The configuration of an aquaporin channel determines the substrate it transports and therefore its function, from the transport of water and nutrients from roots to shoots, to stress signaling or seed development. "We focused on tobacco because it is a fast-growing model species that allows us to scale from the lab to the field, enabling us to evaluate performance in real-world scenarios. Tobacco is closely related to several important commercial crops, which means we can easily transfer the knowledge we obtain in tobacco to species like tomato and potato. Tobacco itself has its own commercial applications and there is a renewed interest in the biofuel and plant-based pharmaceutical sectors," he says. "This research is extremely exciting because the diversity of aquaporins, in terms of their function and the substrates they transport, means they have many potential applications for crop improvement ranging from improved salt tolerance, more efficient fertilizer use, improved drought tolerance, and even more effective response to disease infection. They are currently being used in water filtration systems and our results could help to expand these applications. The future of aquaporins is full of possibilities," says Dr. Groszmann.
10.1186/s12870-020-02412-5
Nano
2-D materials boost carrier multiplication
Ji-Hee Kim et al, Carrier multiplication in van der Waals layered transition metal dichalcogenides, Nature Communications (2019). DOI: 10.1038/s41467-019-13325-9 Journal information: Nature Communications
http://dx.doi.org/10.1038/s41467-019-13325-9
https://phys.org/news/2019-12-d-materials-boost-carrier-multiplication.html
Abstract Carrier multiplication (CM) is a process in which high-energy free carriers relax by generating additional electron-hole pairs rather than by dissipating heat. CM promises disruptive improvements in photovoltaic energy conversion and light detection technologies. Current state-of-the-art nanomaterials including quantum dots and carbon nanotubes have demonstrated CM, but are not satisfactory owing to high energy loss and inherent difficulties with carrier extraction. Here, we report CM in van der Waals (vdW) MoTe 2 and WSe 2 films, and find favourable characteristics: CM commences close to the energy conservation limit and reaches up to 99% conversion efficiency under the standard model. This is demonstrated by ultrafast optical spectroscopy with three independent approaches: photo-induced absorption, photo-induced bleach, and carrier population dynamics. Combined with a high lateral conductivity and an optimal bandgap below 1 eV, these superior CM characteristics identify vdW materials as attractive candidates for highly efficient and mechanically flexible solar cells in the future. Introduction Atomically thin van der Waals (vdW) layered materials are intensively investigated owing to their fascinating properties. These include exceptional mechanical flexibility and a large range of available band structures and band gaps, from true insulators (hexagonal boron nitride) to semiconductors (transition metal dichalcogenides, TMDs) and metals (graphene). TMD-based vdW-layered materials can also have excellent optoelectronic properties, including high in-plane charge carrier mobility. In addition, the very weak vdW bonding between individual layers allows for easy formation of multi-layered structures using the same or different materials, as lattice matching is not important and a heterostructure with a near-perfect interface may be formed. 
This new dimension of nanomaterial engineering offers the possibility of a new generation of highly efficient and mechanically flexible optoelectronics. These include thin-film, mechanically flexible solar cells, owing to band gap tunability by composition and layer thickness, high mobility, and the possibility of an ultrahigh internal radiative efficiency of >99% 1 , promoted by good surface passivation and large exciton binding energy. Moreover, absorption of sunlight in semiconducting TMD monolayers typically reaches 5–10% 2 , 3 , which is an order of magnitude larger than that in the most common photovoltaic materials of Si, CdTe, and GaAs 4 . Accordingly, prototype ultrathin photovoltaic devices, a few atomic layers in thickness, have been realized using MoS 2 and WSe 2 5 . Upon absorption of a single photon in a semiconductor, one electron–hole pair is created, yielding an initial nonthermal distribution of photoexcited carriers. A photoexcited carrier then interacts with phonons, losing energy to the lattice as heat; this process is responsible for intervalley scattering of the carrier. Carrier–carrier scattering mediated by the Coulomb interaction redistributes the electron energy to form a quasi-equilibrium state following the Fermi–Dirac distribution, and is responsible for intravalley scattering of the photoexcited carrier. When the excess energy of the photoexcited carrier rises well above the bandgap energy and the Coulomb interaction is strong, the carrier has sufficient energy to scatter with an electron in the valence band, consequently exciting an additional electron across the bandgap into the conduction band; this is known as carrier multiplication (CM, see Fig. 1 ). CM owing to this inverse Auger process has been suggested to enhance the efficiency of a solar cell above the Shockley−Queisser limit 6 up to ~46% 7 , 8 . 
In bulk semiconductors, the CM process is rather inefficient and has a high threshold energy, typically exceeding 4–5 times the bandgap ( E g ) 9 , 10 . This is due to the low density of final states, limited by the momentum conservation rule, and the rapid carrier cooling by phonon scattering. The situation is different in nanostructures, where quantum confinement relaxes strict momentum conservation and could also affect carrier thermalization 10 , 11 , 12 , 13 , which competes with CM 14 . Fig. 1 Pump-probe spectroscopy of carrier multiplication (CM). a The CM process (left), and two different Auger processes identified in vdW materials. b Schematic of the differential transmittance experiment. c Steady-state absorption spectrum of the investigated 2H-MoTe 2 thin film, featuring multiple peaks, including the primary A and B excitons. An indirect bandgap ( E g ) is also marked. In the inset, the smooth background absorption has been subtracted, to better reveal the peaks at excitonic transitions. d Band structure and density of states for 2H-MoTe 2 thin film. Full size image In general, the conversion efficiency and the threshold energy of CM in a particular material are influenced by (i) Coulomb interactions, which can be promoted by spatial confinement and dielectric screening, (ii) carrier cooling by phonon scattering, (iii) the initial/final density of states, and possibly also (iv) surface/defect trapping (especially for thin films). In TMDs, the CM conversion efficiency can be promoted by (i) the strong Coulomb interaction within spatially confined, atomically thin (3–4 Å) layers, (ii) large exciton binding energies of several hundred meV, much larger than those in quantum wells, and (iii) lower carrier cooling rates owing to reduced electron–phonon coupling efficiency, itself a consequence of the high exciton binding energies 15 , 16 , 17 , 18 , 19 , 20 , 21 , 22 , 23 , 24 , 25 , 26 . 
Therefore TMDs, featuring bandgaps in the 0.7–1 eV range 8 , optimal for CM, could open a route to increasing the power conversion efficiency of solar cells above the Shockley–Queisser limit. Here, we report on the observation of the CM phenomenon in thin TMD films of 2H-MoTe 2 and 2H-WSe 2 . We demonstrate a small CM threshold energy, as low as twice the bandgap, and a high CM conversion efficiency of nearly 93% when using the usual modeling 11 . These characteristics are superior to those obtained previously for CM in nanostructures and in bulk materials 9 , 10 , 12 , 13 , 14 , 27 . We employed ultrafast transient absorption spectroscopy with three independent approaches: (i) photo-induced bleaching of the band-to-band absorption at the direct bandgap, (ii) photo-induced intraband absorption of free carriers, and (iii) carrier population dynamics as a function of pump photon energy. These three independent approaches yield consistent evidence of the CM phenomenon. Because MoTe 2 possesses a bandgap of 0.85 eV, ideal for CM in solar cells, the current findings identify this material as a strong candidate to maximize the power conversion efficiency in photovoltaics. More generally, these results demonstrate that thin TMD layers have great potential for advanced light-harvesting technologies and, in particular, for the next generation of highly efficient and mechanically flexible, thin-film solar cells. Results Transient absorption spectroscopy In the past, transient absorption spectroscopy, which monitors the optically generated carrier dynamics and utilizes interband and intraband transitions, has been successfully applied to evaluate the CM conversion efficiency in semiconductor quantum dots (QDs) 28 , 29 . The transient absorption measurement is schematically illustrated in Fig. 1b . A strong pump pulse, with a broadly tunable photon energy, excites carriers from the valence to the conduction band. 
A weak probe pulse monitors the transmission change as a function of pump-probe delay time. The photo-induced bleach (PIB) of specific resonant interband transitions and the photo-induced absorption (PIA) owing to intraband excitation of photo-generated free carriers are independently followed. Dynamics of both PIB and PIA signals are monitored by tuning the pump-probe delay time. The differential transmittance, Δ T / T 0 , in the frequency domain is directly measured and is defined as Δ T / T 0 = ( T on − T off )/ T off , where T on and T off are the transmission of the probe with and without the pump, respectively. The Δ T / T 0 signal carries information on the carrier population, either in the specifically probed state (PIB) or within the whole band (PIA). Synthesized sample characterization Our investigations were performed on a 16.4-nm-thick 2H-MoTe 2 film grown by chemical vapor deposition (CVD) 30 . The steady-state absorption spectrum is shown in Fig. 1c (see Supplementary Figs. 1 and 2 for material characterization). We observed two broad peaks, with an energy difference of 330 meV, corresponding to A-exciton and B-exciton transitions (following Wilson and Yoffe’s nomenclature) split by spin–orbit coupling 31 . A direct excitonic bandgap near 1.04 ± 0.03 eV (marked as A in Fig. 1c ) and an indirect bandgap of 0.85 ± 0.03 eV (marked as E g in Fig. 1c ) are in good agreement with the calculated band structure, shown in Fig. 1d , as well as with literature values 32 . The steady-state absorption determines the fraction of absorbed pump photons (under pulsed excitation conditions) (see Supplementary Fig. 3 ), which can therefore be used for calibration of the absorbed photon density at different excitation energies. The spectrally and temporally resolved Δ T / T 0 map is displayed in Fig. 2a , including the TA spectra at different time delays as a function of probe energy; the direct transition of the A-exciton can be distinguished as photoinduced bleaching. 
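The ΔT/T0 definition above translates directly into code. Below is a minimal sketch with illustrative (not measured) transmission values, where a positive value corresponds to PIB and a negative value to PIA.

```python
import numpy as np

# Differential transmittance from probe transmission with and without
# the pump: dT/T0 = (T_on - T_off) / T_off. Positive values correspond
# to photo-induced bleach (PIB), negative values to photo-induced
# absorption (PIA). The numbers below are illustrative only.

def differential_transmittance(t_on, t_off):
    t_on = np.asarray(t_on, dtype=float)
    t_off = np.asarray(t_off, dtype=float)
    return (t_on - t_off) / t_off

dtt = differential_transmittance([0.55, 0.48], [0.50, 0.50])
print(dtt)  # first element > 0 (PIB-like), second < 0 (PIA-like)
```

In practice each element would be a probe-energy or delay-time point extracted from the measured transmission maps.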
In addition, the weak B-exciton direct transition is still visible, although those peaks are rather broad in energy, and both negative PIA and positive PIB contributions can be observed regardless of pump photon energy (see also Supplementary Fig. 4 ). Fig. 2 Carrier kinetics in 2H-MoTe 2 thin film investigated by photoinduced bleaching (PIB). a Spectrally and temporally resolved transient absorption map with a pump photon energy of 1.38 eV and a pump fluence of 27.1 μJ cm −2 . b Kinetics of PIB at different photon fluences with a pump photon energy of 1.38 eV, in the linear regime. The kinetics are invariant when normalized (see the inset), implying absence of nonlinear effects. c PIB kinetics at different excitation energies (1.27, 2.26, and 2.74 eV), normalized to an equal number of absorbed photons. The solid lines are fitting curves based on bi-exponential and tri-exponential functions, accounting for the incident Gaussian-shaped pulse. Inset: PIB kinetics at two low excitation photon energies, normalized by the absorbed photon density, show no noticeable difference between them. d The maximum Δ T max / T 0 intensity as a function of the absorbed photon fluence at different pump energies. The linear slope indicates the quantum yield: the steeper the slope, the higher the carrier generation yield. e Rise dynamics at two different excitation energies for the 2H-MoTe 2 film. The bleaching signal, taken at the direct A-exciton transition, rises sharply at a pump photon energy of 1.82 E g , while a slower rise is observed at 3.65 E g . The schematic shows the photogenerated carriers with (right) and without (left) CM. The processes (1) and (2) represent hot carrier cooling via phonon emission and carrier multiplication, respectively. f Transient Stark shift and biexciton linewidth broadening shown as a function of delay time. 
Full size image Photo-induced bleaching at direct bandgap To investigate CM, we first examine the excitation-photon-energy dependence of the PIB signal 4 . For that purpose, we set the probe energy to the direct bandgap, the A-exciton at 1.04 eV. First, we check the fluence-dependent PIB dynamics in the thin film using a pump photon energy of 1.38 eV (900 nm, 1.62 E g ), for which CM is not possible. The maximum Δ T / T 0 , corresponding to the maximum density of the carrier population at the probed state, increases linearly with the pump photon density up to 1 × 10 14 cm −2 (equivalently, a pump fluence of ~22 μJ cm −2 ), where saturation sets in, as shown in Supplementary Fig. 5 . Further, in the linear regime the decay dynamics remain identical. This is illustrated in Fig. 2b by comparison of low-fluence kinetics (pumped at 1.38 eV): the amplitude of the differential transmittance varies with pump fluence (the main panel) but after normalization, all transients collapse onto the same curve (inset of Fig. 2b ). We conclude that within the linear excitation regime, the amplitude of the Δ T / T 0 PIB signal at any delay time is proportional to the number of generated carriers. Therefore, we compare the carrier generation rates at different pump photon energies at a fixed absorbed photon density of ~1.19 × 10 19 cm −3 to evaluate the CM conversion efficiency 9 , 29 , 33 (Fig. 2c ). The maximum intensity nearly doubles at excitation energies of 2 E g < E < 3 E g and triples at excitation energies of 3 E g < E < 4 E g , compared to the maximum intensity at a pump photon energy <2 E g . This clearly reflects a quantum yield (QY) of impact ionization greater than unity. This is further supported by the fact that the maximum intensity did not change at excitation energies <2 E g , as demonstrated in the inset. Therefore, we use the maximum differential transmittance as a measure of the QY of impact ionization.
Determination of QY We now monitor the maximum Δ T max / T 0 intensity of the PIB kinetics, typically obtained within 0.6 ps after photoexcitation, as a function of fluence at different excitation energies. In each case, we followed the procedure outlined above 3 , 9 , 22 to ensure that the measurements were performed within the linear regime by taking the maximum Δ T max / T 0 intensity of the PIB signal (Fig. 2d ). The maximum intensity increases linearly with the fluence for all excitation energies. As the differential transmittance varies with pump photon energy, Δ T / T 0 is normalized to the absorbed photon density. As can be seen, upon normalization the experimental points for all excitation energies lower than 2 E g (1.51, 1.65, and 1.81 eV) fall onto a single linear slope. This implies that the carrier generation yield is similar across this pump photon energy range, in agreement with similar investigations of CM in other materials 4 , 5 , 6 , 23 . As CM is not possible for excitation energies below 2 E g , we define this linear slope as corresponding to a carrier generation QY of 1. We now continue the investigations for higher excitation energies and observe a steeper linear slope 10 . Δ T max / T 0 is related to the QY via the absorbed fluence ( F abs ); Δ T max / T 0 = φσ PIB F abs , where the proportionality constants are the absorption cross section of the probe ( σ PIB ) and the carrier generation QY ( φ ) 9 . The absorption cross section σ PIB has been experimentally determined for the low pump photon energy range— E pump < 2 E g and φ = 1—by averaging results obtained at excitation energies of 1.31, 1.38, 1.46, 1.51, and 1.65 eV, i.e., the QY = 1 level has been arbitrarily set for below-CM-threshold pumping. Using the above relation, the absorption cross-section was found to be σ PIB = (5.4 ± 0.05) × 10 −16 cm 2 . With this value, we then determined φ for higher energies by linear fitting of Δ T max / T 0 vs. F abs for each pump photon energy.
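The two-step calibration just described, first fixing σ PIB from below-threshold data where φ is defined as 1, then reading φ from the slope ratio at higher pump energies, can be sketched as follows. The fluence grid and noise-free data are hypothetical; only the reported σ PIB value is taken from the text:

```python
import numpy as np

SIGMA_PIB = 5.4e-16  # cm^2, probe absorption cross section reported in the text

# Hypothetical, noise-free data: absorbed fluence (photons cm^-2) vs. peak dT/T0,
# generated from dT_max/T0 = phi * sigma_PIB * F_abs
F_abs = np.linspace(1e13, 1e14, 8)
dT_below = 1.0 * SIGMA_PIB * F_abs   # pump < 2Eg: QY defined as phi = 1
dT_above = 2.0 * SIGMA_PIB * F_abs   # e.g. 2.74 eV pump, where phi reaches 2

# Step 1: calibrate sigma_PIB from the below-threshold slope (phi = 1)
sigma_fit = np.polyfit(F_abs, dT_below, 1)[0]

# Step 2: QY at the higher pump energy from the ratio of fitted slopes
phi = np.polyfit(F_abs, dT_above, 1)[0] / sigma_fit
print(phi)  # ~2.0
```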
The slope reaches a QY of φ = 2 at a pump photon energy of 2.74 eV. Carrier population dynamics For further analysis of the free carrier population through its build-up dynamics 28 , 34 , we consider the rise time of the PIB signal with a higher temporal resolution of 50 fs at pump energies of 1.55 eV (<2 E g ) and 3.1 eV (>2 E g ) (see SI Method). The bleach, measured at the direct transition of the A-exciton (see Fig. 2e ), rises quickly (~0.4 ps) and saturates for <2 E g owing to hot carrier relaxation (see process (1) in the scheme of Fig. 2e ). For pump photon energies >2 E g , we observe a relatively slow (~0.75 ps) and featureless rise, which is attributed to hot carrier cooling and CM—see the respective processes (1) and (2). In principle, this slower rise at the higher pump photon energy could be explained by the larger excess energy of the photoexcited carriers, resulting in longer intravalley scattering towards the conduction band edge probed by the PIB signal. We can, however, rule out this possibility because the same feature, i.e., a slow rise time at a higher excitation photon energy, is also observed in the PIA signal, which will be discussed later. Therefore, this comparison of the PIB rise time for below-CM and above-CM threshold pumping provides a very clear and independent fingerprint of CM 28 , 34 . Nevertheless, other possible scenarios for the slow rise time, including the thermalization of hot carriers towards the band edge, the state-filling time by the carrier population, and scattering into another valley, cannot be fully excluded. The fingerprint of CM When CM occurs, the originally photoexcited exciton and the additionally generated exciton can give rise to the formation of a biexciton 28 , 29 , 35 , 36 , 37 , 38 , 39 . As the density of photoexcited carriers increases, a strong local field develops, giving rise to a transient Stark shift and a broadened absorption linewidth 28 , 35 , 36 , 37 , 38 .
Therefore, the presence of the biexciton state serves as a monitor of CM for above-threshold pumping 29 , 35 , 37 . We compared the pump-excited absorbance spectra for excitation at >2 E g to the absorbance spectrum without pumping (Supplementary Fig. 4 ), and clearly observed two signatures of CM (Fig. 2f ). The spectrum at an early time delay (0.4 ps) is red-shifted by ~9 meV with respect to that at a late time delay (>500 ps), originating from the Stark shift. We simultaneously observed that the spectral shape and amplitude of the differential signal at >2 E g overlap with those at <2 E g , further supporting CM (Supplementary Fig. 4 ). Moreover, the linewidth at the early time delay is broadened by ~145 meV compared to that at the late time delay. This spectral shift and line broadening in the time evolution provide further evidence of CM, consistent with the population dynamics shown in Fig. 2e . Photo-induced intraband absorption of free carriers As mentioned before, the concentration of photogenerated free carriers can also be inferred from their absorption. Therefore, CM can be independently investigated from the pump-photon-energy dependence of PIA. In this case, the PIA signal is obtained by probing intraband transitions of free carriers 10 , 11 . This approach is therefore slightly different from PIB, which probes a specific state (the A-exciton in the case considered previously), because the measured PIA represents the response of all the free carriers present in the sample. The advantage of probing PIA in the infrared over probing PIB at the band edge energy is that the transient kinetics of PIA preserve linearity up to high carrier numbers and are independent of the probe wavelength 28 , 34 , 40 , 41 .
A probe energy of 0.24 eV (5200 nm) has been chosen because of the superior signal-to-noise ratio of the differential PIA measurement, but this particular choice has little bearing on the determination of the CM conversion efficiency (see Supplementary Fig. 6 ). As with the PIB investigation, we first check the linear response of the system; Fig. 3a, b illustrate low-fluence PIA kinetics obtained for different pump fluences, with the pump photon energy set at 1.58 and 2.38 eV, i.e., below 2 E g and above 2 E g , respectively. Again, when normalized to the same absorbed photon density, they all coincide, as shown in the inset, implying that the PIA amplitude is proportional to the number of absorbed photons, with no nonlinear effects; it can therefore be used to monitor changes of the free carrier concentration. For completeness, we also check that photo-charging effects, which frequently trouble PIA studies, are not present in this case (Supplementary Fig. 7 ). Subsequently, we measure the PIA kinetics for different excitation energies (Fig. 3c, d ) and compare the dependence of their amplitude at a fixed absorbed excitation photon fluence (Fig. 3e ). In this way we can compare the carrier generation yield at different pump photon energies, as in the PIB analysis. While we chose the maximum Δ T / T 0 intensity at an early delay time for the QY determination of PIA, a similar QY was obtained at a later delay time (see Supplementary Fig. 8 ). For this comparison, similar to PIB, we choose the maximum Δ T / T 0 intensity directly following the pump pulse because it is most likely to represent the total concentration of free carriers. For completeness, we point out that in this case, even if the PIA signal were lowered by ultrafast carrier recombination, the carrier generation yield determined in this way would be an underestimate, so the conclusion of an efficient CM process would still hold. Fig. 3 Carrier kinetics in 2H-MoTe 2 thin film investigated by photo-induced absorption (PIA).
Kinetics of PIA at different photon fluences with a pump photon energy of a 1.58 eV and b 2.38 eV. The kinetics are invariant when normalized—see the inset—implying absence of nonlinear effects. c Photo-induced absorption kinetics at two low excitation photon energies normalized by the absorbed photon density; no noticeable difference in the traces is observed. d PIA kinetics excited at 2.38 eV (2.8 E g , black) and 1.58 eV (1.86 E g , red), normalized to the same absorbed photon density. The PIA kinetics excited at 2.38 eV are scaled by a factor of 0.5. e The maximum Δ T max / T 0 intensity extracted from the kinetics of photoinduced absorption as a function of absorbed fluence at different pump photon energies. The linear slope indicates quantum yield. The steeper the slope of the line, the higher the carrier generation yield. Full size image CM conversion efficiency Figure 4 shows the determined carrier generation QY (blue diamonds and red dots for the PIB and PIA approach, respectively), as a function of the pump photon energy normalized to the bandgap energy of 2H-MoTe 2 . A step-like onset of CM at a threshold energy just above 2 E g is clearly manifested, with an abrupt increase of QY for pump photon energies exceeding ~2 E g . To characterize the observed CM process, we use the commonly applied model proposed by Beard et al. 14 , which can be utilized regardless of the electronic band structure. We fit the experimental data to the formula hν / E g = 1 + φ / η CM , where hν is the incident pump photon energy and η CM the CM conversion efficiency. The fitting yields efficiencies of η CM ≈ 95% and ≈99% for the data sets obtained from the PIB and PIA approaches, respectively. Alternatively, the CM conversion efficiency can be evaluated by comparing the direct integral of the fitted curves with respect to the step-like characteristics of ideal CM (Fig. 4 ); this procedure yields efficiencies of ~45% and ~70% for the PIB and PIA data sets, respectively.
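The Beard et al. fit quoted above can be reproduced with a few lines of least-squares fitting. Rearranged for the QY, the model reads φ = η CM ( hν / E g − 1), clipped at φ = 1 below the CM threshold. The data points below are hypothetical stand-ins mimicking the shape of Fig. 4, not the measured values:

```python
import numpy as np
from scipy.optimize import curve_fit

def beard_qy(hv_over_eg, eta_cm):
    """QY from hv/Eg = 1 + phi/eta_CM, i.e. phi = eta_CM*(hv/Eg - 1),
    clipped at 1 below the CM threshold."""
    return np.maximum(1.0, eta_cm * (hv_over_eg - 1.0))

# Hypothetical (normalized pump energy, measured QY) points mimicking Fig. 4
x = np.array([1.5, 1.8, 2.2, 2.7, 3.2, 3.7])
y = np.array([1.0, 1.0, 1.15, 1.6, 2.1, 2.6])

eta_cm, _ = curve_fit(beard_qy, x, y, p0=[0.9])
print(f"CM conversion efficiency ~ {eta_cm[0]:.0%}")
```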
We note that the CM characteristics obtained from the PIA and PIB measurements are similar, with an identical onset energy and with the absolute efficiency being somewhat higher for PIA. We can rationalize this observation by pointing out that PIA monitors all photo-excited free carriers, whereas PIB probes only a particular energy and thus provides information on the population of a particular state—the A-exciton in this case. Therefore, the PIA approach seems more suitable than PIB, especially for indirect bandgap materials, but this requires further study. Fig. 4 Quantum yield of carrier generation for 2H-MoTe 2 and 2H-WSe 2 thin films. a The QY data for the 2H-MoTe 2 film (blue diamonds from the PIB method and red dots from the PIA method) are plotted as a function of pump photon energy normalized by the bandgap of the material. The black solid line and the dashed lines in gray represent simulations of the CM efficiency ( η CM ). Full size image In an effort to generalize our study, we investigated CM in another 2D vdW material—a 2H-WSe 2 film (hollow triangles in Fig. 4 ), which has an indirect bandgap of 0.9 eV (Supplementary Fig. 9 ). The measurements have been performed in the PIA mode, in the same way as described before for 2H-MoTe 2 . Highly efficient CM has also been found in this material, with a low threshold energy of ~2 E g and efficiencies of ~97% and 52%, as evaluated using the description by Beard et al. 14 and by direct integration, respectively. Therefore, the high CM conversion efficiency with a low-energy threshold of ~2 E g and the step-like characteristics seem likely to be more general for 2D vdW dichalcogenides. Decay kinetics of photoexcited states To independently validate the claim of CM in the 2H-MoTe 2 vdW-layered material, we examine the decay dynamics of the photogenerated carriers.
In the past, an abrupt change of carrier dynamics when pumped below and above the CM threshold served to identify this phenomenon in QDs of various semiconductors 9 , 11 , 12 , 34 , 42 , 43 . Typically, such a change is facilitated by efficient Auger recombination switching on as soon as multiple excitons appear simultaneously in the same quantum dot. This effect is not expected in vdW layers, where multiple photo-generated carriers can effectively separate within the 2D plane and in that way escape rapid recombination. Nevertheless, understanding the relevant ultrafast kinetics in the material is important to evaluate the signature of CM. Accordingly, we first study the fluence-dependent PIB dynamics using a pump photon energy of 1.38 eV (900 nm, 1.62 E g ), for which CM does not occur, within the linear regime, as discussed before. The kinetics can be fitted by a double exponential decay with two components, a fast and a slow one, with time constants of 2.9 ± 0.6 ps ( τ f ) and 3 ± 1 ns ( τ s ), respectively (see Supplementary Fig. 10a ). Based on the literature 27 , we attribute the fast component to the decay of the A-exciton state population to either trap states or mid-gap states. The slow component is of a similar order to the radiative recombination time. When we increase the fluence, entering the saturation range of the PIB signal, an additional fast decay component of ~0.36 ps ( τ A ) appears; in a recent report 44 this fast recombination channel has been assigned to a defect-mediated fast Auger recombination process arising in vdW materials at sufficiently high free carrier density. We conclude therefore that the carrier decay dynamics in the investigated sample change with concentration, and can therefore be used to probe it. We next measured the fluence-dependent kinetics excited at 2.06 eV (2.42 E g ) (Supplementary Fig. 10b ).
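A decay analysis of this kind, fitting a double-exponential model to the transient, can be sketched as below. The trace is synthetic and noise-free; the time constants merely echo the reported τ f ≈ 2.9 ps and the nanosecond-scale τ s , and the sampling grid is chosen for illustration:

```python
import numpy as np
from scipy.optimize import curve_fit

def biexp(t, a_f, tau_f, a_s, tau_s):
    """Fast (trap/mid-gap) plus slow (radiative-scale) decay components."""
    return a_f * np.exp(-t / tau_f) + a_s * np.exp(-t / tau_s)

# Synthetic trace sampled logarithmically so both time scales are covered
t = np.geomspace(0.1, 10000.0, 300)       # delay in ps
trace = biexp(t, 0.7, 2.9, 0.3, 3000.0)   # tau_s = 3000 ps = 3 ns

popt, _ = curve_fit(biexp, t, trace, p0=[0.5, 2.0, 0.5, 1000.0])
print(popt)  # recovers approximately [0.7, 2.9, 0.3, 3000]
```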
Except for the very lowest fluence, all the dynamics have three dominant components of 0.31 ± 0.08 ps ( τ 1 ), 3.04 ± 0.6 ps ( τ 2 ), and 2.94 ± 0.7 ns ( τ 3 ). In Supplementary Fig. 10c we directly compare two PIB dynamics: one obtained for below-CM-threshold pumping at 1.38 eV and a relatively high fluence, and the other for an above-CM-threshold pump photon energy of 2.06 eV and a lower fluence. As can be seen, the normalized transients are identical within the experimental resolution. Following the earlier reasoning, we conclude that in both cases a similar concentration of free carriers has been obtained; because the absorbed photon density for the high-energy pumping was considerably smaller, ~8.75 × 10 18 vs. ~2.81 × 10 19 cm −3 , this confirms a higher carrier generation yield, consistent with the claim of CM. Therefore, the enhancement of the number of carriers generated by a high-energy photon is now supported not only by the amplitude but also by the (concentration-dependent) carrier decay dynamics. Discussion In Fig. 5 , we compare the CM characteristics reported for various semiconductors in different geometries, using the phenomenological description of Beard et al. 14 . In general, the geometry of a material has a dramatic effect on the CM conversion efficiency 9 , 11 , 12 , 13 , 42 , 45 , 46 . For example, CM in PbS nanoplatelets and QDs is more efficient than in their bulk counterparts 9 , 13 . Material composition also has a large effect on both the threshold energy and the CM conversion efficiency. A step-like feature with a threshold energy of 2.5 E g and a CM conversion efficiency of 90% has been reported for Si QDs 11 . The most efficient CM materials measured to date have been QDs, with a threshold energy of over 2.5 E g or 3 E g but with a CM efficiency below 90%.
Recent reports on PbSe QD films 47 , nanorods with improved CM conversion efficiency 13 , 27 , and InAs QDs 42 show a threshold in the vicinity of 2 E g but a poor CM conversion efficiency of 35–80%. By contrast, the vdW-layered materials investigated here clearly manifest threshold energies near 2 E g , with no excess energy, and a record CM conversion efficiency of up to ~99% (using the PIA approach). Fig. 5 Quantum yield for various nanostructures and bulk materials and the extracted carrier multiplication. Comparison of the CM efficiency of thin-film 2H-MoTe 2 and 2H-WSe 2 with various other semiconductors including bulk, QDs, and nanoplatelets. Quantum yields from previous reports were taken from refs. 9 , 11 , 12 , 13 , 42 , 45 , 46 . Full size image The microscopic origin of the high CM conversion efficiency in TMD-based vdW-layered materials is not clear at this point, and will require further investigation and a better theoretical understanding of their band structure. As mentioned before, the CM conversion efficiency is governed in principle by the competition between carrier–carrier and carrier–phonon scattering. The electron–electron scattering time constant in 2D materials is considerably shorter than for 1D nanowires and 0D QDs—see Supplementary Table 1 . The ratio of the time constants of electron–electron and electron–phonon scattering ( τ e–e scatt / τ e–ph scatt ) is about 30. Therefore, we can expect carrier–carrier scattering to dominate over carrier–phonon scattering in 2D materials. Consequently, for high-energy pumping, carrier cooling in 2D materials might be governed by CM rather than by phonon scattering. Further, we speculate that the strong confinement effect of the electrostatic potential barrier built into the extremely narrow region of 3–4 Å between layers 48 (Supplementary Fig. 11 ), as well as specific peculiarities of the density of states (DOS pockets), will play a role, acting as efficiency boosters for the CM process.
In summary, we demonstrate highly efficient CM in 2H-MoTe 2 and 2H-WSe 2 with nearly ideal characteristics: an onset energy close to the energy-conservation limit of twice the bandgap energy, and a CM conversion efficiency of up to ~99%. This is ascribed to the specific features of 2D vdW-layered materials, the strong Coulomb interaction and, in particular, the efficient carrier–carrier scattering and peculiarities of the band structure. With its high conductivity, large absorbance, and near-optimal bandgap, the 2D vdW material 2H-MoTe 2 is promising for next-generation flexible and highly efficient solar cells. Methods Transient absorption spectroscopy To measure the dynamics of photoexcited carriers, transient absorption spectroscopy was performed. A beam from a 1 kHz Ti:sapphire regenerative amplifier (Libra, Coherent) operating at 790 nm was divided by a beam splitter at a 1:9 ratio. The stronger beam drives two optical parametric amplifiers (TOPAS prime, Light Conversion), one of which generates laser light in the ultraviolet to mid-infrared range. This OPA was used as a tunable pump pulse with visible to near-infrared energies (1.27–3.1 eV) that excited electron–hole pairs in the sample. The weaker beam generates a white-light continuum, used as the probe pulse, which was detected by a CMOS sensor for the visible range and an InGaAs sensor for the near-infrared range (HELIOS, Ultrafast Systems). Another OPA pumped by the amplifier was tuned in the range of 0.23–0.25 eV to probe the photoinduced absorption, and was detected by a HgCdTe detector (HELIOS IR, Ultrafast Systems). Both pump and probe beams were linearly polarized, parallel to each other. By inserting a KTA (potassium titanyl arsenate, KTiOAsO 4 ) crystal at the sample position, we measured the cross-correlation signal at two different wavelengths (visible for the pump and infrared for the probe); the measured pulse duration was ~205 ± 20 fs (Supplementary Fig. 12 ).
To measure the rise time component in the transient kinetics, another laser system was used to generate 1.55 and 3.1 eV pump energies with a higher temporal resolution of ~50 fs. The absorbed photon density was determined from the pump power measured through pinholes of 50, 75, and 100 μm at the sample position. Depending on the laser wavelength, the focused pump spot size varied within the range of 200–230 μm, much larger than the 50 μm pinhole diameter. Background absorption of the substrate was subtracted. The fraction of absorbed pump photons obtained with ultrafast pulses at different excitation energies followed the steady-state absorbance curve obtained with a CW source (Supplementary Fig. 3 ). The error bar in the maximum of the transient absorption signal was determined from the noise level measured at negative time delays in the kinetics for each data set. The error bar in the QY plot was determined from the standard deviation of the energy measurement of the pump fluence. 2H-MoTe 2 thin films To grow semiconducting 2H molybdenum ditelluride (2H-MoTe 2 ) thin films, a 7-nm-thick Mo film was deposited on a 300-nm-thick SiO 2 /Si substrate using a sputtering system. The prepared Mo thin film was mounted in a two-zone CVD system. The Mo thin film, located in the second furnace zone, was tellurized by vaporizing 2 g of tellurium (Te) powder (Sigma-Aldrich) in the first furnace zone. To control the tellurization rate, the temperatures of the Te zone ( T 1 ) and the Mo film zone ( T 2 ) were controlled independently. During the growth process, argon and hydrogen gases were flowed at rates of 500 and 100 sccm, respectively. T 1 was first heated to 620 °C over 15 min and then T 2 was ramped to 535 °C in 5 min. When the T 2 temperature reached 535 °C, growth was carried out for 5 h. After growth, T 1 was cooled rapidly by opening the chamber; T 2 was then cooled to room temperature at a rate of 50 °C/min 30 .
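The absorbed-photon-density determination described above amounts to simple bookkeeping: pulse energy from the average power and repetition rate, photon count from the photon energy, areal density over the pinhole area, and a volumetric density using the absorbed fraction and the film thickness. A minimal sketch with illustrative numbers (the power value and unity absorbed fraction are hypothetical, chosen only to reproduce the ~1 × 10 14 cm −2 incident photon density quoted for a ~22 μJ cm −2 fluence):

```python
import math

EV_TO_J = 1.602176634e-19  # J per eV

def absorbed_photon_density(power_w, rep_rate_hz, photon_ev,
                            pinhole_d_cm, absorbed_frac, thickness_cm):
    """Photons absorbed per pulse per unit film volume (cm^-3)."""
    pulse_energy = power_w / rep_rate_hz                # J per pulse
    photons = pulse_energy / (photon_ev * EV_TO_J)      # photons per pulse
    area = math.pi * (pinhole_d_cm / 2.0) ** 2          # pinhole area, cm^2
    return photons * absorbed_frac / (area * thickness_cm)

# Illustrative: 1.38 eV pump at 1 kHz through a 50 um pinhole, 16.4 nm film;
# ~0.43 uW corresponds to a ~22 uJ cm^-2 fluence over the pinhole area.
n_abs = absorbed_photon_density(4.32e-7, 1e3, 1.38, 50e-4, 1.0, 16.4e-7)
print(f"{n_abs:.2e} photons cm^-3")
```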
Transfer method for 2H-MoTe 2 film To transfer the MoTe 2 film from the as-grown substrate to a quartz substrate, poly(methyl methacrylate) (PMMA) was coated on the MoTe 2 film surface. The PMMA-coated MoTe 2 film was dried for more than 30 min in air. After that, the PMMA-coated MoTe 2 film was floated on a buffered oxide etchant (1178-03, J.T. Baker) to etch the SiO 2 layer under the MoTe 2 film. Then, the PMMA/MoTe 2 film was rinsed with distilled water several times. The resulting film was transferred onto a target substrate. After drying the film, the PMMA layer was removed with acetone and isopropyl alcohol. Sample characterization The linear absorption spectrum of the 2H-MoTe 2 film was measured using a UV/VIS absorption spectrometer (V-670, JASCO). The absorbance shown in Fig. 1b was obtained after subtracting the transmittance and reflectance, which were measured automatically in an integrating sphere. Raman spectroscopy (Renishaw) was performed with an excitation energy of 2.33 eV (532 nm). XRD (SmartLab, Rigaku), AFM (SPA400, SEIKO), SEM (FESEM, JSM7000F, JEOL), and TEM (JEM ARM 200F, JEOL Ltd.) were used to characterize the films. We describe the details of each result in Supplementary Figs. 2 and 3 . Data availability The data that support the findings of this study are available from the corresponding authors upon reasonable request.
Physicists at the Center for Integrated Nanostructure Physics (CINAP), within the Institute for Basic Science (IBS, South Korea), have discovered an intriguing phenomenon, known as carrier multiplication (CM), in a class of semiconductors with incredible thinness, outstanding properties, and possible applications in electronics and optics. Published in Nature Communications, these new findings have the potential to boost the photovoltaics and photodetector fields, and could improve the efficiency of solar cells produced with these ultrathin materials to up to 46%. An interesting class of 2-D materials, the van der Waals layered transition metal dichalcogenides (2-D TMDs), is expected to create the next generation of optoelectronic devices, such as solar cells, transistors, light-emitting diodes (LEDs), etc. They consist of individual thin layers separated by very weak chemical bonds (van der Waals bonds), and have unique optical properties, high light absorption, and high carrier (electron and hole) mobility. Beyond allowing the option to tune their band gap by changing composition and layer thickness, these materials also offer an ultrahigh internal radiative efficiency of >99%, promoted by the elimination of surface imperfections and the large binding energy between carriers. Absorption of sunlight in semiconducting 2-D TMD monolayers typically reaches 5-10%, which is an order of magnitude larger than that in most common photovoltaic materials, like silicon, cadmium telluride, and gallium arsenide. Despite these ideal characteristics, however, the maximum power conversion efficiency of 2-D-TMD solar cells has remained below 5% due to losses at the metal electrodes. The IBS team, in collaboration with researchers at the University of Amsterdam, aimed to overcome this drawback by exploring the CM process in these materials. CM is a very efficient way to convert light into electricity.
A single photon usually excites a single electron, leaving behind an 'empty space' (hole). However, it is possible to generate two or more electron-hole pairs in particular semiconductors if the energy of the incident light is sufficiently large, more specifically, if the photon energy is at least twice the material's bandgap energy. While the CM phenomenon is rather inefficient in bulk semiconductors, it was expected to be very efficient in 2-D materials; this had not been proven experimentally, however, due to technical limitations, such as the difficulty of synthesizing high-quality 2-D TMDs and of performing ultrafast optical measurements. In this study, the team observed CM in 2-D TMDs, namely 2H-MoTe2 and 2H-WSe2 films, for the first time; a finding that is expected to improve the current efficiency of 2-D TMD solar cells, even going beyond the Shockley-Queisser limit of 33.7%. "Our new results contribute to the fundamental understanding of the CM phenomenon in 2-D-TMD. If one overcomes the contact losses and succeeds in developing photovoltaics with CM, their maximum power conversion efficiency could be increased up to 46%," says Young Hee Lee, CINAP director. "This new nanomaterial engineering offers the possibility for a new generation of efficient, durable, and flexible solar cells."
10.1038/s41467-019-13325-9
Medicine
Study finds air pollution reaches placenta during pregnancy
Hannelore Bové et al. Ambient black carbon particles reach the fetal side of human placenta, Nature Communications (2019). DOI: 10.1038/s41467-019-11654-3 Journal information: Nature Communications
http://dx.doi.org/10.1038/s41467-019-11654-3
https://medicalxpress.com/news/2019-09-air-pollution-placenta-pregnancy.html
Abstract Particle transfer across the placenta has been suggested but to date, no direct evidence in real-life, human context exists. Here we report the presence of black carbon (BC) particles as part of combustion-derived particulate matter in human placentae using white-light generation under femtosecond pulsed illumination. BC is identified in all screened placentae, with an average (SD) particle count of 0.95 × 10 4 (0.66 × 10 4 ) and 2.09 × 10 4 (0.9 × 10 4 ) particles per mm 3 for low and high exposed mothers, respectively. Furthermore, the placental BC load is positively associated with mothers’ residential BC exposure during pregnancy (0.63–2.42 µg per m 3 ). Our finding that BC particles accumulate on the fetal side of the placenta suggests that ambient particulates could be transported towards the fetus and represents a potential mechanism explaining the detrimental health effects of pollution from early life onwards. Introduction Fetal development is a critical window of exposure-related susceptibility because the etiology of diseases in adulthood may have a fetal origin and may be attributed to adverse effects of in utero environmental exposures. This causality concept is known as the Developmental Origins of Health and Disease or Barker hypothesis 1 . Ambient outdoor air pollution exposure is such a detrimental environmental factor that has been identified in this context 2 , 3 . Various studies have already described associations between prenatal ambient air pollution exposure and impaired birth outcomes 4 . For instance, combustion-related PM, including BC, is associated with lower birth weight 5 , 6 , preterm birth 7 , 8 , and intrauterine growth restriction 9 , 10 . Up to now, it remains unclear how exactly adverse effects are provoked in the fetus but various potential mechanisms have been proposed including both indirect (e.g., intrauterine inflammation) and/or direct (e.g., particle translocation) manners 11 , 12 , 13 . 
Numerous studies have indisputably demonstrated that particulate inhalation results in health problems far beyond the lungs 14 . For example, research in different areas of the world with high 15 , moderate 16 , and low 17 ambient PM showed that long-term exposure to particulate air pollution impedes cognitive performance. Accordingly, Maher et al. 18 could detect the presence of combustion-derived nanoparticles from air pollution in the frontal cortex of autopsy brain samples. More importantly, the latter was one of the first studies to provide evidence of particle translocation in humans. Recently, we found BC particles from ambient air pollution in the urine of healthy children 19 , showing the ubiquity of this environmental contaminant and its potential to reach various organ systems. This raises the question of which distant organs, such as the placenta, particles originating from the systemic circulation might reach and deposit in. The placenta is a temporary organ that presents a natural barrier between mother and fetus during the entire pregnancy. While it was first considered to be an impenetrable barrier to xenobiotics, it has been shown that several environmental pollutants, like alcohol and therapeutics, can cross the placenta 20 , 21 . In recent years, studies were conducted to investigate whether (nano)particles can pass the placental barrier. However, these investigations are limited to in vitro cell cultures, ex vivo models and animal studies 22 , 23 . Hence, particle translocation to the human placenta following inhalation under real-life conditions is insufficiently studied, while being essential to understanding the effects on fetal health 24 . Here, we postulate that BC particles are able to translocate from the mothers’ lungs to the placenta.
Within this framework, we employ our recently established method for detecting carbonaceous particles based on the non-incandescence-related white-light (WL) generation under femtosecond pulsed illumination 25 . The study is performed on a subset of term placentae from mothers enrolled within the ENVIRONAGE birth cohort study and on preterm placentae from spontaneously terminated pregnancies. We report the presence of BC particles at both the fetal and maternal side of all screened term and preterm placentae. The carbonaceous nature of these particulates is confirmed, as is their embedment within the placental tissue, precluding external contamination. In addition, the BC load of the term placentae is positively associated with the residential BC exposure of the mothers during gestation. These results suggest that ambient particulates can be transported through the placental barrier towards the fetus, even during early and vulnerable stages of pregnancy. Hence, it strengthens the hypothesis that direct effects induced by the presence of ambient combustion-related particulates are at least partially responsible for the observed detrimental health effects from early life onwards. Results Experimental protocol for BC detection in placentae We used our previously established technique based on the WL generation of carbonaceous particles under femtosecond pulsed laser illumination 25 to study the presence and location of BC particles in the human placenta (Fig. 1 ). Under femtosecond pulsed illumination, placental tissue generates various label-free signals including second harmonic generation (SHG) and two-photon excited autofluorescence (TPAF). The SHG signal originates from collagen type I, while the TPAF emanates from various structures, e.g., placental cells, elastin and red blood cells. Both the SHG and TPAF signals were generated simultaneously but spectrally separated and detected.
The present BC particles were analyzed based on two of the characteristic WL features: (i) the emission signals saturate compared to the TPAF and SHG, which allows thresholding of the particles in each of the two detection channels, i.e., TPAF and SHG, and (ii) the emitted WL ranges over the whole visible spectrum, so the thresholded particles should be present in both channels. A flowchart of the employed protocol is depicted in Fig. 1 . Every step was designed and monitored to preclude any possible contamination. Fig. 1 Flowchart of the experimental protocol for BC detection in the placenta. a Five biopsies are taken in total, of which four on the fetal side of the placenta at distinct positions oriented according to the main blood vessel (black arrow), and one (biopsy 5) at the maternal side of biopsy 1. After sample collection, the biopsies are embedded in paraffin and sections of 4 µm are prepared. b The placental sections are illuminated using a two-photon femtosecond pulsed laser tuned to a central wavelength of 810 nm (10 mW radiant power at the sample). c The WL produced by the BC naturally present in the tissue (white dots) is detected along with the simultaneous generation and detection of two-photon excited autofluorescence (TPAF) of the cells (green) and second harmonic generation (SHG) from collagen (red) (see materials and methods for detailed protocol). A tile scan of 10 × 10 images is acquired resulting in a field of view of 9000 × 9000 µm 2 with a 12960 × 12960 pixel resolution (0.694 × 0.694 µm 2 pixel size) and a pixel dwell time of 2.51 µs. Three different locations per placental section are imaged. d The number of BC particles in the obtained images is determined using a peak-find algorithm, which counts connected pixels above a certain threshold value, i.e., 0.5% and 45% lower than the highest intensity value of the TPAF and SHG pictures, respectively. 
e BC particles (white dots) in the output figure are defined as the saturated pixels found in both channels, i.e., TPAF and SHG. f Finally, based on the image volume estimated from the point spread function of the optical system, the result is expressed as the total relative number, i.e., the number of detected BC particles per cubic millimeter placenta Full size image We were able to detect BC particles from ambient air pollution present in human placentae in a label-free and biocompatible manner (Fig. 2 ). Fig. 2 Evidence of BC particles at the fetal side of the human placenta. WL generation originating from the BC particles (white and further indicated using white arrowheads) under femtosecond pulsed laser illumination (excitation 810 nm, 80 MHz, 10 mW laser power on the sample) is observed. Second harmonic generation from collagen (red, emission 400–410 nm) and TPAF from placental and red blood cells (green, emission 450–650 nm) are simultaneously detected. Scale bar: 100 µm. The boxes on the right show the black carbon particles present in placental tissue at higher magnification. Scale bar: 30 µm Full size image Validation experiments of WL from BC in placentae Various validation experiments were performed (Fig. 3 ). While we strived to avoid external contamination of the placenta tissues by applying strict experimental guidelines (see details in the methods section), we checked the embedment of the BC particles inside the placental tissue. Optical sectioning in the Z-direction throughout the placental tissue and the corresponding orthogonal projections (Fig. 3a , Supplementary Fig. 3 ) showed that the detected BC particles are embedded in the tissue and are therefore not originating from external pollution. Subsequently, the carbonaceous nature of the identified BC particles was studied. Experiments were conducted to confirm the characteristic features of the emitted WL, which were checked for specificity and sensitivity in our previous studies 19 , 25 . 
First, we know that carbonaceous particles, including the environmental pollutant BC and commercially engineered carbon black (CB), under femtosecond pulsed near-infrared illumination generate WL that stretches over the whole visible spectrum and is extremely bright and thus saturates easily the detection channel. The emission fingerprint of the identified BC particles was recorded (Fig. 3b ), which shows that indeed the signal ranges over the various emission wavelengths. The WL signal of commercially CB particles was measured as a reference, confirming the WL emission profile. In contrast, the emission fingerprint of the background signals of the placenta tissue consists of a distinct peak that does not continuously range over all wavelengths (Fig. 3b ). Second, the temporal responses of the determined BC particles, reference CB particles and background TPAF (Fig. 3c ) were recorded to be 250, 320, and 1400 ps, respectively. The temporal response of the BC particles is non-resolved from the instrument response function. These results are consistent and validate the carbonaceous nature of the BC particles as their temporal response is known to be instantaneous. Fig. 3 Validation experiments to confirm the carbonaceous nature of the identified particles inside the placenta. a XY-images acquired throughout a placental section in the z -direction and corresponding orthogonal XZ-projections and YZ-projections showing a BC particle (white and indicated by white arrowheads) inside the tissue (red and green). Scale bar: 50 µm. b Emission fingerprint of identified black carbon (BC), reference carbon black (REF) particles and two-photon excited autofluorescence (TPAF) under femtosecond pulsed illumination. c Temporal response of BC and REF particles and TPAF measured by time-correlated single photon counting. The instrument response function (IRF) overlaid in blue. 
Source data are provided as a Source Data file Full size image Intravariability and intervariability of BC load in placental tissue Subsequently, to evaluate the intravariability (within one biopsy) and intervariability (between different biopsies) of the placental tissue, the routinely collected biopsies from three mothers enrolled in the ENVIRonmental influence ON early AGEing (ENVIR ON AGE) birth cohort study were screened for their BC load (Supplementary Fig. 4 ). No significant difference in BC load was observed between the four fetal sided biopsies for the three screened mothers. On the other hand, significant differences between the fetal and maternal sided biopsies could be seen. The summary of these findings can be seen in Supplementary Fig. 4 . Placental BC load and residential exposure during pregnancy To study the relationship between the mothers’ BC exposure during pregnancy and BC accumulation in their placentae, placental tissue of 10 mothers with low and 10 mothers with high residential BC exposure during the whole pregnancy were screened for their BC load. The selection criteria can be found in the Methods section, while the mothers’ residential locations are shown on a CORINE land cover map in Supplementary Fig. 5 . BC particles were present in all placentae, with an average (SD) particle count of 0.95 × 10 4 (0.66 × 10 4 ) and 2.09 × 10 4 (0.96 × 10 4 ) particles per mm 3 for low and high exposed mothers, respectively. Moreover, the placental BC load was positively associated with mothers’ residential BC exposure averaged over the entire pregnancy (Fig. 4 , Pearson correlation: r = 0.55; P = 0.012 and corresponding Spearman’s Rank correlation: r = 0.43; P = 0.06). Each 0.5 µg per m 3 increment in residential BC exposure during pregnancy was associated with +0.45 × 10 4 particles per mm 3 (95% CI: 0.11 × 10 4 to 0.80 × 10 4 ; P = 0.012) or 38.5% higher placental BC load. 
A note on the size distribution of the identified particles/aggregates can be found as Supplementary Note 1 . From the size determination of the identified particles/aggregates it is clear that in the biopsy from each mother, larger particle aggregates, ranging between 1.00 and 9.78 µm, can be found (Supplementary Fig. 1 ). These particle aggregates consist of various smaller particles which translocated from the mothers’ circulation to distinct locations inside the placental tissue (Supplementary Fig. 2 ). Fig. 4 Correlation between placental BC load and residential BC exposure averaged over the whole pregnancy. The line is the regression line. Green and red dots indicate low ( n = 10 mothers) and high ( n = 10 mothers) exposed mothers. Pearson correlation r = 0.55, P = 0.012 and corresponding Spearman’s Rank correlation r = 0.43, P = 0.06. Source data are provided as a Source Data file Full size image BC load in preterm placentae To study the ability of BC particles to reach the placenta during early and critical stages of pregnancy, five placentae from spontaneous preterm births were screened for their BC load. The selection criteria can be found in the Methods section. BC particles could be detected in all five placentae (Fig. 5 ), with a median (SD) particle count between 0.45 × 10 4 (0.16 × 10 4 ) and 0.96 × 10 4 (0.46 × 10 4 ) particles per mm 3 . Fig. 5 BC load in placentae from spontaneous preterm births ( n = 5). The whiskers indicate the minimum and maximum value and the box of the box plot illustrates the upper and lower quartile. The median of spreading is marked by a horizontal line within the box. Source data are provided as a Source Data file Full size image Discussion In the prospect that ultrafine particles translocate from the mothers’ lungs into the circulation, we screened placentae for their BC loading to investigate whether in a real-life setting particle transfer through the placenta towards the fetal side occurs. 
Those particles could be identified inside the placenta (Fig. 2 ) based on the WL generation by the BC particles under femtosecond pulsed illumination (Fig. 1 ). The signals generated by the identified BC particles under femtosecond pulsed illumination matched precisely the carbonaceous nature of combustion-derived particles (Fig. 3 ). While the placenta is a heterogeneous organ 26 and variations in the detected BC particles within one biopsy and between the various biopsies are seen (Supplementary Fig. 4 ), no significant differences are found between the different biopsies taken at the fetal side of the same placenta. Hence, the screening of one biopsy is sufficient to obtain representative results. Our results demonstrate that the human placental barrier is not impenetrable for particles. Our observation based on exposure conditions in real-life is in agreement with previously reported ex vivo and in vivo studies studying the placental transfer of various nanoparticles. An ex vivo human perfusion model evidenced that polystyrene particles with a diameter less than 240 nm are able to cross the placenta and hereby even reach the fetal bloodstream 27 . Also, Vidmar et al. 28 recently demonstrated the translocation of silver nanoparticles to the fetal circulation employing a similar model that mimics the maternal and fetal blood circulation. On the other hand, diesel nanoparticles have been found in maternal red blood cells and plasma, as well as in placental trophoblastic cells from pregnant rabbits exposed to aerosolized diesel exhaust 13 . Our study addressed the gap of human exposure routes of BC particles towards the fetal side of the placenta, although further research is needed, our results suggest that particle transport through placental tissue is indeed possible. 
The presence of BC particles could be identified in all screened placentae and a positive association has been found between the placental BC load and the mothers’ residential BC exposure averaged over the entire pregnancy (Fig. 4 ). We did not have information on the presence of CB-based tattoos of the participating mothers. However, we believe that tattoos are not confounding the association between gestational exposure to particulate air pollution and the placental BC load. First, the largest fraction of the CB particles permanently stay in the dermis between the collagen fibers, while only a minor fraction can be distributed to the lymphatic system and few particles may reach the blood circulation directly 29 . Second, a primary requirement of confounding is that the level of ambient BC particles is correlated with the presence of CB-based tattoos, which is unlikely. Besides in full term placentae, BC particles could also be detected in placentae from pregnancies spontaneously terminated (Fig. 5 ). The latter shows the presence of BC particles in placental tissue already during the early and vulnerable stages of pregnancy where the fetus is most vulnerable for toxic compounds. However, spontaneous termination of pregnancy may have resulted from complications that could have compromised placental development, and thus its structure and barrier function. Nevertheless, our main analysis is based on full term pregnancies. Further research will have to show whether the particles can cross the placenta and reach the fetus, and if particle translocation is responsible for the observed adverse health effects during early life. Our current study on the detection of BC particles in placenta includes various strengths, which altogether have led to the reported, important insights. 
First, our established detection method has several advantages over conventionally used techniques such as light and electron microscopy, which are often employed in this type of epidemiological studies 29 , 30 . Summarized, our technique does not require extensive sample preparation (e.g., macrophage isolation) nor labeling, the particles can be imaged directly in their biological context (i.e., maternal and fetal side), additional information about the placental structure can be detected simultaneously and, most importantly, it allows specific and sensitive detection of BC particles 19 , 25 . Second, directly linked to the latter, real-life BC exposures could be measured in the placenta of mothers exposed to relatively low annual ambient BC concentrations (with annual average concentrations ranging between 0.63–2.42 µg per m 3 in the study area). Despite the low annual ambient BC concentrations in the northern part of Belgium, a Pearson correlation of 0.55 could be found between placental BC load and exposure during pregnancy. Third, we could confirm the carbonaceous nature of the identified BC particles and external contamination of the tissues could be excluded. In conclusion, our study provides compelling evidence for the presence of BC particles originating from ambient air pollution in human placenta and suggests the direct fetal exposure to those particles during the most susceptible period of life. The evidence of particle translocation to the placenta might be a plausible explanation for the observed detrimental effects of ambient particulate air pollution on fetal development over and beyond the increased maternal systemic inflammation in response to particulate accumulation in the lungs 12 , 31 . Methods Study population and sample collection and preparation The present study on term placentae is executed within the framework of the ENVIR ON AGE (ENVIRonmental influence ON AGEing in early life) birth cohort 32 . 
The cohort enrolls mothers giving birth in the East-Limburg Hospital (ZOL; Genk, Belgium) and is approved by the Ethics Committee of Hasselt University and East-Limburg Hospital (EudraCT B37120107805). The study is conducted according to the guidelines laid down in the Declaration of Helsinki. All participating women provided informed written consent. Mothers were asked to fill out a questionnaire to get lifestyle information. The ambient exposure to BC of the mothers was determined, based on their residential address, using a validated spatial and temporal interpolation method 19 , 33 . The method uses land cover data obtained from satellite images (CORINE land cover data set) and pollution data of fixed monitoring stations. Coupled with a dispersion model that uses emissions from point sources and line sources, this model chain provides daily exposure values in a high-resolution receptor grid. Overall model performance was evaluated by leave-one-out cross-validation including 16 monitoring points for BC. Validation statistics of the interpolation tool gave a spatiotemporal explained variance of more than 0.74 for BC. Fresh placentae were collected within 10 min after birth. Biopsies were taken at four standardized sites at the fetal side of the placenta across the middle region, approximately 4 cm away from the umbilical cord and under the chorio-amniotic membrane. The order of the biopsies was clockwise starting at the main blood vessel. Also, one biopsy is taken at the maternal side of the placenta at the equivalent position of biopsy 1 of the fetal side. Accordingly, the biopsies taken at the sides facing towards the fetus and mother are defined as the fetal and maternal side of the placenta, respectively. The intervariability and intravariability between and within biopsies were assessed using placental tissue from three randomly selected, non-smoking mothers, with average residential BC exposure (between 0.96 and 1.32 µg per m 3 ). 
To evaluate the correlation between the BC exposure of mothers (all non-smokers) during pregnancy and accumulation of BC in placentae, 10 mothers with high residential BC exposure and 10 mothers with low residential BC exposure during pregnancy were selected from the ENVIR ON AGE biobank. High residential BC exposure during pregnancy was defined as: (i) entire pregnancy and third trimester of pregnancy exposure to residential BC ≥ 75th percentile (1.70 µg per m³ and 2.42 µg per m³, respectively), and (ii) residential proximity to a major road ≤ 500 m. Low residential BC exposure during pregnancy was defined as: (i) entire pregnancy and third trimester of pregnancy exposure to residential BC ≤ 25th percentile (0.96 µg per m³ and 0.63 µg per m³, respectively), and (ii) residential proximity to major road >500 m. Biopsies of placental tissue from spontaneous preterm births were collected at the East-Limburg Hospital (ZOL; Genk, Belgium). Five biobanked placentae of mothers with spontaneous termination of pregnancy between 12 and 31 weeks of gestation were randomly selected but taking into account the following criteria: (i) non-smoker, (ii) avoiding possible complications that can cause autolysis (mors in utero) or disturb the histological image (infections), and (iii) best possible spread in gestation time, i.e., pregnancy termination at 12, 16, 19, 25, and 31 weeks. The cases were handled strictly anonymously. Accordingly, no personal information is available except for the inclusion and exclusion criteria and, thus, the residential exposure to ambient air pollution is unknown. The use of these tissues for the detection of BC particles was approved by the Ethics Committee of Hasselt University and East-Limburg Hospital (EudraCT B371201938875). Since the employed samples were biobanked, this specific study is not covered by the law of 7 May 2004 on experiments on the human person. Hence, no written consent was needed according to the Ethical Committees. 
To study the BC loading in the preterm placentae the available biopsy was screened by imaging three regions within five different sections taken in the middle of the tissue ( n = 15 images). Placental biopsies were fixed in formaldehyde for minimal 24 h and paraffin embedded. 4 µm sections were cut using a microtome (Leica Microsystems, UK) and mounted between histological glass slides. To preclude any particulate contamination, particle-free instruments and sample holders were used and all samples were handled in a clean room with filtered air (Genano 310/OY, Finland). Experimental protocol for BC detection in placentae BC particles naturally present in the placenta were detected using a specific and sensitive detection technique based on the non-incandescence-related WL generation of the particles under femtosecond illumination as published before 19 , 25 . Images of the placental sections were collected at room temperature using a Zeiss LSM 510 (Carl Zeiss, Jena, Germany) equipped with a two-photon femtosecond pulsed laser (810 nm, 150 fs, 80 MHz, MaiTai DeepSee, Spectra-Physics, USA) tuned to a central wavelength of 810 nm with 5 or 10 mW radiant power on average at the sample position using a 10×/0.3 objective (Plan-Neofluar 10×/0.3, Carl Zeiss). WL emission of the BC particles was acquired in the non-descanned mode after spectral separation and emission filtering using 400–410 nm and 450–650 nm BP filters. By employing these two emission filters, the SHG from the placental collagen type I and TPAF of the placental components are collected in the corresponding images. The resulting tile scans had a field of view of 9000 × 9000 µm 2 containing 100 images with a pixel size of 0.694 µm and were recorded with a 2.51 µs pixel dwell time. 
The spatial resolution of the system in the configuration that the measurements were performed (i.e., 10 × /0.3 objective, 810 nm excitation, identical settings): w x = w y = 1.44 µm and w z = 14.8 µm defined as the sizes of the point spread function in the XY-plane (radius of Airy-disk) and along the optical axis (1/e-thickness), respectively. The images were acquired by ZEN Black 2.0 software (Zeiss). To count the number of BC particles in the tile scans of each placental section, an automated and customized Matlab program (Matlab 2010, Mathworks, The Netherlands) was used. First, a peak-find algorithm detects pixels above a certain threshold value. Here, threshold values of 0.5% and 45% lower than the highest pixel intensity value of the TPAF and SHG image, respectively, were chosen. These thresholds resulted in highly reproducible values, which were checked manually using Fiji (ImageJ v2.0, Open source software, ). Next, the detected pixels of both images are compared and only the matching ones are used to generate the output image and metrics. In addition, the effectively imaged placental area was determined from the TPAF image using Fiji and the focal volume based on the point spread function of the optical system. Finally, the total relative number, i.e., the number of detected BC particles per cubic millimeter imaged placenta, was defined. The customized Matlab program is made available upon reasonable request directed to the corresponding author. Validation experiments of WL from BC in placentae Validation experiments were performed using a Zeiss LSM 880 (Carl Zeiss, Jena, Germany) and 40×/1.1 water immersion objective (LD C-Apochromat 40×/1.1 W Korr UV-Vis-IR, Carl Zeiss). This setup was used as it allows accurate detection of the emission fingerprint and time correlated single photon counting of the BC particles in placental tissue. All settings were kept identical compared to the measurements performed on the LSM 510 setup unless stated otherwise. 
Approximately, 60 images with a pixel size of 0.297 × 0.297 × 0.500 µm 3 were acquired throughout the placental section using a pixel dwell time of 4.1 µs. In total, a volume of 300 × 300 × 30 µm 3 was imaged. Orthogonal XZ-projection and YZ-projection were made using Fiji. The emission fingerprints of the BC particles inside the placental tissue sections and TPAF from the placental cells were collected under femtosecond pulsed illumination. Note, for this specific experiment, the gain and laser power were changed to avoid saturation of the emission signal in order to be able to observe the trend of the WL signal over all wavelengths. After spectral separation, the emitted signals ranging between 410–650 nm were collected at an interval of 9 nm using the QUASAR thirty-two channel GaASP spectral detector of the LSM 880 system. The resulting 1024 × 1024 lambda image with a pixel size of 0.104 µm was detected with a pixel dwell time of 2.05 µs. As a reference, the emission fingerprint of commercially available carbon black nanoparticles (US Research Nanomaterials, USA) was recorded using identical settings. Following femtosecond illumination, the temporal responses of the emitted signals originating from the BC particles in the placental tissue and from the placental cells were detected using the BiG.2 GaASP detector of the LSM 880 system. The detector was connected to an SPC 830 card (Becker and Hickl, Germany) that was synchronized to the pulse train of the MaiTai DeepSee laser. Recordings of 256 × 256 images with a pixel size of 0.346 µm were acquired using a pixel dwell time of 8.19 µs. The instrument response function was determined by detecting the response (IRF) of the laser pulse using potassium dihydrogen phosphate crystals under identical conditions. The IRF value was used in the analysis of all measurements for curve fitting. As a reference, the temporal response of commercially available carbon black particles was recorded employing the same settings. 
All time-correlated single photon counting measurements were captured and analyzed using the SPCM 9.80 and SPCImage 7.3 software (Becker and Hickl), respectively. Screening of placental tissue for BC load Both the intravariability and intervariability in BC loading of the placental biopsies were evaluated. The intravariability (within one biopsy) was examined by screening three regions of 10 × 10 images within five different sections taken in the middle of the examined biopsy ( n = 15 images per biopsy). On the other hand, the intervariability (between the four fetal biopsies) was studied by measuring the BC load in three regions of 10 × 10 images within five different sections taken in the middle of the examined biopsy ( n = 60 images per mother). The intervariability was solely assessed between the four fetal biopsies since there is already an existing variability between the fetal and maternal biopsies. To evaluate the BC loading in the placentae of 10 low and 10 high exposed mothers, one biopsy was examined. More specifically, biopsy number 2 was selected with the exception of the following cases: (i) no or too little tissue was available, or (ii) background signal of blood was too high. In the latter cases, biopsy number 3 was chosen. The BC load was measured in three regions within five different sections taken in the middle of the biopsy ( n = 15 images). To study the BC loading in the preterm placentae the available biopsy was screened by imaging three regions within five different sections taken in the middle of the tissue ( n = 15 images). Ultrastructural analysis Following fixation in 2% glutaraldehyde, the biopsies were gently rinsed and postfixed in 2% osmium tetroxide for 1 h. Subsequently, the biopsies were put through a dehydrating series of graded concentrations of acetone and impregnated overnight in a rotator with acetone:spurr (1:1) (Spurr Embedding Kit, Electron Microscopy Sciences). 
Next, the samples are placed into molds and fresh spurr solution is added followed by polymerization for 24–36 h at 70 °C. Ultra-thin sections (60 nm) were mounted on 0.7% formvar-coated copper grids and examined in a Philips EM 208 transmission electron microscope operated at 60 kV. Digital images were captured using a Morada camera system and analyzed using SIS analysis software (Germany). Statistical analysis All data are represented as means ± standard deviation and were analyzed using the commercially available software Graphpad (Graphpad Prism 6, Graphpad Software Inc., USA) and JMP (JMP Pro 12, SAS Institute Inc., USA). On the intravariability and intervariability data, a two-tailed analysis of variance (ANOVA) was performed followed by the Tukey posttest. To assess the relation between the BC exposure of mothers during pregnancy and accumulation of BC in placentae, the Pearson correlation coefficients were determined. We used the nonparametic Spearman’s Rank test to confirm the results. Reporting summary Further information on research design is available in the Nature Research Reporting Summary linked to this article. Data availability The data that support the findings of this study are not publicly available as they contain information that could compromise research participant privacy, but are available from the corresponding author (T.S.N.) upon reasonable request. The source data underlying Figs. 3 b, c, 4 , and 5 and Supplementary Figs 1 and 4a-c are provided as a Source Data file.
A new study suggests when a pregnant woman breathes in air pollution, it can travel beyond her lungs to the placenta that guards her fetus. Pollution composed of tiny particles from car exhaust, factory smokestacks and other sources is dangerous to everyone's health, and during pregnancy it's been linked to premature births and low birth weight. But scientists don't understand why, something that could affect care for women in highly polluted areas. One theory is that the particles lodge in mom's lungs and trigger potentially harmful inflammation. Tuesday, Belgian researchers reported another possibility, that any risk might be more direct. A novel scanning technique spotted a type of particle pollution—sootlike black carbon—on placentas donated by 28 new mothers, they reported in Nature Communications. The placenta nourishes a developing fetus and tries to block damaging substances in the mother's bloodstream. The Hasselt University team found the particles accumulated on the side of the placenta closest to the fetus, near where the umbilical cord emerges. That's not proof the soot actually crossed the placenta to reach the fetus—or that it's responsible for any ill effects, cautioned Dr. Yoel Sadovsky of the University of Pittsburgh Medical Center, a leading placenta expert who wasn't involved with the new research. And it's a small study. Still, "just finding it at the placenta is important," Sadovsky said. "The next question would be how much of these black carbon particles need to be there to cause damage." Scientists already had some clues from animal studies that particles could reach the placenta, but Tuesday's study is a first with human placentas. The Belgian researchers developed a way to scan placenta samples using ultra-short pulses from a laser that made the black carbon particles flash a bright white light, so they could be measured. 
The researchers included placentas from 10 mothers who lived in areas with high pollution and 10 others from low areas. The higher the exposure to pollution, the more particles the researchers counted in the placentas. "As the fetal organs are under full development, this might have some health risks," said Hasselt environment and public health specialist Tim Nawrot, the study's senior author. He is doing additional research to try to tell.
10.1038/s41467-019-11654-3
Earth
A new way to sense earthquakes could help improve early warning systems
Masaya Kimura et al, Earthquake-induced prompt gravity signals identified in dense array data in Japan, Earth, Planets and Space (2019). DOI: 10.1186/s40623-019-1006-x
http://dx.doi.org/10.1186/s40623-019-1006-x
https://phys.org/news/2019-03-earthquakes-early.html
Abstract Earthquake ruptures cause mass redistribution, which is expected to induce transient gravity perturbations simultaneously at all distances from the source before the arrival of P-waves. A recent research paper reported the detection of such prompt gravity signals from the 2011 Tohoku-Oki earthquake by comparing observed acceleration waveforms and model simulations. The 11 observed waveforms presented in that paper recorded in East Asia shared a similar trend above the background seismic noise and were in good agreement with the simulations. However, the signal detection was less quantitative because the significance of the observed signals was not discussed and the waveforms at other stations in the region were not shown. In this study, similar trends were not observed in most of the data recorded near the stations used in the aforementioned study, suggesting that the reported signals were only local noises. We thus took a different approach to identify the prompt signals. We optimized the multi-channel data recorded by superconducting gravimeters, broadband seismometers, and tiltmeters. Though no signal was identified in the single-trace records, the stacked trace of the broadband seismometer array data in Japan showed a clear signal above the reduced noise level. The signal amplitude was 0.25 nm/s 2 for an average distance of 987 km from the event hypocenter. This detection was confirmed with a statistical significance of 7\sigma 7\sigma , where \sigma \sigma is the standard deviation of the amplitude of the background noise. This result provided the first constraint on the amplitude of the observed prompt signals and may serve as a reference in the detection of prompt signals in future earthquakes. Introduction Compressional seismic waves radiating from an earthquake accompany density perturbations, which give rise to widespread transient gravity perturbations \delta \varvec{g} \delta \varvec{g} , even ahead of the wave front. 
Interest in earthquake-induced prompt gravity perturbations has increased in terms of both theoretical prediction and signal detection in data, given their potential for earthquake early warning (Harms et al. 2015; Harms 2016; Montagner et al. 2016; Heaton 2017; Vallée et al. 2017; Kimura 2018; Kimura and Kame 2019). In this paper, "prompt" denotes the time period between the event origin time and the P-wave arrival time. The study by Montagner et al. (2016) was the first to discuss prompt gravity signals in observed data. They searched for the signal from the 2011 Mw 9.0 Tohoku-Oki earthquake in the records of a superconducting gravimeter (SG) at Kamioka (approximately 510 km from the epicenter) and five nearby broadband seismometers of the Full Range Seismograph Network of Japan (F-net). Though they failed to identify a prompt signal with an amplitude obviously above the background noise level, they found that the 30-s average value immediately before the P-wave arrival was more prominent than the noise level, with a statistical significance greater than 99% (corresponding to approximately 3σ if the background noise has a normal distribution, where σ is the standard deviation of the noise). Based on this finding, they claimed the presence of a prompt gravity signal from the event. However, 99% significance seems considerably low for definite signal detection because it means that one in a hundred samples exceeds the reference level; this is too frequent to claim an anomaly in time series analysis. Heaton (2017) replied to Montagner et al. (2016) with an objection that their data analysis did not include the appropriate response of the Earth.
He pointed out that in the measurement of prompt gravity perturbations, the acceleration motion of the observational site ü has to be considered, because the gravimeter output (a)_z is affected by ü, i.e.,

(a)_z = (δg)_z − (ü)_z,

where (x)_z indicates the vertical component of a vector x, with upward being positive. He showed, using a simple spherical Earth model, that the Earth's motion due to prompt gravity perturbations mostly decreases the gravimeter's sensitivity. Vallée et al. (2017) reported the detection of prompt gravity signals from the 2011 event based on both data analysis and theoretical modeling. From the records of regional broadband seismic stations in the Japanese islands and the Asian continent, they selected 11 waveforms based on the study's criteria. Nine of these waveforms showed a consistent, visible downward trend starting from the origin time up to the respective P-wave arrival times (Figure 1 of Vallée et al. 2017). They then numerically simulated the prompt signals for the 11 stations considering the acceleration motion of the observational sites, i.e., a direct scenario based on Heaton (2017). To synthesize the sensor output (a)_z, they evaluated both the gravity perturbation δg and the ground acceleration ü directly generated by δg in a semi-infinite flat Earth model. The 11 pairs of observed and synthetic waveforms showed similarities to one another (Figure 3 in Vallée et al. 2017).
However, their signal detection was relatively less quantitative. In contrast to Montagner et al. (2016), they did not discuss the significance of the observed signals with respect to background noise. In addition, the 11 observational stations they used were only a small subset of the approximately 200 available stations.

Fig. 1 a Model prediction of the prompt gravity perturbation (δg^H)_z (vertical component, upward positive) of the 2011 Tohoku-Oki earthquake for Kamioka Observatory. We used the infinite homogeneous Earth model of Harms et al. (2015), and no filter was applied. Time 0 was set to the event origin time t_eq. The P-wave arrival time on the gravimetric record is 05:47:32.4 UTC (68.1 s after t_eq). b Distribution of the prompt gravity perturbation (δg^H)_z immediately before P-wave arrival at each location. The contour lines are drawn every 10 nm/s². The star, the letters K and M, the red triangles, and the small dots indicate the epicenter, Kamioka Observatory, Matsushiro Observatory, the 71 F-net stations, and the 706 tiltmeter stations, respectively

In this study, we search for prompt gravity signals from the 2011 event using a quantitative approach. Initially, we note that observed waveforms at other stations near those Vallée et al. (2017) presented barely showed a similar trend beyond noise ("Local records near the reported stations" section). Our analyses thus rely not on simulated waveforms but mostly on data, and we optimize multi-channel data recorded by different instruments ("Data" section). We first analyze SG data at two stations ("Superconducting gravimeters" section), but signal detection was unsuccessful.
Next, we analyze records of the dense arrays of F-net ("F-net broadband seismometers" section) and the High Sensitivity Seismograph Network Japan (Hi-net) ("Hi-net tiltmeters" section). Although most single-channel records did not show any signal beyond noise, waveform stacking successfully reduced the noise level and allowed identification of a prominent signal in the F-net data.

Results of data analyses

Local records near the reported stations

Vallée et al. (2017) selected 11 stations and showed the waveforms recorded at those stations. Their data processing (termed "procedure V" in this paper) and selection criteria are detailed in "Appendix 1." Because the presented waveforms showed a similar downward trend and amplitude over a wide range of hypocentral distances between 427 and 3044 km, the prompt signal waveforms of the 2011 event are not expected to vary significantly within a few hundred kilometers. This long-range spatial characteristic is also supported by the original model of Harms et al. (2015), who formulated the prompt gravity perturbation δg^H in an infinite homogeneous medium for an earthquake source, where the superscript H denotes the modeling by Harms et al. (2015). Figure 1 shows (δg^H)_z for the 2011 event with contours drawn every 10 nm/s², often termed 1 microgal in geodesy. The spatial extent of (δg^H)_z is a few thousand kilometers (Fig. 1b), as noted by Kimura (2018). We checked whether the reported downward trends were recorded at other stations near those Vallée et al. (2017) used. Among the 11 stations, Fukue (FUK) in Japan and Mudanjiang (MDJ) and Zhalaiteqi Badaerhuzhen (NE93) in China had other available stations within 100 km and were eligible for this purpose.
Figure 2a (modified from Figure 3 of Vallée et al. 2017) shows the observed and simulated waveforms at FUK for reference, and Fig. 2b shows the waveforms at the F-net stations near FUK. The hypocentral distances of FUK and the other 10 stations range from 1130 to 1390 km (Fig. 2c). The waveform at FUK (Fig. 2b) appears similar to that of Vallée et al. (2017) (Fig. 2a), as it shows a similar downward trend beyond the noise level with a similar amplitude. They are not identical to each other because of the different signal processing procedures of Vallée et al. (2017) (procedure V) and this study (termed "procedure K" in this paper). Details of procedure K and the difference between the two procedures are described in "Appendix 2."

Fig. 2 Acceleration waveforms at F-net observational stations before P-wave arrival from the 2011 Tohoku-Oki earthquake. The thick black vertical line indicates the event origin time t_eq. a Simulated (black) and observed (red) acceleration waveforms at FUK (modified from Fig. 3 of Vallée et al. 2017). The observed waveform was processed using procedure V. b Observed acceleration waveforms at FUK and the surrounding 10 stations processed using procedure K, which is perfectly causal. c Distribution map of the F-net observational stations near FUK

Fig. 3 Acceleration waveforms at observational stations in China before P-wave arrival from the 2011 Tohoku-Oki earthquake. The black vertical line indicates the event origin time, which was set to 0. The waveforms were processed using procedure K. a Observed acceleration waveforms at stations near MDJ: NE5E, NE6E, MDJ, NE7E, and NE6D. We plotted waveforms at MDJ for both the STS-1 and STS-2 seismometers. Vallée et al. (2017) used data recorded by the STS-1 type. b Observed acceleration waveforms at stations near NE93: NE94, NE87, NE93, NE92, and NEA3. c Distribution map of the observational stations.
The yellow star and red triangles indicate the epicenter and the stations, respectively

However, the other 10 waveforms shown in Fig. 2b do not generally depict a downward trend. Rather, they generally appear to be only noise, although Sefuri (SBR) does seem to show a slight downward trend. Namely, the stations near FUK barely showed the downward trend exhibited by the waveform at FUK. Figure 3 shows the records at the stations surrounding MDJ and NE93 processed using procedure K. Again, similar downward trends are not observed at the stations near MDJ or NE93, and it is difficult to identify a significant signal beyond noise in any single trace. Though the STS-1 broadband seismometer at MDJ shows the downward trend beyond noise, the other stations near MDJ, and the STS-2 broadband seismometer at MDJ, do not show such a signal. At NE93, neither the surrounding stations nor NE93 itself show the trend seen in Vallée et al. (2017). Ultimately, we did not see the downward trend except for a few outliers. This waveform comparison suggests that the downward trend at NE93 (Figure 1 of Vallée et al. 2017) was not a signal but an artifact of procedure V, and that the trends at FUK and MDJ were possibly just local site noise or affected by unknown local site responses.

Data

We analyzed three different types of data: gravity data from two SGs, ground velocity data from the F-net seismographic array, and ground tilt data from the Hi-net tiltmeter array. All 71 F-net stations are equipped with STS-1 or STS-2 broadband seismometers. A two-component borehole tiltmeter is installed at each of the 706 Hi-net stations. These instruments are listed in Table 1. The instrumental responses of the SG, STS-1, and STS-2 to an acceleration input are shown in Additional file 1: Fig. S1.

Table 1 Observation instruments

Superconducting gravimeters

We used SG data recorded at a 40-Hz sampling rate (GWR5 channel) (Imanishi 2001).
Figure 4 shows the recorded data at Kamioka (t_P = t_eq + 68.1 s, where t_eq and t_P denote the event origin time and the visually selected P-wave arrival time, respectively). The data include the sensor response. The background microseism dominated, with an amplitude of 100 nm/s². Obviously, no signal was identified. Figure 5 shows the noise power spectrum. In contrast to the 1-Hz sampling data (GGP1 channel) with a 0.061-Hz anti-aliasing filter used in the analysis of Montagner et al. (2016), our 40-Hz sampling data contain signal power in the frequency range above 0.061 Hz. After removing the trend component and multiplying by a cosine taper over the first and last 10% sections of the time series, we applied a band-pass filter (five-pole 0.001-Hz high-pass and five-pole 0.03-Hz low-pass causal Butterworth filters) to the 1-h data (05:00–06:00 UTC) to reduce the relatively large noise power above 0.05 Hz. The lower corner frequency of 0.001 Hz was set to remove the long-period tidal variation. After filtering, the noise was significantly reduced (Fig. 6a). During the prompt period t_eq < t < t_P, we do not see signals with amplitudes far beyond the noise level of the record prior to the event origin time.

Fig. 4 Original SG data at Kamioka with zero direct-current offset (at a 40-Hz sampling rate)

Fig. 5 Noise power spectrum of the Kamioka SG data. The time window is 40 min between 05:00 and 05:40 UTC before the 2011 Tohoku-Oki event

Fig.
6 0.001–0.03 Hz band-pass-filtered SG data at a Kamioka and b Matsushiro

For quantitative evaluation, we defined the noise level A_N in the time window [t_1, t_2] as

A_N = \sqrt{ \frac{1}{t_2 - t_1} \int_{t_1}^{t_2} \left[ x(t) - \mu \right]^2 \, \mathrm{d}t },

where x(t) is the time series data and \mu = \frac{1}{t_2 - t_1} \int_{t_1}^{t_2} x(t) \, \mathrm{d}t. For the Kamioka data, A_N decreased from 70 to 0.4 nm/s² after filtering (t_1 = 05:40 UTC and t_2 = t_eq). Figure 6b shows the data for Matsushiro (436 km from the hypocenter; t_P = t_eq + 57.3 s) after the same filtering process. Although A_N decreased from 80 to 0.7 nm/s² after filtering, we did not recognize clear signals during the prompt period. Note that the oscillation with a period of approximately 90 s is a parasitic mode of the instrument (Imanishi 2005, 2009).

F-net broadband seismometers

The frequency responses of the F-net STS-1 seismometers to velocity are flat between 0.003 and 10 Hz. Consequently, we did not deconvolve the sensor frequency responses from the recorded waveforms. The velocity data were converted to acceleration by taking the finite difference in the time domain. In the vertical component of the F-net data, the typical value of A_N was 1000 nm/s² (340 nm/s² was the lowest value), dominated by the microseism.
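The detrend, taper, causal band-pass filter, and RMS noise level A_N steps described above can be sketched in Python with NumPy/SciPy. This is only an illustrative sketch, not the study's actual code: the corner frequencies, pole counts, and sampling rate come from the text, while the function names and the synthetic microseism trace are our own.

```python
import numpy as np
from scipy.signal import butter, sosfilt, detrend
from scipy.signal.windows import tukey

def preprocess(x, fs, f_lo=0.001, f_hi=0.03, poles=5):
    """Detrend, cosine-taper the first/last 10%, then apply a causal
    Butterworth band-pass (f_lo high-pass, f_hi low-pass), as for the SG data.
    A causal filter (sosfilt, not the zero-phase sosfiltfilt) cannot smear
    energy to times before it actually occurred in the record."""
    x = detrend(x)
    x = x * tukey(len(x), alpha=0.2)  # alpha=0.2 tapers 10% at each end
    sos_hp = butter(poles, f_lo, btype="highpass", fs=fs, output="sos")
    sos_lp = butter(poles, f_hi, btype="lowpass", fs=fs, output="sos")
    return sosfilt(sos_lp, sosfilt(sos_hp, x))

def noise_level(x, fs, t1, t2):
    """RMS about the mean over the window [t1, t2] in seconds: the A_N above."""
    seg = x[int(t1 * fs):int(t2 * fs)]
    return np.sqrt(np.mean((seg - seg.mean()) ** 2))

# Example: a 0.2-Hz microseism-like oscillation lies outside the pass band
fs = 40.0                                  # SG sampling rate (Hz)
t = np.arange(0.0, 3600.0, 1.0 / fs)       # one hour of data
raw = 100.0 * np.sin(2 * np.pi * 0.2 * t)  # 100 nm/s^2 microseism
filt = preprocess(raw, fs)
# noise_level drops by orders of magnitude after the band-pass filtering
```

The causal-filter choice matters here: an acausal (zero-phase) filter applied to a record containing the P-wave could leak energy into the prompt period and mimic a pre-arrival trend, which is the kind of artifact discussed for procedure V above.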
To reduce the microseismic noise, we applied the same filters (0.002-Hz two-pole high-pass and 0.03-Hz six-pole low-pass causal Butterworth filters) employed by Vallée et al. (2017) to all available data from 70 of the 71 stations (omitting one because of poor recording quality). After filtering, the microseism noise was successfully reduced to as low as 0.2 nm/s² (Additional file 1: Fig. S2). However, we did not recognize clear signals. Only at two stations, FUK and SBR, could we find a downward trend before P-wave arrival. Next, a multi-station signal-stacking method was applied to further enhance the signals of interest. After the band-pass filtering, we selected 27 traces out of the 70 based on their noise levels and stacked them aligned with t_P at each station, because we expected the maximum signal amplitude at the end of the prompt period (Fig. 1a). Figure 7a shows the stacked trace, and Additional file 1: Fig. S3a shows an enlarged view of the trace. The noise of the stacked trace decreased significantly, and the trace successfully showed a significant signal with an amplitude of 0.25 nm/s². Our selection criterion and polarity reversal correction for the stacking are described in "Appendix 3," and the 27 stations are listed in Additional file 2: Table S1. The hypocentral distances of the 27 stations are between 505 and 1421 km (the average is 987 km), and the minimum and maximum t_P are 63 and 176 s after t_eq, respectively.

Fig. 7 Stacked waveforms of the filtered data for 30 min before the P-wave arrivals. Time 0 was set to the stacking reference time t_P. a Plot for F-net broadband seismometer data. b Plot for Hi-net tiltmeter data

To quantify the signal detection in terms of statistical significance, we investigated the distributions of the background noise and the enhanced gravity signal.
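The stack-and-quantify procedure described above can be sketched as follows. This is a hedged illustration, not the study's code: the helper names and synthetic traces are ours, with the numbers chosen to loosely match the text (27 traces, single-trace noise of a few tenths of nm/s², a common downward signal of about 0.25 nm/s² just before t_P).

```python
import numpy as np

def stack_aligned(traces, fs, t_p_list, window):
    """Average traces over the `window` seconds preceding each trace's own
    P-wave arrival time t_p (seconds from trace start), i.e., align on t_P."""
    n = int(window * fs)
    segs = [x[int(t_p * fs) - n:int(t_p * fs)] for x, t_p in zip(traces, t_p_list)]
    return np.mean(segs, axis=0)

# Synthetic demo: a common downward ramp in the last 60 s before t_P,
# buried in independent station noise
rng = np.random.default_rng(0)
fs, window, n_sta = 1.0, 1800.0, 27        # 30-min window, 27 stations
n = int(window * fs)
signal = np.zeros(n)
signal[-60:] = np.linspace(0.0, -0.25, 60) # reaches -0.25 nm/s^2 at t_P
traces, t_ps = [], []
for _ in range(n_sta):
    t_p = rng.uniform(2000.0, 3000.0)      # station-specific arrival time (s)
    m = int(t_p * fs)
    x = rng.normal(0.0, 0.18, m)           # single-trace noise
    x[m - n:m] += signal                   # signal occupies the prompt window
    traces.append(x)
    t_ps.append(t_p)

stacked = stack_aligned(traces, fs, t_ps, window)
sigma = stacked[: n - 180].std()           # noise sigma, excluding the last 3 min
signif = abs(stacked[-1]) / sigma          # significance at t = t_P
# Incoherent noise drops by roughly 1/sqrt(27), while the common signal
# survives, so `signif` comes out well above the 3-sigma level here.
```

The design point is that stacking N traces aligned on a common reference time suppresses incoherent noise by about 1/√N while leaving a coherent signal untouched, which is why a 0.25 nm/s² signal invisible in any single trace becomes prominent in the stack.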
Figure 8 shows the histograms of the noise section (between −30 and −3 min before the aligned t_P) and the signal section of the stacked trace. Here, we defined the latter half of the time period −1 min (≈ minimum t_P − t_eq) < t < 0, i.e., −30 s < t < 0, as the signal section, because all 27 waveforms were expected to contain a signal with increasing amplitude toward the end of this time period, as shown in Fig. 1a. The noise histogram was approximated by a normal distribution with a standard deviation σ. In our analysis, σ was given by A_N (0.035 nm/s²). On the other hand, before the aligned t_P, the amplitude of the stacked trace generally increased with time: the signal level exceeded 3σ at t = −20 s and 5σ at t = −6 s before finally reaching 7σ at t = 0 (Fig. 7a), i.e., the signal detection was verified with a statistical significance of 7σ.

Fig. 8 Amplitude histograms of the background noise (blue, left vertical axis) and the prompt gravity signal before P-wave arrival (red, right vertical axis). The noise histogram was fitted by a normal distribution with a standard deviation σ = 0.035 nm/s² (black curve)

Hi-net tiltmeters

We also analyzed the data recorded by the Hi-net tiltmeters, which work as horizontal accelerometers. For our analysis, the tilt data in radians were converted into horizontal acceleration in m/s² by multiplying by the gravity acceleration (9.8 m/s²).
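The tilt-to-acceleration conversion just described follows the small-angle relation a ≈ g·θ. A minimal sketch, with the function name and example numbers being our own illustration:

```python
import numpy as np

G = 9.8  # gravity acceleration (m/s^2), as used in the text

def tilt_to_accel(tilt_rad):
    """Convert a tilt record in radians to the equivalent horizontal
    acceleration in m/s^2 via the small-angle relation a ~= g * theta."""
    return G * np.asarray(tilt_rad)

# Example: 1e-10 rad of tilt corresponds to 0.98 nm/s^2 of
# horizontal acceleration, the order of the signals discussed here
tilt = np.array([0.0, 1e-10, 2e-10])       # rad
accel_nm = tilt_to_accel(tilt) * 1e9       # m/s^2 -> nm/s^2
```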
Because the sensor response is not known in the seismic frequency band, we could not deconvolve it from the data; however, tiltmeter records have been used as seismic records by comparing them to nearby broadband seismic records (e.g., within a bandwidth of 0.02–0.16 Hz) (Tonegawa et al. 2006). Because tiltmeters are designed to respond to static changes, the recordings are also reliable below 0.02 Hz. Compared to the F-net data, the Hi-net tiltmeter data were generally noisy. The typical value of A_N was 2000 nm/s². After removing the trend component and applying the same band-pass filter as employed by Vallée et al. (2017), we again failed to identify a significant signal in any single channel. We then aligned 553 data traces out of 1412 (two horizontal components from each station) with respect to the P-wave arrival times for stacking. Our selection criterion is also described in "Appendix 3." The hypocentral distances of the 553 traces are between 264 and 1349 km (the average is 830 km). Figure 7b and Additional file 1: Fig. S3b show the stacked trace and its enlarged view, respectively. In contrast to the F-net results, no prompt signal was identified. The noise level A_N was 0.08 nm/s². The predicted signal amplitude of the stacked trace based on the infinite homogeneous Earth model of Harms et al. (2015) was 2 nm/s² (Kimura 2018), where the theoretical time series were synthesized at each station and then filtered and stacked in alignment with the P-wave arrival times in the same manner as the observed data. In our analysis, such a large signal was confirmed not to exist in the data, and the upper signal level was constrained to 0.15 nm/s² with 95% statistical significance (approximately 2σ) in the horizontal component.

Discussion

Difference from previous SG study

The failure to detect prompt gravity signals in the SG data is consistent with the result of Montagner et al.
(2016), who also analyzed the Kamioka SG and five nearby F-net stations and could not visually detect a clear signal in the time domain. On the other hand, at the Global Seismographic Network (GSN) station Matsushiro (MAJO), a signal detection was reported by Vallée et al. (2017). GSN MAJO and the Matsushiro SG are installed in the same tunnel, and the Kamioka SG and the five nearby F-net stations in Montagner et al. (2016) are located at nearly the same epicentral distance. The results of Montagner et al. (2016) and Vallée et al. (2017) therefore seem inconsistent with one another. The signal amplitude at GSN MAJO shown in Vallée et al. (2017) may have been mere noise or affected by a local site response.

Significance of our stacked trace for theoretical modeling

The F-net stacked trace (Fig. 7a) greatly improved the statistical significance of the signal detection. It provides the first observational constraint on prompt gravity signals and can serve as a reference to validate future theoretical models. Once a model is developed that explains the sensor output in gravimetry and a reliable value of δg is constrained, related physical quantities such as the gravity gradient and spatial strain are constrained as well. As Heaton (2017) noted, the ground acceleration ü affects the measurement of the prompt gravity perturbation δg. Therefore, in the modeling of prompt gravity signals, not only δg but also ü before P-wave arrival has to be calculated. Vallée et al.
(2017) analytically showed that in an infinite homogeneous non-self-gravitating medium, the induced ü directly generated by δg becomes ü = δg, implying full cancelation of δg by ü. They then numerically investigated δg and the induced site motion ü under the effects of a free surface in a layered non-self-gravitating half-space and evaluated the sensor output −(δg)_z + (ü)_z. Their simulated waveforms at the 11 stations showed the same downward monotonic trend and a similar amplitude of approximately 1 nm/s² within the wide range of 427–3044 km from the hypocenter. However, this simulated signal amplitude of 1 nm/s² is significantly larger than our identified amplitude of 0.25 nm/s² in the F-net stacked trace, suggesting that the simulation of Vallée et al. (2017) overestimated the sensor outputs. Our stacked waveform and Vallée et al.'s single-channel waveforms cannot be compared directly, but their amplitudes can be. Because all 27 stations used for the stacking lie in the region where Vallée et al.'s simulated waveforms showed the same trend and an amplitude of 1 nm/s², if similar signals had been recorded in the 27 traces, the amplitude of the stacked waveform would also be 1 nm/s². The identified signal level of 0.25 nm/s² is, however, one-fourth of the expected value. Notably, the polarity of our stacked trace shows a negative trend toward the P-wave arrival, consistent with the observation and simulation of Vallée et al. (2017).
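The cancelation argument can be illustrated numerically: a gravimeter riding on ground that fully follows the gravity perturbation (ü = δg) outputs nothing, and only the unbalanced fraction of the ground response is measured. The time function and the 75% response factor below are purely illustrative assumptions, not results from any of the cited models.

```python
import numpy as np

# Illustrative prompt-period gravity perturbation delta_g(t) that grows
# toward P-wave arrival (nm/s^2); the cubic shape is an assumption
t = np.linspace(0.0, 60.0, 601)
delta_g = -1.0 * (t / 60.0) ** 3

def sensor_output(delta_g, f):
    """Gravimeter output (delta_g)_z - (u_ddot)_z when the induced site
    acceleration follows a fraction f of the gravity perturbation."""
    u_ddot = f * delta_g
    return delta_g - u_ddot

full = sensor_output(delta_g, 1.0)     # infinite homogeneous medium: u_ddot = delta_g
partial = sensor_output(delta_g, 0.75) # hypothetical 75% ground response
# full cancelation leaves zero output; a 75% response leaves one-fourth
# of the peak perturbation, i.e., 0.25 nm/s^2 here
```

Under this toy picture, an observed amplitude one-fourth of a simulated one corresponds to the ground canceling a larger share of δg than the simulation assumed; whether that is the actual explanation is a question for the fully coupled models discussed next.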
A prospective candidate for a better theoretical model is a normal-mode model of a spherical, self-gravitating, realistic Earth that addresses the fully coupled equations between elastic deformation and gravity. Very recently, Juhel et al. (2019) conducted theoretical modeling using such a normal-mode approach to compute prompt gravity signals. However, similar to Vallée et al. (2017), the fully coupled problem was not solved in that study. They first considered the prompt gravity perturbation δg induced by the earthquake elastic deformation and then considered the prompt gravity effect on the elastic deformation, which they termed a "two-step approach." Their simulation results were quite similar to those of Vallée et al. (2017) and seemed to overestimate the sensor output as well. Although a fully coupled model requires an enormous number of normal-mode summations to precisely evaluate the prompt gravity perturbations, such a numerical assessment should be conducted in the future.

Possible reasons for no finding with tiltmeters

The lack of signal identification in the stacked Hi-net trace (Fig. 7b) can be attributed to the large amplitude of the noise spectrum in the frequency band of the applied band-pass filter (in contrast to the SG and the vertical-component F-net data). In this band, the typical noise level is more than 10 times that of the quiet SGs and F-net. Another reason may be unknown effects of the free surface on the induced ü, which may cancel the horizontal component of δg more effectively than its vertical component.
Toward future detection of earthquake-induced prompt gravity signals using a gravity gradient sensor

We have shown that the identified prompt gravity signals were very small (approximately 0.25 nm/s² for the average distance of 987 km); this can be attributed to the cancelation of gravity measurements by the acceleration motion of the ground, and it suggests that gravimetry is not the best approach for detecting prompt gravity perturbations. Gravity gradient measurement provides an alternative method to detect prompt signals from earthquakes (Harms et al. 2015; Juhel et al. 2018). A spatially inhomogeneous gravity field induces tidal deformation of an object, or spatial strain, which is observable even if the observer moves with the same acceleration as the prompt gravity perturbation. Detecting very small perturbations in the gravity gradient has been a challenge in the identification of gravitational waves from space. Abbott et al. (2016) observed gravitational waves using laser interferometers in a high-frequency range from tens to hundreds of Hz. New state-of-the-art instruments, such as torsion-bar antennas (TOBA) (Ando et al. 2010; Shoda et al. 2014), are being developed. Such instruments are intended to observe spatial strain through the tidal deformation of two crossing bars. The existing prototype TOBA attained a 10⁻⁸ s⁻² sensitivity within a low-frequency range of 0.01–1 Hz (Shoda et al. 2014). The theoretical gravito-gradiograms and the prompt signal intensity map for the 2011 Tohoku-Oki earthquake are shown in Fig. 9. The expected signal level was 10⁻¹³ s⁻². Though this value is five orders of magnitude below the attained sensitivity, the next-generation TOBA is expected to attain sufficient sensitivity to detect prompt signals. Prompt earthquake detection will significantly benefit from such ultra-sensitive sensors.

Fig. 9 a Theoretical six-component gravito-gradiograms of the 2011 Tohoku-Oki earthquake synthesized for Kamioka Observatory.
Time 0 was set to the event origin time t_eq. b Distribution of prompt gravity gradient changes immediately before P-wave arrival at each location (upper left: ḧ_11; upper center: ḧ_22; upper right: ḧ_33; lower left: ḧ_12; lower center: ḧ_13; lower right: ḧ_23), where ḧ_ij denotes the ij-th component of the gravity gradient tensor (see "Appendix 4"). In these figures, the x_1-, x_2-, and x_3-axes correspond to the east, north, and upward directions, respectively. The star and the letter K indicate the epicenter and Kamioka Observatory, respectively. The contour lines are drawn every 2 × 10⁻¹³ s⁻²

In "Appendix 4," we present an explicit expression for theoretical gravito-gradiograms, the waveforms of gravity gradients. We extended the expression of Harms et al. (2015), who used a seismic dislocation source, to a general source described by a moment tensor. This extension will contribute to the interpretation of future observational records of various event mechanisms.

Conclusions

We searched for prompt gravity signals from the 2011 Mw 9.0 Tohoku-Oki earthquake in seismic network data. Though nearly all single-channel waveforms showed no signal beyond the noise level, except for several outliers, the stacked trace of the F-net broadband records showed a clear signal in the vertical component. The identified signal level was 0.25 nm/s² for the average distance of 987 km; the detection was verified at a statistical significance of 7σ relative to the background noise.
In addition, analysis of the Hi-net tiltmeters constrained the upper limit of the signal in the horizontal components to 0.15 nm/s² at 95% significance. The stacked F-net trace is the first observational constraint on earthquake-induced prompt gravity signals and will serve as a reference to validate future theoretical models. Measurement of gravity gradients is a more promising method for the prompt detection of future earthquakes. State-of-the-art instruments, such as torsion-bar antennas, are being developed to detect strain accelerations smaller than 10⁻¹³ s⁻².

Abbreviations

FUK: Fukue; F-net: Full Range Seismograph Network of Japan; GSN: Global Seismographic Network; Hi-net: High Sensitivity Seismograph Network Japan; IRIS: Incorporated Research Institutions for Seismology; MAJO: Matsushiro; MDJ: Mudanjiang; NE93: Zhalaiteqi Badaerhuzhen; SAC: Seismic Analysis Code; SBR: Sefuri; SG: superconducting gravimeter; TOBA: torsion bar antenna
Every year, earthquakes worldwide claim hundreds or even thousands of lives. Forewarning allows people to head for safety, and a matter of seconds could spell the difference between life and death. UTokyo researchers have demonstrated a new earthquake detection method: their technique exploits subtle telltale gravitational signals traveling ahead of the tremors. Future research could boost early warning systems. The shock of the 2011 Tohoku earthquake in eastern Japan still resonates for many. It caused unimaginable devastation, but also generated vast amounts of seismic and other kinds of data. Years later, researchers still mine this data to improve models and find novel ways to use it, which could help people in the future. A team of researchers from the University of Tokyo's Earthquake Research Institute (ERI) found something in this data that could advance earthquake prediction research and might someday even save lives. It all started when ERI Associate Professor Shingo Watada read an interesting physics paper on an unrelated topic by J. Harms from Istituto Nazionale di Fisica Nucleare in Italy. The paper suggests gravimeters—sensors that measure the strength of local gravity—could theoretically detect earthquakes. "This got me thinking," said Watada. "If we have enough seismic and gravitational data from the time and place that a big earthquake hits, we could learn to detect earthquakes with gravimeters as well as seismometers. This could be an important tool for future research of seismic phenomena."

Contour maps depict changes in gravity gradient immediately before the earthquake hits. The epicenter of the 2011 Tohoku earthquake is marked by (✩). Credit: ©2019 Kimura Masaya

The idea works like this: earthquakes occur when a point along the edge of a tectonic plate comprising the Earth's surface makes a sudden movement. This generates seismic waves that radiate from that point at six to eight kilometers per second.
These waves transmit energy through the Earth and rapidly alter the density of the subsurface material they pass through. Dense material imparts a slightly greater gravitational attraction than less dense material. As gravity propagates at light speed, sensitive gravimeters can pick up these changes in density ahead of the seismic waves' arrival. "This is the first time anyone has shown definitive earthquake signals with such a method. Others have investigated the idea, yet not found reliable signals," said ERI postgraduate Masaya Kimura. "Our approach is unique, as we examined a broader range of sensors active during the 2011 earthquake. And we used special processing methods to isolate quiet gravitational signals from the noisy data." Japan is very seismically active, so it's no surprise there are extensive networks of seismic instruments on land and at sea in the region. The researchers used a range of seismic data from these, as well as superconducting gravimeters (SGs) in Kamioka, Gifu Prefecture, and Matsushiro, Nagano Prefecture, in central Japan.
A TOBA with its door open to reveal the cryogenically cooled sensor platform inside. Credit: ©2019 Ando Masaki
The signal analysis they performed was extremely reliable, scoring what scientists term 7-sigma accuracy, meaning there is only a one-in-a-trillion chance that a result is incorrect. The researchers say this proves the concept and will be useful in the calibration of future instruments built specifically to detect earthquakes. Associate Professor Masaki Ando from the Department of Physics invented a novel kind of gravimeter, the torsion bar antenna (TOBA), which aims to be the first such instrument. "SGs and seismometers are not ideal, as the sensors within them move together with the instrument, which almost cancels subtle signals from earthquakes," explained ERI Associate Professor Nobuki Kame. "This is known as Einstein's elevator, or the equivalence principle. However, the TOBA will overcome this problem. 
It senses changes in gravity gradient despite motion. It was originally designed to detect gravitational waves from the Big Bang, like 'earthquakes' in space, but our purpose is more down to Earth." The team dreams of a network of TOBAs distributed around seismically active regions, an early warning system that could alert people 10 seconds before the first ground-shaking waves arrive from an epicenter 100 kilometers away. Many earthquake deaths occur as people are caught off-guard inside buildings that collapse on them. Imagine the difference 10 seconds could make. This will take time, but the researchers are continually refining their models to improve the accuracy of the method for eventual use in the field.
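The two headline numbers in this article can be sanity-checked with a few lines of arithmetic. The sketch below is illustrative only (not from the study): it computes the lead time a light-speed gravity signal gains over 6–8 km/s seismic waves at 100 km, and the Gaussian tail probability behind the "7-sigma, one-in-a-trillion" claim.

```python
import math

SEISMIC_SPEED_KM_S = 8.0           # upper end of the 6-8 km/s quoted in the article
SPEED_OF_LIGHT_KM_S = 299_792.458  # gravity perturbations propagate at light speed

def warning_lead_time(distance_km, seismic_speed_km_s=SEISMIC_SPEED_KM_S):
    """Seconds by which a light-speed gravity signal precedes the seismic waves."""
    return distance_km / seismic_speed_km_s - distance_km / SPEED_OF_LIGHT_KM_S

def gaussian_tail(n_sigma):
    """One-sided probability of a Gaussian noise fluctuation exceeding n_sigma."""
    return math.erfc(n_sigma / math.sqrt(2)) / 2

# At 100 km from the epicenter the gravity signal leads by roughly 12-17 s,
# consistent with the ~10 s of warning quoted once detection time is allowed for.
for v in (6.0, 8.0):
    print(f"{v} km/s seismic waves -> {warning_lead_time(100.0, v):.1f} s lead time")

# A 7-sigma detection corresponds to odds of order one in a trillion.
print(f"7-sigma false-alarm probability: {gaussian_tail(7):.2e}")
```

The "one-in-a-trillion" phrasing in the article matches the one-sided 7-sigma tail probability of about 1.3e-12.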
10.1186/s40623-019-1006-x
Biology
Research team introduces new technology for analysis of protein activity in cells
Doroteya Raykova et al, A method for Boolean analysis of protein interactions at a molecular level, Nature Communications (2022). DOI: 10.1038/s41467-022-32395-w Journal information: Nature Communications
https://dx.doi.org/10.1038/s41467-022-32395-w
https://phys.org/news/2022-08-team-technology-analysis-protein-cells.html
Abstract Determining the levels of protein–protein interactions is essential for the analysis of signaling within the cell, characterization of mutation effects, protein function and activation in health and disease, among others. Herein, we describe MolBoolean – a method to detect interactions between endogenous proteins in various subcellular compartments, utilizing antibody-DNA conjugates for identification and signal amplification. In contrast to proximity ligation assays, MolBoolean simultaneously indicates the relative abundances of protein A and B not interacting with each other, as well as the pool of A and B proteins that are proximal enough to be considered an AB complex. MolBoolean is applicable both in fixed cells and tissue sections. The specific and quantifiable data that the method generates provide opportunities for both diagnostic use and medical research. Introduction Proteomics-based approaches have proven essential in various research settings for purposes such as detection of markers in cancer diagnostics, understanding fundamental research questions like signal transduction mechanisms, regulation of gene expression and mutation effects, identification of vaccine targets, elucidation of the mechanisms of drug action, etc. Over the years, a plethora of such methods to suit the complexity and diversity of research questions has been developed. Several of them are based on genetic constructs, where candidate proteins are fused with reporter molecules that upon interaction reconstitute a functional reporter (e.g., yeast two-hybrid 1 , mammalian membrane two-hybrid 2 , and bimolecular fluorescence complementation 3 ). Alternatively, Förster resonance energy transfer (FRET) can be used to determine proximal binding of fluorophores, with a concomitant change in emission spectra/lifetime 4 . 
More specifically, FRET is based on the transfer of energy between light-sensitive molecules—a donor and an acceptor, which has an absorption spectrum overlapping with the emission spectrum of the donor. The efficiency of the resonance energy transfer is strongly dependent on the distance between the fluorophores 5 . While FRET is a sensitive technique suitable for determining intermolecular proximity in the range of 1–10 nm 6 , among its limitations are the low signal-to-noise ratio 7 and the necessity to fuse the target proteins to the acceptor/donor, which makes the method unfit for clinical use. An additional consideration to keep in mind is that the distance between fluorophores is not necessarily identical to that of the target proteins. To determine interactions between native proteins, most methods rely on antibodies conjugated to functional groups, for example, antibody-based FRET 8 , in situ proximity ligation assay (in situ PLA) 9 , 10 , or proximity-dependent initiation of hybridization chain reaction (proxHCR) 11 . Both in situ PLA and proxHCR rely on dual-target recognition with secondary antibodies conjugated to oligonucleotides (so-called proximity probes), and utilize DNA as a reporter of proximity events, which allows for powerful signal amplification and improved signal-to-noise ratio over traditional FRET. It is important to emphasize that what all of the above-mentioned methods detect is proximity between proteins. For in situ PLA, the proximity threshold is determined by the antibody size and the oligonucleotide length of the probes. The hybridization of a pair of circularization oligos to the probes, resulting in the creation of a circular ligation product, is only possible when the attachment points of the oligonucleotide components of the probes are located within FRET range (below 10 nm) 6 . 
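The steep distance dependence invoked above, for both FRET itself and the FRET-range proximity threshold of in situ PLA, follows the standard Förster relation E = 1/(1 + (r/R0)^6). A minimal sketch (textbook physics, not from this paper; the Förster radius R0 is pair-specific and the 5 nm value here is an assumption):

```python
# Standard Foerster relation: transfer efficiency falls off with the sixth
# power of the donor-acceptor separation r relative to the Foerster radius R0.
def fret_efficiency(r_nm: float, r0_nm: float = 5.0) -> float:
    """E = 1 / (1 + (r/R0)^6); R0 ~ 2-7 nm for typical fluorophore pairs."""
    return 1.0 / (1.0 + (r_nm / r0_nm) ** 6)

# At r = R0 the efficiency is exactly 50%; by r = 2*R0 it has collapsed to
# ~1.5%, which is why FRET only reports proximity in the 1-10 nm range.
for r in (2.5, 5.0, 10.0):
    print(f"r = {r:4.1f} nm -> E = {fret_efficiency(r):.3f}")
```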
When primary antibodies are conjugated to the oligonucleotides (i.e., in the case of primary probes), the maximal theoretical distance between targeted epitopes is 30 nm 9 , while for secondary proximity probes it is estimated to be 40 nm. However, highly expressed proteins may be localized very close to each other—less than 40 nm apart—even if they do not interact. In order to confidently interpret data generated with such methods, it is crucial to obtain information not only on the number of proximity events, but also on the amounts of free proteins involved that can be used to normalize data. To be able to detect both the proteins in complex and the pool of non-interacting proteins, we developed a method—MolBoolean—which is based on the Boolean operators NOT and AND on a molecular level. It reports the amounts of protein A and protein B that do not participate in an interaction with each other (NOT), while at the same time visualizing the pool of A and B proteins that are proximal enough to be considered an AB complex (AND). In this work, we first demonstrate the utilization of MolBoolean for specific dual detection of single proteins, using a pair of antibodies targeting one protein. To establish the applicability of MolBoolean for simultaneous detection and quantification of free proteins and proteins in complex, we then investigate multiple established protein–protein interactions described in the literature. In order to highlight our method's versatility, we perform stainings against proteins localized in various cell compartments, as well as observe changes in free and complex-bound states upon ligand stimulation, siRNA silencing, and other treatment conditions versus no-treatment conditions. Further, we validate the use of MolBoolean not only in fixed cells, but also in different types of formalin-fixed paraffin-embedded (FFPE) tissue sections. 
In parallel to the MolBoolean analyses, we perform well-established classic techniques not only to validate our findings, but also to provide grounds to compare and contrast the merits of MolBoolean with those of in situ PLA (visualizing protein interactions) and immunofluorescence (IF, suitable also for visualization of free proteins and compartmentalization). We herein demonstrate that MolBoolean provides information on protein interaction in a manner similar to in situ PLA (i.e., by detecting discrete rolling circle amplification products (RCPs)), but also on the relative quantities of individual proteins, which allows for quantification of the RCPs in each category on a single cell level. Results Principle of the MolBoolean method Similar to in situ PLA, MolBoolean, too, relies on the use of proximity probes and rolling circle amplification (RCA) as means for generating and amplifying signal. However, MolBoolean uses a preformed DNA circle as information receiver that would later indicate whether one or two proteins have been detected. At the basis of MolBoolean, like other immunoassays, is the specific binding of a pair of antibodies to their respective protein targets, and the subsequent recognition of each primary antibody by a proximity probe. The MolBoolean proximity probe is essentially a secondary antibody conjugated to a DNA oligonucleotide, termed “arm”, which is complementary to a specific region in the aforementioned circle. Whenever the information receiver circle and the complementary region in a proximity probe hybridize (Fig. 1a ), double-stranded DNA is formed that can be recognized by a nickase. A key feature of this enzyme is its ability to recognize double-stranded DNA motifs and cut just one of the strands in a defined position. Therefore, once the recognition sequence is formed, the nickase creates a nick in the circle, but not in the proximity probe (Fig. 1a, b , nick position indicated by cyan arrowhead). 
The circle is at this point interrupted by either one or two nicks, depending on whether one or two proximity probes have bound to their complementary regions in it. Next, one or two identifier “tag” oligonucleotides specific for their respective proximity probe get incorporated in the DNA circle by virtue of their complementarity to a loop-and-hairpin region in the probe (Fig. 1c ), and the circle is sealed whole by ligation (Fig. 1d ). Consequently, the re-formed circle now contains information on whether it has interacted with one or two proximity probes. The circle then gets amplified via RCA, forming long concatemeric DNA products (Fig. 1e ). These RCPs are then hybridized with fluorophore-labeled tag-specific detection oligonucleotides to differentially visualize the identities of the incorporated tags (Fig. 1f ). Single-labeled RCPs represent free proteins, while dual-stained ones represent interactions. Fig. 1: Schematic representation of the MolBoolean principle for detection of interacting and free proteins A and B. a After binding their respective target proteins A and B, proximity probes A (black and magenta) and B (black and green) hybridize to the circle. Arrows signify oligonucleotide polarity. b The circle gets enzymatically nicked (cyan arrowhead indicates nicking position). c The circle gets invaded by reporter tags (tag A in magenta, tag B in green). d Enzymatic ligation of the reporter tags to the circle follows. e Rolling circle amplification (RCA) creates long concatemeric products (RCPs). f RCPs are detected via fluorescently labeled tag-specific detection oligonucleotides. Full size image Detailed information on how all oligonucleotides were designed and the significance of all their functional regions for MolBoolean is available in Supplementary Notes , subsection Oligonucleotide Design. 
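The NOT/AND readout described above amounts to a simple classification of each rolling circle product (RCP) by which identifier tags it carries. A hypothetical sketch of that logic (the data layout and names are assumptions for illustration, not the paper's actual image-analysis pipeline):

```python
# Each detected RCP is reduced to a (has_tag_A, has_tag_B) pair from the
# fluorophore-labeled detection oligonucleotides, then classified as
# free A (A NOT B), free B (B NOT A), or AB complex (A AND B).
from collections import Counter
from typing import Iterable, Tuple

def classify_rcps(rcps: Iterable[Tuple[bool, bool]]) -> Counter:
    counts = Counter()
    for has_a, has_b in rcps:
        if has_a and has_b:
            counts["AB complex"] += 1   # AND: dual-labeled RCP
        elif has_a:
            counts["free A"] += 1       # NOT: single-labeled, tag A only
        elif has_b:
            counts["free B"] += 1       # NOT: single-labeled, tag B only
    return counts

signals = [(True, True), (True, False), (True, True), (False, True)]
print(classify_rcps(signals))
```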
In solution testing of the MolBoolean specificity To validate the specificity of the enzymatic steps in the MolBoolean method, we performed in solution tests to ensure that hybridization of the arms to the circle provided a template for the nickase, and that the tag oligonucleotides were specifically incorporated in the nicked DNA circle (Supplementary Fig. 1 ). The circle was only nicked when an arm oligonucleotide was added, showing that the nickase requires a double-stranded DNA substrate (Supplementary Fig. 1 , wells 6–9). Tags were only incorporated where the cognate arm was hybridized to the circle, which demonstrates that tag incorporation is dependent on the identity of the proximity probes (Supplementary Fig. 1 , wells 10–14). In contrast to the in situ protocol (see Methods), where rigorous washes were used to remove all enzymes before the next step, the experiments in solution included heat inactivation steps instead. Heating leads to partial or complete denaturation of the double-stranded oligonucleotide complexes. Even though they can re-form after the sample cools, this lowers the efficiency of hybridization compared to in situ conditions. Therefore, the in solution test was used to validate specificity rather than efficiency. Sensitive single protein detection MolBoolean is designed to detect free and bound proteins alike. To test the method's performance in conditions in which all probes were bound in proximity (provided the specificity of all antibodies used is 100%), we co-stained cells with two antibodies against distinct epitopes of a protein. The principle is analogous to how ELISA, PLA and proximity extension assay (PEA) achieve their high selectivity – via dual recognition of a single protein by two pairs of antibodies – and the first bottleneck is the same: the specificity of the primary antibodies used. 
This assay demonstrates how MolBoolean can be applied for antibody validation, showing whether two antibodies against the same target protein are equally good at binding it, and the extent to which they cross-react with other proteins. Figure 2 demonstrates MolBoolean staining with two antibodies against β-catenin raised in different species with a high degree of overlap in the honeycomb pattern typical of β-catenin 12 , and compares the results with in situ PLA and IF performed with the same primary antibodies. Omitting controls in which either one of the antibodies was excluded are shown both for MolBoolean and for in situ PLA. Fig. 2: Single protein targeting and antibody validation. a MolBoolean staining and quantification with two antibodies, one raised in mouse (M, magenta), and the other raised in rabbit (R, green), against distinct epitopes of β-catenin in MCF7 cells. Dual staining is shown in white and Hoechst33342 staining of nuclei is shown in blue. b MolBoolean technical controls, in which either one of the primary antibodies was omitted from the reaction mix. c In situ PLA colocalization staining, omitting controls and quantifications with the same pair of anti-β-catenin antibodies in MCF7 cells. In situ PLA signals are shown in magenta and nuclei in blue. d Immunofluorescent staining of MCF7 cells with the same pair of anti-β-catenin antibodies. Mouse antibody (magenta), rabbit antibody (green), overlay (white) and nuclei (blue). White frames depict an area shown in enlarged view in the following panel. Scale bars = 10 μm. Quantification of protein complexes and free proteins (MolBoolean) or protein complexes only (in situ PLA) shown as number of RCPs per cell. Data pooled from three independent experiments. Box plots show median, Q1 to Q3 range, lower and upper whiskers at maximum 1.5 times the interquartile range. Outliers shown as solid circles. Source data are provided as a Source Data file. 
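The box plots used throughout the figures follow the convention stated in the caption above: median, Q1–Q3 box, whiskers at most 1.5 times the interquartile range, and points beyond the fences drawn as outliers. A stdlib sketch of that convention (illustrative; `statistics.quantiles` with its default exclusive method is an assumption, since quartile conventions vary between plotting tools):

```python
# Compute the summary statistics behind a Tukey-style box plot:
# median, Q1/Q3, whiskers at the extreme in-fence points, and outliers.
from statistics import quantiles

def box_stats(values):
    q1, med, q3 = quantiles(values, n=4)            # quartiles (exclusive method)
    iqr = q3 - q1
    lo_fence, hi_fence = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    in_range = [v for v in values if lo_fence <= v <= hi_fence]
    outliers = [v for v in values if v < lo_fence or v > hi_fence]
    return {"median": med, "q1": q1, "q3": q3,
            "whiskers": (min(in_range), max(in_range)),  # drawn at in-fence extremes
            "outliers": outliers}

stats = box_stats([1, 2, 3, 4, 5, 6, 7, 8, 9, 40])
print(stats)  # 40 falls outside the upper fence and is plotted as an outlier
```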
Full size image MolBoolean analysis of E-cadherin and β-catenin To validate the performance of the MolBoolean method we performed a series of stainings for E-cadherin and β-catenin in various conditions. E-cadherin is a cell adhesion protein that primarily localizes in the membrane but can also be found in the endosomes and the trans-Golgi network 13 (Uniprot P12830 14 ). β-catenin is a transcriptional coactivator in the Wnt signaling pathway, and plays an important role in cell adhesion together with E-cadherin (Uniprot P35222 14 ). Upon phosphorylation, β-catenin accumulates in the nucleus; when interacting with E-cadherin, it is found in the plasma membrane 15 . We began with a staining in MCF7 cells, which express both E-cadherin and β-catenin, against a biological control (U2OS cells) that does not express E-cadherin 16 . As expected, in MCF7 cells we observed free proteins in the cytoplasm, as well as the specific honeycomb pattern of interaction in the plasma membrane, reflecting colocalization of the two proteins in the adherens junctions (Fig. 3a , expanded in Supplementary Fig. 2a ). In contrast, in the osteosarcoma cell line U2OS only free β-catenin was detected (Fig. 3a , expanded in Supplementary Fig. 2a ). Fig. 3: MolBoolean staining of E-cadherin with an interaction partner vs no-interaction partner. a Co-stain of E-cadherin which is differentially expressed in MCF7 (positive for E-cadherin) and U2OS cells (negative for E-cadherin) and interaction partner β-catenin. ( p = 1.76e−55; p = 1.89e−55; p = 7.84e−52 for E-cadherin-β-catenin complex, free E-cadherin, free β-catenin respectively). MolBoolean signals are shown for E-cadherin (magenta), β-catenin (green), E-cadherin-β-catenin complex (white) and nuclei (blue). b E-cadherin and LMNA/C in HaCaT cells are expected not to colocalize. MolBoolean signals are shown for E-cadherin (magenta), LMNA/C (green), E-cadherin-LMNA/C complex (white) and nuclei (blue). 
In situ PLA signals are shown in magenta and nuclei in blue. White frames depict an area shown in enlarged view in the following panel. Scale bars = 10 μm. Quantification of protein complexes and free proteins (MolBoolean) or protein complexes only (in situ PLA) shown as number of RCPs per cell. In ( a ) n MCF7 = 243, n U2OS = 125 cells. Data pooled from three independent experiments. Kruskal–Wallis and two-sided Dunn’s test with Bonferroni correction was used to analyze statistical variance. Box plots show median, Q1 to Q3 range, lower and upper whiskers at maximum 1.5 times the interquartile range. Outliers shown as solid circles. **** p < 0.0001. Source data are provided as a Source Data file. Full size image To further test the robustness of our method, we performed a no-interaction control stain, which targeted E-cadherin and Lamin A/C (LMNA/C). Lamin A/C, in contrast to β-catenin, primarily localizes in a different subcellular compartment than E-cadherin. Lamin A/C is part of the nuclear lamina 17 , therefore little to no interaction should occur between the two proteins. Both MolBoolean and in situ PLA detected a small number of interactions, reflecting spurious antibody binding events (Fig. 3b ). In addition, MolBoolean recorded high expression of free E-cadherin and low expression of free LMNA/C. Omitting controls where either one of the primary antibodies was not used, and IF (Supplementary Fig. 2b ) were performed in parallel and resulted in staining consistent with MolBoolean. For an additional no-interaction control involving another pair of target proteins in different organelles, see Supplementary Notes and Supplementary Fig. 2c, d . To validate that the dual-colored RCA products observed in E-cadherin—β-catenin stains contain both tags and are not just individual single-colored RCPs located in close proximity, we designed padlock probes targeting the MolBoolean proximity probes (see Methods for design). 
Padlock probes consist of two target-complementary segments, connected by a linker sequence 18 . Upon recognition of the target DNA sequence (Fig. 4a ), the 5′ and the 3′ end of the padlock probe can be joined by ligation with T4 ligase (Fig. 4b ), creating a circular DNA molecule that is amplifiable by RCA 18 (Fig. 4c ). As a result, regardless of the target proteins' proximity, each padlock probe always generates its own individual fluorescent signal and never a dual signal (Fig. 4d ). We therefore used primary antibodies against E-cadherin and β-catenin in MCF7 cells and then either performed the MolBoolean protocol, or substituted the circle hybridization, nicking and tag ligation steps with padlock probe hybridization (Fig. 4e ). Due to the higher overall number of signals detected with padlock probes, data were normalized by dividing the signals in each category by the total number of RCPs per cell. Quantification, normalization, and comparison of the resulting images showed a significantly higher fraction of complexes per cell in the MolBoolean versus the padlock experiment. Fig. 4: Padlock probe design and in situ application in fixed cells. a After proximity probes A and B bind their respective target proteins A and B, padlock oligonucleotides A and B hybridize to their respective arm in place of the MolBoolean circle. Each padlock contains the complementary tag sequence (magenta corresponds to tag A and green corresponds to tag B). b Enzymatic ligation of the 5’ and 3’ ends of each padlock, templated by the corresponding arm, leads to padlock circularization. c RCA, primed by the arms, creates long concatemeric RCPs. d RCPs are detected via fluorescently labeled tag-specific detection oligonucleotides. e In situ application of the padlock probes and signal quantification. E-cadherin and β-catenin co-stain in MCF7 cells with padlock probes vs MolBoolean. 
( p = 1.46e−57; p = 9.69e−68; p = 1.98e−50 for E-cadherin-β-catenin complex, free E-cadherin, free β-catenin respectively). E-cadherin is shown in magenta, β-catenin in green, E-cadherin-β-catenin complex (MolBoolean), or overlay (padlock probes) in white and nuclei in blue. White frames depict an area shown in enlarged view in the following panel. Scale bars = 10 μm. Quantification of protein complexes and free proteins shown as normalized number of RCPs per cell. n padlock = 173, n MolBoolean = 243 cells; data pooled from three independent experiments. Kruskal–Wallis and two-sided Dunn’s test with Bonferroni correction was used to analyze statistical variance. Box plots show median, Q1 to Q3 range, lower and upper whiskers at maximum 1.5 times the interquartile range. Outliers shown as solid circles. **** p < 0.0001. Source data are provided as a Source Data file. Full size image Next, to validate that the MolBoolean method can specifically visualize interacting proteins, we assayed for E-cadherin and β-catenin in cells that harbor a pathogenic cytoplasmic missense mutation (V832M) in the β-catenin binding site of E-cadherin, which leads to reduced interaction, lowered surface expression of E-cadherin and a failure of mutants to aggregate in culture 19 . By using a pair of AGS cell lines stably transfected with either wild-type (WT) E-cadherin or V832M, we reasoned that the expression levels of E-cadherin would be comparable, but the levels of interaction should differ 19 . In agreement with previous in situ PLA data 19 , in mutant cells we observed decreased cell aggregation and a dramatic reduction in complex formation, while the levels of free E-cadherin remained stable (Fig. 5a ). Fig. 5: MolBoolean and in situ PLA staining of E-cadherin and β-catenin under various conditions in fixed cells or in tissues. 
a Co-stain in stable AGS cell clones transfected with wild-type E-cadherin (WT, top panel) or E-cadherin with a V832M mutation in the β-catenin binding site (AGS V832M, bottom panel). ( p = 6.51e−48; p = 0.17; p = 4.53e−26 for E-cadherin-β-catenin complex, free E-cadherin, free β-catenin respectively). b Co-stain in HaCaT cells, in the absence (“control”, top) or presence (“TGF-β1 treated”, bottom) of TGF-β1. ( p = 7.64e−24; p = 1.66e−15; p = 0.92 for E-cadherin-β-catenin complex, free E-cadherin, free β-catenin respectively (MolBoolean) and p = 2.94e−55 (in situ PLA)). c Co-stain in FFPE kidney tissue sections. MolBoolean signals are shown for E-cadherin (magenta), β-catenin (green), E-cadherin-β-catenin complex (white), and nuclei (blue). In situ PLA signals are shown in magenta and nuclei in blue. White frames depict an area shown in enlarged view in the following panel. Scale bars = 10 μm. Quantification of protein complexes and free proteins (MolBoolean) or protein complexes only (in situ PLA) shown as number of RCPs per cell in the case of fixed cell analysis, or in percentage of RCPs in each category (free protein A, free protein B, and AB complex) per frame in the case of tissue analysis. n WT = 1160, n V832M = 810 cells ( a ), and n control = 371, n treated = 113 cells ( b ). Data pooled from three independent experiments, and in ( a , b ) normalized against total number of signals/cell. Two-sided Wilcoxon rank sum test was used to analyze statistical variance in fixed cell data. Box plots show median, Q1 to Q3 range, lower and upper whiskers at maximum 1.5 times the interquartile range. Outliers shown as solid circles. **** p < 0.0001, ns not significant. Source data are provided as a Source Data file. 
Full size image As a demonstration of MolBoolean’s ability to detect dynamic changes in protein complex formation under different conditions, we once again resorted to the hallmark interaction of cell adhesion, E-cadherin–β-catenin, but this time included a cell treatment regime (Fig. 5b and Supplementary Fig. 3a ). Prolonged treatment with TGF-β1 has been shown to lead to loss of cell-cell contacts, disruption of the interaction between E-cadherin and β-catenin in the adherens junctions, and subsequent E-cadherin translocation to the cytoplasm 20 . We therefore analyzed the pools of free and bound E-cadherin and β-catenin in HaCaT cells before (“control” condition) and after 48 h of TGF-β1 treatment (“TGF-β1 treated” condition) in order to assess how MolBoolean performs in an inducible biological system. Both MolBoolean (Fig. 5b ) and IF staining (Supplementary Fig. 3a ) showed morphological changes and disruption of cell-cell contacts in the treated cells. To account for the increased cell area and consequent increase in the total number of RCPs/cell detected in the treated condition, we normalized all MolBoolean data by dividing the number of signals in each category (free E-cadherin, free β-catenin, or proteins in complex) by the total number of detected signals in each cell (Fig. 5b , MolBoolean quantification). We observed a significant decrease in the fraction of E-cadherin–β-catenin complexes and a significant increase in free E-cadherin relocating from the adherens junctions to the cytoplasm as a result of the TGF-β1 treatment, whereas the levels of unbound β-catenin remained stable (Fig. 5b , MolBoolean quantification). Since there is only one category of signal detected by in situ PLA, i.e., interactions, normalization cannot be performed for in situ PLA data. In situ PLA showed an approximately twofold increase in interaction for the TGF-β1 treated cells compared to control (Fig. 5b , in situ PLA quantification). 
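The per-cell normalization described above reduces to dividing each category's count by the total number of signals detected in that cell, so fractions are comparable across cells of different size and expression level. A minimal sketch (the category names and dictionary layout are illustrative assumptions, not the paper's pipeline):

```python
# Normalize MolBoolean-style per-cell counts: each category's RCP count is
# divided by the cell's total, yielding fractions that sum to 1.
def normalize_cell(counts: dict) -> dict:
    """counts maps signal categories (free A, free B, complex) to RCP counts."""
    total = sum(counts.values())
    if total == 0:
        return {k: 0.0 for k in counts}  # empty cell: avoid division by zero
    return {k: v / total for k, v in counts.items()}

cell = {"free E-cadherin": 30, "free beta-catenin": 50, "complex": 20}
print(normalize_cell(cell))  # fractions sum to 1.0
```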
In addition, we performed the same experiment by using primary probes, where E-cadherin and β-catenin primary antibodies were directly conjugated to the arms, and obtained comparable results (see Supplementary Notes and Supplementary Fig. 3b ). Finally, we explored the E-cadherin–β-catenin interaction in FFPE prostate tissue, and observed honeycomb pattern of staining (Fig. 5c and Supplementary Fig. 3c ) with MolBoolean, in situ PLA and IF. For MolBoolean, nearly 50% of the detected signal per image was from E-cadherin–β-catenin complexes (Fig. 5c , pie chart). MolBoolean analysis of proteins in various cell compartments To showcase the MolBoolean performance for various biological targets, we assayed several established protein–protein interactions in different cell organelles and went on to compare the results to in situ PLA (to ensure that protein–protein interactions are detected) and IF (to show that the staining patterns of the individual proteins are comparable). Technical controls for in situ PLA and MolBoolean in which one or the other primary antibody of the pair was omitted are shown in Supplementary Information for each experiment. To demonstrate that MolBoolean can be utilized for the quantification of free and interacting proteins confined to “crowded” compartments of the cell, we applied our method to stain MCF7 cells for Emerin (EMD) and Lamin B1 (LMNB1) (Fig. 6a and Supplementary Fig. 4a ). Lamin B1 is localized in the nuclear membrane and to some extent the nucleoplasm, and, like other lamins, provides structure to the nuclear lamina and participates in multiple nuclear processes 21 , 22 . Emerin, a stabilizer of the nuclear envelope found on the inner and outer nuclear membrane as well as on the membrane of the endoplasmic reticulum (ER), is a known interaction partner to Lamin B1 23 , 24 . 
MolBoolean staining demonstrated a fraction of free Emerin in the region of the ER, a fraction of non-interacting Lamin B1 in the nucleoplasm, and a significant region of interactions in the nuclear envelope (Fig. 6a ). The presence of EMD-LMNB1 complexes was verified by in situ PLA (Fig. 6a , right panel), and the nuclear co-localization of the two proteins was additionally shown with IF (Supplementary Fig. 4a , bottom panel). Technical controls for MolBoolean and in situ PLA in which either the anti-Emerin, or the anti-Lamin B1 antibody was omitted (omitting controls) are displayed in Supplementary Fig. 4a . Fig. 6: MolBoolean and in situ PLA staining of nuclear proteins in MCF7 cells. a EMD and LMNB1 co-stain. MolBoolean signals are shown for EMD (magenta), LMNB1 (green), EMD-LMNB1 complex (white) and nuclei (blue). In situ PLA signals for EMD-LMNB1 complex are shown in magenta and nuclei in blue. b FUS and HNRNPM co-stain. MolBoolean signals are shown for FUS in magenta, HNRNPM (green), FUS-HNRNPM complex (white) and nuclei (blue). In situ PLA signals for FUS-HNRNPM complex are shown in magenta and nuclei in blue. White frames depict an area shown in enlarged view in the following panel. Scale bars = 10 μm. Quantification of protein complexes and free proteins (MolBoolean) or protein complexes only (in situ PLA) shown as number of RCPs per cell. Data pooled from three independent experiments. Box plots show median, Q1 to Q3 range, lower and upper whiskers at maximum 1.5 times the interquartile range. Source data are provided as a Source Data file. Full size image Nuclear proteins are notoriously difficult to stain, so we tested MolBoolean on an assay that features nuclear interactions: FUS-HNRNPM (Fig. 6b ). FUS is a DNA/RNA-binding protein residing in the nucleus with the exception of the nucleoli 25 , whereas HNRNPM is a pre-mRNA-binding protein involved in splicing and found in the nucleoplasm according to the Human Protein Atlas 16 ( ). 
We performed MolBoolean and found a high level of colocalization in the nucleus, but also relatively high levels of free FUS and HNRNPM (Fig. 6b ). Comparable levels of protein complexes (Fig. 6b , in situ PLA panel) and subcellular localization (Supplementary Fig. 4b , IF panel) were observed with classical validation methods. Dynamic changes in protein states under varying conditions After demonstrating that MolBoolean efficiently stains abundant proteins, we wanted to further explore how sensitive and specific the method is for detection of decreased amounts of one interaction partner. We performed a stain for PDIA3 (also known as ERp57) and calreticulin (CALR) in HaCaT cells (Fig. 7a and Supplementary Fig. 5a ). PDIA3 is a protein primarily located in the endoplasmic reticulum (ER), where it participates in the folding of newly synthesized glycoproteins together with Calnexin and Calreticulin 26 , 27 . It has also been detected in the cytoplasm, cell membrane 27 , 28 , and nucleus 29 . In addition to being localized to the ER, the lectin chaperone Calreticulin, too, has been found to localize in the cytosol 30 , in association with the vitamin D receptor, and further plays a role in nuclear export 31 , 32 . As determined by Western blot, we achieved 92.5% silencing of PDIA3 in HaCaT cells via siRNA treatment (Fig. 7a , Western blot membrane and quantification below) and compared MolBoolean in the siRNA-treated cells (“PDIA3 knock-down” condition) to mock-transfected ones (“control” condition). A significant decrease of free PDIA3 was detected in the PDIA3 knock-down cells compared to the control (Fig. 7a , MolBoolean quantification), as well as significant downregulation of Calreticulin in accordance with the literature 33 , and a dramatic drop in complex formation. In contrast, the level of dual signal remained high in cells with normal expression of PDIA3. In situ PLA confirmed a statistically significant decrease in interactions upon silencing (Fig. 
7a , in situ PLA panel). IF data confirmed the subcellular protein distribution and the silencing of PDIA3 (Supplementary Fig. 5a ). Fig. 7: MolBoolean staining in dynamic conditions. a PDIA3 and CALR co-stain in untreated HaCaT cells (“control”, top), and after 72 h treatment with siPDIA3 (“PDIA3 knock-down”, bottom). Membrane represents Western blot, and Western blot quantification of silencing efficiency (92.5% knockdown, based on normalization against total protein stain) is shown in the bar chart below. ( p = 2.56e−32; p = 2.81e−35; p = 1.94e−15 for PDIA3-CALR complex, free PDIA3 and free CALR respectively (MolBoolean); p = 2.19e-38 (in situ PLA)). MolBoolean signals are shown for PDIA3 (magenta), CALR (green), PDIA3-CALR complex (white) and nuclei (blue). In situ PLA signals for PDIA3-CALR complex are shown in magenta and nuclei in blue. b Clathrin and PDGFR-β co-stain in BJ-hTERT cells, in the absence (“control”, top) or presence (“PDGF-BB treated”, bottom) of PDGF-BB. ( p = 6.76e−20; p = 5.88e−05; p = 1.36e−17 for Clathrin-PDGFR-β complex, free Clathrin and free PDGFR-β respectively (MolBoolean); p = 3.85e−22 (in situ PLA)). MolBoolean signals are shown for Clathrin (magenta), PDGFR-β (green), Clathrin-PDGFR-β complex (white) and nuclei (blue). In situ PLA signals for Clathrin-PDGFR-β complex are shown in magenta and nuclei in blue. White frames depict an area shown in enlarged view in the following panel. Scale bars = 10 μm. Quantification of protein complexes and free proteins (MolBoolean) or protein complexes only (in situ PLA) shown as number of RCPs per cell. n control = 103, n knock-down = 104 cells ( a ). n control = 150, n treated = 140 cells ( b ). Data pooled from three independent experiments. Two-sided Wilcoxon rank sum test was used to analyze statistical variance. Box plots show median, Q1 to Q3 range, lower and upper whiskers at maximum 1.5 times the interquartile range. Outliers shown as solid circles. **** p < 0.0001. 
Source data are provided as a Source Data file. Full size image Whereas the PDIA3 silencing experiment demonstrated detection of proteins with varying abundance, and the TGF-β1 treatment of HaCaT cells predominantly revealed decreased complex formation and re-localization of free signal, we next tested MolBoolean’s performance on an inducible interaction that is known to increase after ligand stimulation. We focused on platelet-derived growth factor receptor β (PDGFR-β), a receptor tyrosine kinase activated by ligands such as PDGF-BB 34 (Fig. 7b and Supplementary Fig. 5b). Upon ligand-induced activation, the membrane-bound PDGFR-β is mostly internalized via Clathrin-coated pits 34, 35, and this internalization has an important function in downstream signaling in the early endosomes 36. It has previously been demonstrated with in situ PLA that, upon PDGF-BB stimulation, PDGFR-β in fibroblasts shows increased colocalization with Clathrin 37. We therefore treated BJ-hTERT cells with PDGF-BB for 0 min (“control”) and 15 min (“PDGF-BB treated”) accordingly, and applied MolBoolean to quantify and compare the amounts of free Clathrin and free PDGFR-β, as well as the amount of dual signal, under both conditions. We also verified the latter with in situ PLA (Fig. 7b) and IF (Supplementary Fig. 5b). Clathrin–PDGFR-β colocalization increased significantly upon stimulation, as detected by both MolBoolean and in situ PLA (Fig. 7b, quantifications). MolBoolean analysis of proteins in tissue sections FFPE tissue sections are routinely used in histopathological analyses in research and in the clinic. To validate that our method can be used successfully not only in cells but also in tissue applications, we stained kidney tissue for ACE2 and its interaction partner TMPRSS2 (Fig. 8a and Supplementary Fig. 6a).
ACE2 is an important counter-regulator of the renin-angiotensin system with a role in vascular homeostasis, and an entry point for the SARS-CoV-2 virus causing Coronavirus disease 2019 (COVID-19) 38, 39. Our analysis demonstrated its characteristic membranous expression in the proximal renal tubule cells 40 and strong colocalization with TMPRSS2 (Fig. 8a), a serine protease that, among other functions, facilitates SARS-CoV-2 viral uptake into the cell 41, 42. Fig. 8: MolBoolean and in situ PLA staining and quantification in FFPE tissue sections. a ACE2 and TMPRSS2 co-stain in kidney. MolBoolean signals are shown for ACE2 (magenta), TMPRSS2 (green), ACE2-TMPRSS2 complex (white) and nuclei (blue). In situ PLA signals for the ACE2-TMPRSS2 complex are shown in magenta and nuclei in blue. b SATB2 and HDAC1 co-stain in colon. MolBoolean signals are shown for SATB2 (magenta), HDAC1 (green), SATB2-HDAC1 complex (white) and nuclei (blue). In situ PLA signals for the SATB2-HDAC1 complex are shown in magenta and nuclei in blue. White frames depict an area shown in enlarged view in the following panel. Scale bars = 10 μm. MolBoolean quantification is shown as the percentage of RCPs in each category (free protein A, free protein B and AB complex) per frame. Data collected from three independent experiments. Source data are provided as a Source Data file. Full size image Furthermore, we applied MolBoolean to detect SATB2 and HDAC1 in colon tissue (Fig. 8b and Supplementary Fig. 6b). SATB2 is a nuclear DNA-binding protein 43 that participates in chromatin remodeling by recruiting, among others, HDACs to promoters and enhancers. HDAC1 is a histone deacetylase and a prognostic marker for colorectal cancer, involved in epigenetic regulation via transcriptional repression 44, 45.
SATB2 is known to recruit HDAC1 to DNA 46, 47, 48, and in agreement with this we demonstrated fairly high levels of colocalization between the two proteins in the nuclei of glandular cells in the colon mucosa, accompanied by high levels of free protein for both HDAC1 and SATB2 (Fig. 8b, pie chart). For additional FFPE staining examples, see Supplementary Fig. 6c, d and Supplementary Notes. Discussion Taken together, our results demonstrate that the MolBoolean method is versatile and works reliably in fixed cells and tissues to sensitively and selectively visualize both free and interacting proteins at the same time. It efficiently discriminates single from dual signals in a wide range of organelles, such as the cell membrane, ER, Golgi complex, endosomes, and mitochondria (e.g., Figs. 2, 4, 7, Supplementary Fig. 3c). It even works reliably in crowded and less accessible compartments of the cell like the nucleus, as shown in the EMD-LMNB1 and FUS-HNRNPM experiments (Fig. 6). Like other approaches for determining proximity between proteins, such as in situ PLA and FRET, MolBoolean provides information on whether the proximity probes, be they primary or secondary antibodies, have bound their targets within the distance that allows for a colocalization readout. Thus, the detection of signal with any of these proximity-based approaches should only be considered indirect proof, and cannot be used as indisputable evidence of physical interaction between the two targeted proteins. This is an important caveat, although in many cases proximity is indeed indicative of two proteins forming a complex. The distance threshold that allows for the formation of a dual-colored RCP in MolBoolean is determined by the size of the affinity reagents used, as well as the length of the oligonucleotides; the choice of primary versus secondary antibodies as probes therefore also affects this threshold.
In the current secondary antibody-based design this theoretical distance is similar to what has been reported for in situ PLA. Due to the discrete, dot-like nature of RCPs, signal in MolBoolean is not only amplified compared to regular immunostaining techniques, but it also allows for quantification of the number of RCPs, normalization whenever reasonable, and comparison between different conditions. This was clearly demonstrated in our TGF-β1 and PDGF-BB treatments (Figs. 5b and 7b, respectively), and in the PDIA3 silencing experiment (Fig. 7a). TGF-β1 is a well-known inducer of epithelial-to-mesenchymal transition (EMT), which leads to the acquisition of mesenchymal characteristics and increased motility and invasiveness of induced cells 20, 49, 50. In culture, a reduction in the local density of HaCaT cells and rearrangement of cell adhesion structures, accompanied by decreased expression of E-cadherin at the membrane and increased cytoplasmic localization, have been described in response to TGF-β1 stimulation 20. In line with the literature, our prolonged TGF-β1 treatment of HaCaT cells led to observable changes in cell morphology and increased migration (i.e., cells lose contact and spread out over a larger surface area), as well as redistribution of free and interacting proteins. This highlights the importance of being able to simultaneously monitor free and complex-bound states. Performing in situ PLA alone in this case would be misleading, as one would deduce an increase in E-cadherin–β-catenin interactions post-treatment (Fig. 5b). This observation would be true in absolute numbers, but not in relation to the total number of the two proteins recorded in each enlarged cell. IF, on the other hand, shows the morphological changes, but cannot be used for quantification of the increased amounts of free cytoplasmic E-cadherin after the addition of TGF-β1 (Supplementary Fig. 5b).
The TGF-β1 and PDGF-BB treatment regimens both demonstrate the ability of MolBoolean to sensitively capture dynamic changes in protein complex formation under different conditions. An important characteristic of the MolBoolean method is its ability to discriminate between RCPs produced by actual colocalization and closely positioned RCPs generated by two free proteins under conditions of high abundance. This was demonstrated using AGS cells transfected with either wild-type (WT) E-cadherin or a mutant form with decreased ability to bind the interaction partner β-catenin (Fig. 5a). Both conditions produce an abundance of the two proteins, but complex formation is recorded at significantly higher levels where WT E-cadherin is expressed. In another assay, we compared a cell line not expressing one interaction partner (U2OS does not express E-cadherin, but does express β-catenin) to a cell line that expresses both (MCF7), and showed that MolBoolean detects free and complex-bound E-cadherin only in the MCF7 cells, whereas free β-catenin was observed in both (Fig. 3a). These results showcase MolBoolean’s sensitivity and specificity. The possibility to discern between free proteins and interacting partners in a single assay is further advantageous in that the parallelization allows for detection on just one tissue slide, thereby saving time and materials (e.g., Figs. 5c, 8, Supplementary Figs. 3c, 6). This can be especially valuable in clinical use, where the availability of consecutive tissue sections for diagnostic staining might be limited, but also in research laboratories, where understanding the dynamics of protein complex formation in a group of cells, or one cell at a time, might be of interest. Like all immunostaining methods, MolBoolean is dependent on the quality of the antibodies used. Rigorous validation of antibody specificity is required to ensure that the antibodies actually target their intended proteins 51.
However, an advantage of MolBoolean is that it offers specific staining of single proteins by means of dual recognition of two different epitopes within the same target, which also allows identification of cross-reactivity (Fig. 2). Our M-β-catenin–R-β-catenin assay demonstrated the expected staining pattern in MCF7 cells and showcased the ability of MolBoolean to detect many more protein molecules per cell compared to in situ PLA. At the same time, it also highlighted once again that all immuno-based methods rely heavily on antibody affinity, and that less-than-ideal conditions (which are almost inevitable in reality) lead to some off-target staining in the form of single-colored signals. The number of reported free proteins versus proteins in complex is a relative measurement, where the ratio depends on antibody binding, the efficiency of hybridization, and the subsequent enzymatic steps. The concentrations of the antibodies used need to be high enough to ensure that the majority of epitopes are bound. Detection of interacting proteins depends on antibodies binding both targets, as for in situ PLA and antibody-based FRET. If, for example, 80% of all available epitopes are bound by an antibody, then 64% (i.e., 80% × 80%) of the protein complexes will be bound by both antibodies. Therefore, one should aim to saturate as many epitopes as possible in order not to disadvantage dual-signal detection. To decrease off-target effects, primary antibody conjugates may be used as probes (Supplementary Fig. 3c), since this eliminates the background from any unspecific binding of secondary antibodies. In addition to antibody binding, MolBoolean relies on several enzymatic steps. For the recording of dual signal, it is necessary that the information receiver circle is nicked in two places and that two tag oligonucleotides are successfully incorporated.
Although the efficiency of the enzymatic steps is very high, as demonstrated by in-solution tests (Supplementary Fig. 1), any reduction will favor the generation of single-colored RCPs. Compared to in situ PLA, where crowding of highly expressed proteins might generate false positive detection, MolBoolean offers additional information by identifying the non-interacting fraction of each protein, which is especially useful for stains such as the one against E-cadherin and β-catenin, or in the assays where we stained abundant proteins located in different compartments (e.g., MT-CO1 and GM130 (Supplementary Fig. 2c, d), or E-cadherin and Lamin A/C (Fig. 3b)). Taken together with the expected background from antibody cross-reactivity (which can be deduced from the quantification of omitting controls), MolBoolean thus allows the conclusion that the protein pairs in the examples above do not form complexes. Still, in situations where the staining results in extremely abundant signals for the proteins of interest, the detection of interactions becomes less reliable, as image analysis will report some adjacent RCPs as one dual-stained object. Further improvements in image analysis, such as 3D analysis, will likely reduce or eliminate this issue. In addition, to refine the results, it is also possible to use FRET 52 between the different fluorophores on detection oligonucleotides A and B to determine whether they are situated within the same RCP. In conclusion, MolBoolean provides opportunities for studying biological processes, has applications in diagnostics, and decreases the risk of false positive signals compared to in situ PLA.
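The epitope-saturation arithmetic discussed above (80% occupancy per antibody giving 64% dually bound complexes) extends naturally to the enzymatic steps. The following is a minimal illustrative sketch, assuming each binding and enzymatic event succeeds independently; the efficiencies plugged in are hypothetical, not measured MolBoolean values.

```python
def dual_detection_fraction(p_bind, p_enzyme=1.0, n_enzymatic_steps=0):
    """Fraction of true complexes reported as dual-colored RCPs, under the
    simplifying assumption that the two antibodies bind independently
    (probability p_bind each) and that each of n_enzymatic_steps
    (e.g. nicking and tag ligation at two sites) succeeds independently
    with probability p_enzyme. Illustrative model only."""
    return (p_bind ** 2) * (p_enzyme ** n_enzymatic_steps)

# 80% epitope occupancy -> 64% of complexes bound by both antibodies
print(round(dual_detection_fraction(0.80), 2))  # 0.64
# hypothetical 95%-efficient nicking and ligation at each of the two sites
print(round(dual_detection_fraction(0.80, p_enzyme=0.95, n_enzymatic_steps=4), 3))
```

The quadratic dependence on `p_bind` is why saturating epitopes matters: any loss in occupancy penalizes dual signals twice as strongly as single ones.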
Methods Ethical statement This study includes anonymized formalin-fixed paraffin-embedded (FFPE) human tissue samples from ovarian carcinomas that were collected under local ethical guidelines, with informed consent (as stipulated by the Declaration of Helsinki) and approval by the Ethical Committee of Centro Hospitalar de São João (CHSJ) (Ref. 86/2017), with permission to publish the data generated. Cell culture and tissue sections All cell lines were cultured under standard conditions (37 °C, 5% v/v CO2) in a humidified incubator and were grown in complete medium (i.e., medium supplemented with 10% FBS), unless under starvation and/or stimulation conditions, in which case the medium was supplemented with either a very low percentage of FBS or no FBS at all (referred to as starvation medium). HaCaT and BJ-hTERT cells were cultured in Dulbecco′s Modified Eagle′s Medium (DMEM) supplemented with GlutaMAX™-I and 10% (v/v) Fetal Bovine Serum (FBS) (all from Thermo Fisher Scientific). MCF7 cells (ECACC 86012803) were cultured in Minimum Essential Medium Eagle (EMEM) with additives: 2 mM Ala-Gln, 1% Non-Essential Amino Acids (NEAA), and 10% (v/v) FBS, all from Sigma-Aldrich. AGS cell stable clones (E-cadherin WT and V832M, a kind gift from Raquel Seruca, University of Porto) were cultured in Roswell Park Memorial Institute (RPMI) 1640 medium, 10% (v/v) FBS and 1% penicillin-streptomycin (all from Sigma-Aldrich), supplemented for selection with 10 ng/µL blasticidin (Gibco), renewed every 3-4 days. U2OS cells were purchased from ECACC (cat. no. 92022711) and were cultured in McCoy’s 5a medium supplemented with 10% (v/v) FBS and 2 mM Ala-Gln (all from Sigma-Aldrich). For the PDIA3 silencing assay, Silencer Select siRNA (Thermo Fisher) was used to transfect HaCaT cells, seeded at a density of 150,000 cells/mL.
Either siPDIA3 (Thermo Fisher, s6228) (“PDIA3 knock-down” condition) or Silencer™ Select Negative Control No. 1 siRNA (“control” condition) was used at a final concentration of 100 nM. Briefly, either type of siRNA was used to transfect HaCaT cells for 72 h using siLentFect™ Lipid Reagent for RNAi (Bio-Rad) according to Bio-Rad’s instructions. Afterwards, either whole-cell lysate was prepared with LDS sample buffer (Thermo Fisher, NP0007) for testing of silencing efficiency via Western blot, or cells were fixed in 3.7% PFA on ice for IF, in situ PLA and MolBoolean staining. For an example of presentation of full scan blots, see Supplementary Information. For the disruption of cell-to-cell adhesion assay (Fig. 5b), HaCaT cells were either stimulated with 2 ng/mL TGF-β1 in DMEM starvation medium (0% FBS) for 48 h, with fresh medium replacement after 24 h (“treated” condition), or grown in DMEM starvation medium (0.5% FBS) (“control” condition). For the Clathrin–PDGFR-β assay, as described in ref. 37, BJ-hTERT cells were starved overnight in DMEM starvation medium (0.2% FBS), and then either stimulated with 20 ng/mL PDGF-BB for 15 min (“treated” condition) or left untreated in the same medium (“control” condition). Anonymized FFPE human tissue samples from ovarian carcinomas were collected as described under Ethical statement. Anonymized FFPE tissue blocks for all other tissues were purchased from a commercial biobank (Asterand Bioscience/BioIVT) and used in agreement with the terms and conditions of sale.
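The silencing efficiency reported from the Western blot (92.5% knockdown, normalized against a total protein stain) reduces to a simple densitometry calculation. The sketch below illustrates that normalization; the band-intensity values are hypothetical, chosen only so the example reproduces the reported figure.

```python
def knockdown_percent(target_si, total_si, target_ctrl, total_ctrl):
    """Percent knockdown from Western blot band intensities, normalizing
    the target band to the total-protein stain of the same lane.
    All intensity values passed in here are hypothetical."""
    norm_si = target_si / total_si        # normalized target, siRNA lane
    norm_ctrl = target_ctrl / total_ctrl  # normalized target, control lane
    return 100.0 * (1.0 - norm_si / norm_ctrl)

# hypothetical densitometry readings
print(round(knockdown_percent(target_si=1.5, total_si=100.0,
                              target_ctrl=20.0, total_ctrl=100.0), 1))  # 92.5
```

Normalizing each lane to its own total-protein signal compensates for unequal loading before the two lanes are compared.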
Glass slides with the tissue sections were deparaffinized by 3 × 3 min washes in xylene (Sigma-Aldrich), followed by one 3 min wash in a 1:1 mixture of xylene and 99.9% ethanol, 2 × 3 min washes in 99.9% ethanol, and 1 × 3 min each in 96%, 70% and 50% ethanol. The tissue slides were rinsed in deionized water, and antigen retrieval was performed with Tris-EDTA pH 9 (DAKO) in a pressure cooker at 2 atm and 95 °C for 40 min. In situ PLA All in situ PLA experiments were performed with the Duolink® In Situ Red Starter Kit Mouse/Rabbit (Sigma-Aldrich) according to the manufacturer’s instructions. In brief, PFA-fixed cells on 8-well Nunc™ Lab-Tek™ II CC2™ Chamber Slides (Sigma-Aldrich) or deparaffinized FFPE sections that had undergone antigen retrieval (for details, see Cell culture and tissue sections) were encircled with an ImmEdge Hydrophobic Barrier PAP Pen (Vector Laboratories) to ensure that the reaction mixes in the subsequent steps would cover the cells/tissue. The cells were then permeabilized in 1× TBS (Thermo Fisher Scientific) with 0.2% v/v Triton X-100 (Sigma-Aldrich) for 10 min, followed by a 2 min wash with 1× TBS. Blocking was done with Odyssey blocking buffer (LI-COR) for 1 h in a humidified chamber, and afterwards the cells were incubated with either a mixture of two primary antibodies against the respective proteins of interest, raised in different hosts (mouse or rabbit), or with only one of these antibodies (to serve as omitting controls). Primary antibodies were incubated overnight at 4 °C, followed by 3 × 5 min washes in 1× TBS, and an incubation with a mix of the Duolink® In Situ PLA® Probe Anti-Rabbit PLUS, Affinity purified Donkey anti-Rabbit IgG (H + L) and Duolink® In Situ PLA® Probe Anti-Mouse MINUS, Affinity purified Donkey anti-Mouse IgG (H + L) (all from Sigma-Aldrich) at the concentrations recommended by Sigma-Aldrich.
Next followed hybridization and ligation with the Duolink® Ligation mix, and finally RCA and signal detection were performed using the Duolink® In Situ Detection Reagent Red. Nuclei were labeled with Hoechst 33342 (1:250, Thermo Fisher Scientific). The slides were then mounted with SlowFade Gold antifade reagent (Thermo Fisher Scientific), and images were acquired with a Zeiss AxioImager M2 with a Zeiss Plan-Apochromat 63x NA 1.4 oil objective and deconvolved with Huygens Essential (Scientific Volume Imaging, the Netherlands) using the Deconvolution Wizard option. Quantification and colocalization analyses were performed with the CellProfiler 54 software on the deconvolved but otherwise unaltered images. IF staining PFA-fixed cells or FFPE tissues (for preparation, see Cell culture and tissue sections) were stained using standard immunofluorescence techniques. Permeabilization was performed as for in situ PLA, and subsequently both cells and tissues were blocked with Odyssey blocking buffer (LI-COR) for 1 h. After blocking, the primary antibodies of interest were diluted in blocking buffer and applied to the slides overnight at 4 °C in a humidified chamber, and afterwards washed in 1× TBS for 3 × 5 min. Next, fluorophore-labeled secondary antibodies and Hoechst 33342 (1:250, Thermo Fisher Scientific) were added for 1 h at 37 °C, followed by a 1× TBS-Tween-20 wash and mounting with SlowFade Gold antifade reagent (Thermo Fisher Scientific). Images were acquired with a Zeiss AxioImager M2 with a Zeiss Plan-Apochromat 63x NA 1.4 oil objective and then deconvolved with Huygens Essential (Scientific Volume Imaging, the Netherlands) using the Deconvolution Wizard option. MolBoolean sequence design All MolBoolean oligonucleotide sequences (Table 1) were designed by hand and tested in NUPACK 53 (nupack.org) for the formation of secondary structures and hybridization at different concentrations, temperatures and salinities.
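Full secondary-structure and hybridization analysis of this kind requires a thermodynamic tool such as NUPACK. As a rough first-pass screen when drafting sequences by hand, simple descriptors like GC content and the Wallace-rule melting temperature can be computed directly; the sketch below uses a hypothetical 20-mer, not an actual MolBoolean sequence, and is no substitute for thermodynamic analysis.

```python
def wallace_tm(seq):
    """Rough melting temperature for short oligonucleotides using the
    Wallace rule, Tm = 2*(A+T) + 4*(G+C) degrees C. A coarse first-pass
    estimate only, valid mainly for oligos of roughly 14-20 nt."""
    s = seq.upper()
    return 2 * (s.count("A") + s.count("T")) + 4 * (s.count("G") + s.count("C"))

def gc_content(seq):
    """Fraction of G and C bases in the sequence."""
    s = seq.upper()
    return (s.count("G") + s.count("C")) / len(s)

oligo = "ATGCATGCATGCATGCATGC"  # hypothetical 20-mer
print(wallace_tm(oligo))   # 60
print(gc_content(oligo))   # 0.5
```

Such a screen can flag obviously unbalanced candidates before they are submitted to a proper nearest-neighbor or partition-function calculation.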
Table 1 DNA design for MolBoolean Full size table Padlock probes We designed padlock probes (see Table 1) as previously described 17. Each probe is 99 nt long, of which 25 nt at the 5′ end and 24 nt at the 3′ end are complementary to the MolBoolean arms. Hybridization to the arm brings the 5′ and 3′ ends together, with the arm acting as a template for ligation. The 5′ and 3′ ends of the padlock probes are joined by a 50 nt linker. Circle ligation Circle parts 1 and 2 (Table 1) were ligated using 0.02 U/μl T4 ligase (Thermo Fisher Scientific) in T4 DNA ligase buffer (Thermo Fisher Scientific) for 2 h at room temperature. Non-ligated oligonucleotides were removed via digestion with a mixture of 2 U/μl exonuclease I, 0.5 U/μl lambda exonuclease, and 0.5 U/μl T7 exonuclease (all from New England Biolabs) in 1× exonuclease I reaction buffer (New England Biolabs) at 37 °C overnight, followed by heat inactivation of the enzymes at 80 °C for 30 min and subsequent validation by gel electrophoresis. NHS-ester conjugation of MolBoolean proximity probes The antibody components of the probes were concentrated to a minimum of 2 μg/μl using Amicon Ultra-15 Centrifugal Filter Units (Sigma-Aldrich). Succinimidyl 6-hydrazinonicotinate acetone hydrazone (SANH) crosslinker (Solulink) was added at a 25-fold molar excess over the antibodies and incubated under gentle agitation for 2 h at room temperature, protected from light. Each conjugation reaction underwent a buffer exchange into 100 mM NaH2PO4 (Sigma-Aldrich), 150 mM NaCl (Sigma-Aldrich), pH 6, using Zeba Spin Desalting Columns, 7K MWCO (Life Technologies), according to the manufacturer’s instructions. Activated antibodies were incubated in 100 mM NaH2PO4, 150 mM NaCl, pH 6 with a 3-fold molar excess of aldehyde-modified arm A (for anti-mouse IgG) or B (for anti-rabbit IgG) and 10 mM aniline (Sigma-Aldrich) as a catalyst.
The reactions were incubated protected from light and with gentle agitation for 2.5 h at room temperature, before a buffer exchange into 1× PBS (Thermo Fisher Scientific), followed by size-exclusion purification. oYo-Link conjugation of primary MolBoolean proximity probes For preparation of MolBoolean probes by direct conjugation of primary antibodies to arm oligonucleotides A and B, respectively, 100 µg of mouse anti-E-cadherin (AMAb90862, Atlas Antibodies) and 100 µg of rabbit anti-β-catenin (AMAb91209, Atlas Antibodies) primary antibodies (Table 2) in PBS formulation were used. The subsequent steps were performed according to the recommended protocol for oYo-Link™ conjugation (AlphaThera). In brief, 5′-oYo-Link-modified arms A and B (Table 1) were ordered from AlphaThera via the oYo-Link Oligo Custom option, and each arm was resuspended in 100 µL of nuclease-free water in accordance with the manufacturer’s instructions. Each antibody was then mixed well with the corresponding arm (1 µg of antibody per 1 µL of oYo-modified arm), centrifuged, and subjected to Light-Activated Site-Specific Conjugation (LASIC) under 365 nm black light for 120 min on ice, in order to achieve covalent binding of the oligonucleotide to the antibody. Size-exclusion purification followed. Table 2 Antibodies used for MolBoolean experiments Full size table Size-exclusion purification The conjugated probes were purified from unconjugated antibody and oligonucleotide by ÄKTA Pure chromatography (GE Healthcare) using a Superdex 200 10/300 GL column (GE Healthcare). Successful purification was confirmed by separating the conjugates on a Novex TBU 10% gel (Life Technologies) at 150 V for 60 min in a water bath pre-heated to 50 °C. DNA was visualized using SYBR Gold Nucleic Acid Gel Stain (Life Technologies), and protein was visualized using Coomassie brilliant blue stain (Bio-Rad). The gel was imaged on an Odyssey Fc with the Image Studio Lite v5.2.5 software (LI-COR Biosciences).
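The fold molar excesses used in the conjugation steps (25-fold crosslinker over antibody, 3-fold oligonucleotide over antibody) translate into pipetting volumes by simple stoichiometry. The sketch below assumes an IgG molecular weight of ~150 kDa and a hypothetical crosslinker stock concentration; actual values must come from the reagent datasheets.

```python
def crosslinker_volume_ul(antibody_ug, molar_excess, linker_stock_mM,
                          antibody_mw=150_000.0):
    """Volume (µL) of crosslinker stock to add for a given fold molar
    excess over antibody. antibody_mw defaults to ~150 kDa for IgG
    (an assumption); 1 mM equals 1 nmol/µL, which keeps units simple."""
    ab_nmol = antibody_ug / antibody_mw * 1000.0   # µg / (g/mol) * 1000 = nmol
    linker_nmol = ab_nmol * molar_excess
    return linker_nmol / linker_stock_mM           # nmol / (nmol/µL) = µL

# 100 µg IgG, 25-fold molar excess, from a hypothetical 5 mM stock
print(round(crosslinker_volume_ul(100.0, 25, 5.0), 2))  # 3.33
```

Writing the calculation out once avoids the common unit slip between µg of antibody and nmol of small-molecule linker.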
In solution specificity tests All reagents were diluted to concentrations corresponding to those used in situ, in proportion to the reaction volume. Unhybridized MolBoolean oligonucleotides were used as size references (Supplementary Fig. 1, wells 1 through 5 on the gel represent the ligated circle, arm A, arm B, tag A, and tag B, respectively). To demonstrate the specificity of the nickase, we prepared a 2× digestion master mix (0.25 U/mL Nt.BsmAI in water and 2× NEBuffer CutSmart (both from New England Biolabs)) and mixed it with circle at a final concentration of 0.1 µM. A quarter of this reaction mix was set aside, and the rest was divided into three equal parts to which we added as follows: arm A at a final concentration of 0.2 µM; or arm B at a final concentration of 0.2 µM; or arms A and B at a final concentration of 0.2 µM each. The resulting four digestion reactions were incubated at 37 °C for 1 h, and then at 65 °C for 20 min in order to heat-inactivate Nt.BsmAI according to the manufacturer’s instructions (shown in Supplementary Fig. 1, wells 6 through 9, respectively). Next, we prepared five ligation reaction mixes on the basis of the digestion mixes from the previous step. For the reaction mix in Supplementary Fig. 1, well 10, we added tag A at a final concentration of 1 µM to the mixture of nicked circle and arm A. In the same way, for the reaction mix shown in well 11, we added tag B to the nicked circle and arm B mix. For the mix in well 12, we combined tags A and B and added them to the mixture of nicked circle and both arms prepared in the previous step. In addition, for the mixture in well 13, we added tag B to the mix of nicked circle and arm A, whereas for the mixture in well 14, we added tag A to the mix of nicked circle and arm B. These five reactions (Supplementary Fig. 1, wells 10–14) all had 1× T4 ligation buffer and 0.05 U/µL T4 ligase (both from Thermo Fisher) added after hybridization between tags and arms had been allowed to proceed for 30 min at 37 °C. After enzyme addition, the five samples were incubated for another 30 min at 37 °C to allow for ligation. Next, the ligase in the samples was heat-inactivated at 80 °C for 20 min as per the manufacturer’s recommendations. All samples were loaded on a denaturing Novex TBU 10% gel (Life Technologies) after heating in 50% urea for 5 min at 95 °C, and the gel was run at 130 V for 35 min in a 65 °C water bath. DNA was visualized using SYBR Gold Nucleic Acid Gel Stain (Life Technologies). The gel was imaged on an Odyssey Fc with the Image Studio Lite v5.2.5 software (LI-COR Biosciences). MolBoolean experimental procedure Cells were seeded at the desired density on 8-well Nunc™ Lab-Tek™ II CC2™ Chamber Slides (Sigma-Aldrich) and treated according to the experimental condition. Fixation was performed with ice-cold 3.7% PFA (Sigma-Aldrich) for 15 min on ice. The chamber slides were dried and stored at −20 °C until use, or used fresh. The wells were removed from the slides, which were subsequently lined with an ImmEdge Hydrophobic Barrier PAP Pen (Vector Laboratories). The cells were permeabilized with 1× TBS (Thermo Fisher Scientific) with 0.2% v/v Triton X-100 (Sigma-Aldrich) for 10 min, followed by a 2 min wash with 1× TBS. Blocking was done with either Odyssey blocking buffer (LI-COR) or homemade blocking buffer (2% w/v BSA (Jackson ImmunoResearch) in 1× TBS, 0.1% Tween (Sigma-Aldrich), 0.02% sodium azide (Sigma-Aldrich)), either one supplemented with 2.5 mg/mL salmon sperm DNA (Thermo Fisher Scientific), for 1 h at 37 °C in a humidified chamber.
The cells were then incubated with pairs of mouse and rabbit primary antibodies against the proteins of interest, diluted in either Odyssey blocking buffer or homemade blocking buffer, overnight at 4 °C in a humidified chamber, followed by 3 × 3 min washes in 1× TBS. The primary antibodies used are shown in Table 2. The cells were incubated with 3 μg/mL of each proximity probe (A and B), diluted in either Odyssey blocking buffer or homemade blocking buffer, for 1 h at 37 °C in a humidified chamber, followed by 1 × 3 min wash in 1× TBS 1 M NaCl (Thermo Fisher Scientific) 0.05% v/v Tween-20, and 2 × 3 min washes in 1× TBS 0.05% Tween-20 (TBS-T). Subsequently, the cells were incubated in 1× T4 DNA ligase buffer supplemented with 0.25 mg/mL BSA (Sigma-Aldrich) and 0.05 μM circle for 1 h at 37 °C in a humidified chamber, followed by 3 × 3 min washes with 1× TBS-T. Afterwards, a mix of 0.125 U/μl Nt.BsmAI in 1× NEBuffer CutSmart (New England Biolabs) and 0.25 mg/mL BSA was added for 30 min at 37 °C in a humidified chamber, followed by 3 × 3 min washes with 1× TBS-T. For the hybridization of the tag oligonucleotides, the cells were incubated in 1× TBS, 0.25 mg/mL BSA, and 0.5 μM tag oligonucleotides A and B (Table 1) for 30 min at 37 °C in a humidified chamber, and ligation was performed in 1× T4 DNA ligase buffer, 0.25 mg/mL BSA, 0.05 U/µl T4 ligase for 30 min at 37 °C, followed by 1 × 3 min wash with 1× TBS 1 M NaCl 0.05% v/v Tween-20, and 1 × 3 min wash with 1× TBS-T.
For the RCA, the cells were incubated in 1× phi29 polymerase buffer (Monserate), 0.25 mg/mL BSA, 1.25 mM dNTPs (Thermo Fisher Scientific), and 1 U/μl phi29 polymerase (Monserate) for 90 min at 37 °C in a humidified chamber, followed by 2 × 10 min washes with 1× TBS-T, and then incubated in 1× TBS 1 M NaCl 0.05% v/v Tween-20, 0.25 mg/mL UltraPure Salmon Sperm DNA Solution, Hoechst 33342 (1:250) (Thermo Fisher Scientific), and 0.025 μM detection oligonucleotides A and B (Table 1) for 30 min at 37 °C in a dark humidified chamber, followed by 1 × 10 min wash with 1× TBS 1 M NaCl, 1 × 10 min wash with 1× TBS, and 1 × 5 min wash with 0.2× TBS in the dark. Slides were mounted with SlowFade Gold antifade reagent (Thermo Fisher Scientific) according to the manufacturer’s instructions and sealed with Menzel Gläser coverglass #1.5 (VWR). During cell and FFPE tissue section imaging, at least three images per well or FFPE tissue section were taken in a single focal plane sampled according to the Nyquist criterion. The microscope images were acquired using either a Zeiss AxioImager M2 (Fig. 4, all IF images) or a Leica TCS SP8 X microscope (all other images), using the Zen Blue 2 or the LAS X software, respectively. The former was used with a 63x/1.4 oil apochromat (Zeiss) objective lens, a Hamamatsu C11440 camera and an HXP 120 V (Zeiss) light source for excitation. The latter was used with a water immersion HC PL APO 63x/1.20 NA motCORR CS2 objective lens (Leica) and the Leica white light laser. Images were deconvolved with Huygens Essential (Scientific Volume Imaging, the Netherlands) using the Deconvolution Wizard option. Quantification and colocalization analyses were performed with the CellProfiler 54 software on the deconvolved but otherwise unaltered images. Adjustments of brightness and contrast were then made on figure images for visualization purposes only.
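The Nyquist criterion mentioned above ties the maximum pixel size to the optical resolution of the objective. As a back-of-the-envelope check, a common widefield rule of thumb uses the Abbe lateral resolution sampled at half its value; this sketch is illustrative only (confocal sampling criteria, as applied by acquisition software, differ slightly), and the emission wavelength used is a stand-in.

```python
def nyquist_pixel_nm(wavelength_nm, na):
    """Maximum lateral pixel size (nm) satisfying Nyquist sampling,
    taking the Abbe lateral resolution d = wavelength / (2 * NA) and
    sampling at d / 2. A widefield rule of thumb, not an exact criterion."""
    d = wavelength_nm / (2.0 * na)  # Abbe lateral resolution limit
    return d / 2.0

# e.g. ~600 nm emission with a 63x/1.4 NA oil objective (values illustrative)
print(round(nyquist_pixel_nm(600, 1.4), 1))  # 107.1
```

Pixels larger than this bound undersample the point spread function, which degrades both deconvolution and subsequent RCP segmentation.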
Pseudo-coloring was applied to all images; Hoechst33342, Texas Red, and Atto647N are depicted in blue, magenta, and green, respectively. Data analysis Deconvolved split-channel images in grayscale .tif format were analyzed using the CellProfiler software version 4.1.3 54 . For MolBoolean analysis, a pipeline for signal quantification, with slight modifications between assays, was compiled with the following modules: IdentifyPrimaryObjects , IdentifySecondaryObjects , EnhanceOrSuppressFeatures , IdentifyPrimaryObjects , ExpandOrShrinkObjects , CombineObjects , MaskObjects , MeasureObjectIntensity , DisplayDensityPlot , ClassifyObjects , FilterObjects , OverlayOutlines , SaveImages , MeasureImageAreaOccupied , RelateObjects , ConvertObjectsToImage , GrayToColor , and ExportToSpreadsheet . First, IdentifyPrimaryObjects was used on the nuclear stain channel to identify nuclei based on their diameter measured in pixels and the application of two-class Otsu thresholding. Thereafter, IdentifySecondaryObjects was used to identify the cells by expanding the nuclei by distance N, a user-defined number of pixels. The images with the RCPs were filtered to remove background with a white top-hat filter through the enhance speckles feature in the EnhanceOrSuppressFeatures module, and IdentifyPrimaryObjects was used in both filtered images containing RCPs in order to identify the RCPs within a certain diameter (measured in pixels), using two-class Otsu or global manual thresholding. The identified RCPs were then shrunk to a single point in the ExpandOrShrinkObjects module and expanded again by distance B to a set number of pixels, using minimum cross-entropy thresholding, so as to better encapsulate the RCPs, in the IdentifySecondaryObjects module. 
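The top-hat filtering and spot-detection steps can be sketched outside CellProfiler. The following is a simplified stand-in using SciPy, in which a mean-plus-4-SD global threshold replaces the pipeline's Otsu/manual thresholding:

```python
import numpy as np
from scipy import ndimage

def detect_rcps(channel, speckle_size=5):
    """Label bright speckles (candidate RCPs) in one grayscale channel:
    a white top-hat filter flattens the background, then a simple global
    threshold keeps only the enhanced spots."""
    filtered = ndimage.white_tophat(channel, size=speckle_size)
    mask = filtered > filtered.mean() + 4 * filtered.std()
    labels, n = ndimage.label(mask)
    return labels, n

# toy image: two bright single-pixel spots on a slowly varying background
img = np.tile(np.linspace(0.1, 0.2, 64), (64, 1))
img[10, 10] = img[40, 40] = 1.0
labels, n = detect_rcps(img)
print(n)  # -> 2
```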
The identified expanded RCPs were merged in the CombineObjects module, and any signal outside the defined cells was removed through the MaskObjects module in order to avoid inclusion of unspecific background specks or RCPs from cells that had been excluded from the analysis. The intensity of the RCPs in both channels was measured with the MeasureObjectIntensity module and plotted on a density plot through the DisplayDensityPlot module. In the ClassifyObjects module, a manually determined threshold based on the density plot was used for the classification of objects, where the aim was to avoid background and include the highest intensity signal. In preparation for downstream analysis, the manual thresholds for both images were set to zero. The RCPs were then filtered by means of the FilterObjects module, based on their classification as protein complex or free protein, and the resulting bins of RCPs were saved as images with the nuclei outlines overlaid, through the SaveImages and OverlayOutlines modules. Thereafter, the RCPs were assigned to cells with the RelateObjects module. To provide a quality control of the segmentation and classification of the RCPs, the classified RCPs were shrunk again to a single point through the ExpandOrShrinkObjects module and converted to binary images through the ConvertObjectsToImage module, on which the outline of the RCPs was overlaid by means of the OverlayOutlines module. The GrayToColor module was used to generate a quality control image, consisting of the original images with RCPs, as well as the single-point RCPs classified into bins of protein complexes and free proteins. The color-coded outlines of the RCPs' assigned classifications were overlaid on the quality control image through the OverlayOutlines module, and the generated images were saved through the SaveImages module. 
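The intensity-based classification of RCPs into protein complexes and free proteins can be mimicked with plain NumPy. This sketch replaces the density-plot-derived thresholds with fixed per-channel cutoffs (all numbers are arbitrary illustrations):

```python
import numpy as np

def classify_rcps(int_a, int_b, thr_a, thr_b):
    """Assign each RCP to 'complex', 'free A', or 'free B' from its
    intensities in the two detection channels: a simplified stand-in
    for the ClassifyObjects step."""
    int_a, int_b = np.asarray(int_a), np.asarray(int_b)
    labels = np.full(int_a.shape, "background", dtype=object)
    labels[(int_a > thr_a) & (int_b <= thr_b)] = "free A"
    labels[(int_a <= thr_a) & (int_b > thr_b)] = "free B"
    labels[(int_a > thr_a) & (int_b > thr_b)] = "complex"
    return labels

print(classify_rcps([0.9, 0.1, 0.8], [0.85, 0.9, 0.05], 0.5, 0.5))
# -> ['complex' 'free B' 'free A']
```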
The data were thereafter exported to a CSV file with the ExportToSpreadsheet module and were used for downstream analysis, in which data were binned by applying angled thresholds based on the intensities of every class of signal (an analysis with description and examples is available on GitHub 55 ), and for statistics. For an example of the specific settings that we used for our CellProfiler pipeline, see Supplementary Notes . For in situ PLA analysis, a pipeline for signal quantification, with slight modifications between assays, was compiled with the following modules: IdentifyPrimaryObjects , IdentifySecondaryObjects , EnhanceOrSuppressFeatures , IdentifyPrimaryObjects , MaskObjects , RelateObjects , and ExportToSpreadsheet . First, IdentifyPrimaryObjects was used on the nuclear stain channel to identify nuclei based on their diameter measured in pixels and the application of three-class Otsu thresholding. Thereafter, IdentifySecondaryObjects was used to identify the cells by expanding the nuclei by distance N, a set number of pixels. The images with the RCPs were filtered to remove background with a white top-hat filter through the enhance speckles feature in the EnhanceOrSuppressFeatures module, and IdentifyPrimaryObjects was used in the filtered images containing RCPs in order to identify the RCPs within a certain diameter (measured in pixels), using robust background or two-class Otsu thresholding. Thereafter, any signal outside the defined cells was removed through the MaskObjects module, in order to avoid inclusion of unspecific background specks or RCPs from cells that had been excluded from the analysis, and the RCPs were assigned to cells with the RelateObjects module. The data were thereafter exported to a CSV file with the ExportToSpreadsheet module and were used for downstream analysis and calculations (R code is available at 55 ). For an example of the specific settings we used for our CellProfiler pipeline, see Supplementary Notes . 
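Once the per-RCP classifications are exported to CSV, per-cell summaries follow from a simple group-by. A toy sketch with pandas (the table contents are invented, standing in for the exported CellProfiler spreadsheet):

```python
import pandas as pd

# Toy stand-in for the exported spreadsheet: one row per RCP, with its
# parent cell and assigned class (not real data).
df = pd.DataFrame({
    "cell":  [1, 1, 1, 2, 2],
    "klass": ["complex", "free A", "complex", "free B", "complex"],
})

# per-cell counts of each class, then the fraction of RCPs in complexes
counts = df.pivot_table(index="cell", columns="klass", aggfunc="size", fill_value=0)
frac_complex = counts["complex"] / counts.sum(axis=1)
print(frac_complex.round(2).to_dict())  # -> {1: 0.67, 2: 0.5}
```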
Data were quantified as the number of single- and dual-colored RCPs, either recorded per individual cell (in the case of fixed cell stains) or detected per image frame (for tissue stains). Statistics and reproducibility No statistical method was used to predetermine sample size. No data were excluded from the analyses. The experiments were not randomized. The investigators were not blinded to allocation. Statistical differences were analyzed using nonparametric tests: the two-sided Wilcoxon rank sum test for comparisons of two groups, or the Kruskal–Wallis test followed by the two-sided Dunn's test with Bonferroni correction for comparisons of three or more groups. Reporting summary Further information on research design is available in the Nature Research Reporting Summary linked to this article. Data availability All data generated or analyzed during this study are included in this published article and its Supplementary Information . Source Data are also provided with this paper, containing the data underlying the quantifications and statistical analyses; CellProfiler output files can be found in the Zenodo repository under 56 . Code availability Custom code, with examples of how it was used in the data analysis of MolBoolean and in situ PLA, is available in the Github repository under 55 .
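The statistical tests described above can be sketched with SciPy on synthetic counts (not study data). Dunn's test is not in SciPy's stable API (it is available in packages such as scikit-posthocs), so Bonferroni-corrected pairwise rank sum tests stand in for it here:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
g1 = rng.poisson(20, 50)   # synthetic RCP counts per cell, group 1
g2 = rng.poisson(35, 50)   # group 2
g3 = rng.poisson(35, 50)   # group 3

# two groups: two-sided Wilcoxon rank sum test
_, p_two = stats.ranksums(g1, g2)

# three or more groups: Kruskal-Wallis, then Bonferroni-corrected pairwise
# rank sum tests standing in for Dunn's test
_, p_kw = stats.kruskal(g1, g2, g3)
pairs = [(g1, g2), (g1, g3), (g2, g3)]
p_adj = [min(1.0, stats.ranksums(a, b).pvalue * len(pairs)) for a, b in pairs]

print(p_two < 0.05, p_kw < 0.05)  # -> True True
```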
Proteins constitute principal building blocks in all living organisms. They are often described as the workers of the cell, where they, together or individually, perform numerous essential tasks. If something goes wrong, the consequences are often serious. Both research and health care have expressed the need for effective tools to analyze the functions and activities of proteins, and in a new article in the scientific journal Nature Communications, Professor Ola Söderberg's team introduces MolBoolean, a technology that is expected to open important doors in cell and cancer research. "MolBoolean is a method to determine levels of a pair of proteins in individual cells, while in parallel identifying the percentage of these proteins that bind each other. We have continued to develop a tool that we created in 2006, which today is used all over the world, and the functions that we are now adding generate information about the relative amounts of different proteins as well as relative amounts of protein complexes, thus making the cells' activity status and communication more visible," says Ola Söderberg, Professor of Pharmaceutical Cell Biology at Uppsala University. "This can be compared to assessing a restaurant; positive reviews give some information, but it is important to know whether they are based on ten or 1,000 opinions." The group's results are piquing great interest in both academia and industry, and currently the Stockholm-based company Atlas Antibodies—which recently took over the patent for MolBoolean—is preparing the method for the market. The tool is expected to be of great value in research and, in the long term, also in health care, to improve diagnosis and choice of treatment for cancer. Schematic representation of the MolBoolean principle for detection of interacting and free proteins A and B. a After binding their respective target proteins A and B, proximity probes A (black and magenta) and B (black and green) hybridize to the circle. 
Arrows signify oligonucleotide polarity. b The circle gets enzymatically nicked (cyan arrowhead indicates nicking position). c The circle gets invaded by reporter tags (tag A in magenta, tag B in green). d Enzymatic ligation of the reporter tags to the circle follows. e Rolling circle amplification (RCA) creates long concatemeric products (RCPs). f RCPs are detected via fluorescently labeled tag-specific detection oligonucleotides. Credit: Nature Communications (2022). DOI: 10.1038/s41467-022-32395-w "Our research aims to develop tools to visualize processes within cells and to increase, for example, the knowledge of what happens in cancer cells. Knowing the balance between free and interacting proteins is important when studying cell signaling, and we believe that MolBoolean will be appreciated in many research laboratories," states Doroteya Raykova, researcher at the Department of Pharmaceutical Biosciences and first author of the article. MolBoolean is developed by Ola Söderberg's group at Uppsala University's Faculty of Pharmacy. The work has been carried out in collaboration with researchers at the universities of Porto and Uppsala, SciLifeLab and Atlas Antibodies.
10.1038/s41467-022-32395-w
Medicine
3-D-printed model of stenotic intracranial artery enables vessel-wall MRI standardization
Ju-Yu Chueh et al, Development of a high resolution MRI intracranial atherosclerosis imaging phantom, Journal of NeuroInterventional Surgery (2017). DOI: 10.1136/neurintsurg-2016-012974 Journal information: Journal of NeuroInterventional Surgery
http://dx.doi.org/10.1136/neurintsurg-2016-012974
https://medicalxpress.com/news/2017-04-d-printed-stenotic-intracranial-artery-enables.html
Abstract Background and purpose Currently, there is neither a standard protocol for vessel wall MR imaging of intracranial atherosclerotic disease (ICAD) nor a gold standard phantom to compare MR sequences. In this study, a plaque phantom is developed and characterized that provides a platform for establishing a uniform imaging approach for ICAD. Materials and methods A patient specific injection mold was 3D printed to construct a geometrically accurate ICAD phantom. Polyvinyl alcohol hydrogel was infused into the core shell mold to form the stenotic artery. The ICAD phantom incorporated materials mimicking a stenotic vessel and plaque components, including fibrous cap and lipid core. Two phantoms were scanned using high resolution cone beam CT and compared with four different 3 T MRI systems across eight different sites over a period of 18 months. Inter-phantom variability was assessed by lumen dimensions and contrast to noise ratio (CNR). Results Quantitative evaluation of the minimum lumen radius in the stenosis showed that the radius was on average 0.80 mm (95% CI 0.77 to 0.82 mm) in model 1 and 0.77 mm (95% CI 0.74 to 0.81 mm) in model 2. The highest CNRs were observed for comparisons between lipid and vessel wall. To evaluate manufacturing reproducibility, the CNR variability between the two models had an average absolute difference of 4.31 (95% CI 3.82 to 5.78). Variation in CNR between the images from the same scanner separated by 7 months was 2.5–6.2, showing reproducible phantom durability. Conclusions A plaque phantom composed of a stenotic vessel wall and plaque components was successfully constructed for multicenter high resolution MRI standardization. 
Atherosclerosis MRI Stenosis Vessel Wall Introduction Recent randomized trials showed that despite treatment of intracranial atherosclerosis (ICAD) with aggressive medical management, some patients still have a high risk of stroke. 1 , 2 More rigorous patient selection based on characteristics of intracranial plaques may make it possible to identify patients who would benefit from new therapies, such as refined endovascular procedures 3–5 or novel medical therapies. High resolution MRI (HRMRI) is also a promising technique to differentiate various pathologies that may be the cause of intracranial artery stenosis (eg, atherosclerosis vs vasculitis vs other vasculopathy) and allow characterization of ICAD plaque composition. 6–10 While HRMRI research has been growing throughout the past decade with great success in carotid artery plaque analysis, 11 , 12 its clinical application to ICAD has been limited by a lack of standardization. The approach to developing HRMRI imaging in ICAD has been fragmented, with most investigators focused on designing and validating their own sequences in small, underpowered, single center studies. There is currently no universal standard protocol for HRMRI imaging that would facilitate multicenter studies. 
Furthermore, data generated by HRMRI are dependent on MR instrumentation, sequence parameters used, the MR environment, as well as patient factors, which can make comparisons of results from multiple centers difficult. In order to advance the field of HRMRI ICAD research, establishment of a multicenter network to provide a research infrastructure for promoting collaboration, sharing of protocols and data, and providing a quick and efficient mechanism for studying HRMRI in ICAD is needed. A critical factor for the development of such a multicenter HRMRI ICAD network is the need for a static model, or phantom, to standardize image quality across sequences and centers. We have developed a patient specific basilar artery stenosis imaging phantom to provide the image quality assessment and standardization that is required for the development of a multicenter international HRMRI ICAD network. Our MRI phantom has been specially designed to evaluate, analyze, and optimize the performance of MRI scanners or sequences suitable for imaging small structures. Since the intracranial arteries are very small (average diameter 2–5 mm), with even smaller plaque components, the ability of an individual MRI scanner to generate high quality images of such small structures must be established. Our phantom is based on details from a patient's HRMRI ICAD images, and will enable practical assessment of the image quality obtained from HRMRI sequences using various MR instruments at various sites, without subjecting multiple human subjects to long time periods in the MRI scanner, while controlling for patient motion artifacts during sequence assessment and optimization. Herein, we describe the design and construction of the phantom and the methods used to assess MR image quality at multiple sites. 
Methods Phantom construction With permission from our institutional review board and informed consent, HRMRI imaging data from a patient with ICAD was used to acquire a detailed structure of plaque components ( figure 1 A). The images were segmented for the lumen, fibrous cap, and lipid core (Mimics and Magics; Materialise, Leuven, Belgium) ( figure 1 B). The computer model ( figure 2 A) was used to create the infusion mold by 3D printing ( figure 2 B), as previously described. 13 Polyvinyl alcohol (PVA Mowiol 56-98; Höchst AG, Frankfurt/Main, Germany) with an average molecular weight of 195 000 g/mol and a hydrolysis degree of 98% was mixed with dimethyl sulfoxide (DMSO, D4540; Sigma-Aldrich, St Louis, Missouri, USA) and water. The mixture was allowed to cool to room temperature and was then infused into the acrylonitrile butadiene styrene core shell mold by liquid injection molding, followed by three freeze–thaw cycles for curing. The core shell mold was immersed in xylene for acrylonitrile butadiene styrene dissolution, yielding a PVA stenotic vessel wall ( figure 2 C). Figure 1 Frontal maximum intensity projection image from time of flight MR showing a focal stenosis of the basilar artery (A, top panel, arrow). Single axial slice of high resolution vessel wall MRI showing intracranial atherosclerosis of the basilar artery (A, bottom panel, arrow). The resulting patient specific virtual phantom of atherosclerotic plaque (B) with fibrous cap (C, top panel) and lipid core (C, bottom panel). Figure 2 (A) The stenotic basilar artery is used as a ‘core’ to build a core shell mold (B). Hydrogel is infused into the core shell model and undergoes several freeze–thaw cycles for curing. After mold dissolution in xylene, a hydrogel vascular replica is obtained (C). 
The segmented fibrous cap was mimicked by a mixture of 0.41 wt% gadolinium chloride, 0.44 wt% agarose, 3 wt% carrageenan, 0.05 wt% sodium azide, 96 wt% water, and 0.1 wt% sodium chloride. A lipid core was simulated using a 95.95 wt% milk, 0.05 wt% sodium azide, and 4 wt% carrageenan mixture. Milk was selected over other oil based solutions because as an emulsion of fat and water with the major lipid component being triglycerides, dissolution and dispersion of the carrageenan (a high molecular weight polysaccharide) was more controllably achieved. To precisely control the volume of each plaque component, a plaque mold made of silicone with a known shape and dimension was built. Each mixture was infused into the silicone container and set at −80°C for 1 hour. The silicone container was carefully cut open to release the shaped plaque component. The shaped plaque components were then glued to the PVA vascular replica of a stenotic vessel wall by adding several layers of PVA coating and performing the curing process. Two samples of the phantom were manufactured and each secured in a closed 50 mL centrifuge tube. The models were secured by two through holes at either end of the centrifuge container using a friction fit with silicone tubes that attached to the model. The centrifuge container was filled with distilled water to maintain hydration of the PVA phantom. Imaging of the phantom The plaque phantom was scanned using four different 3 T MRI platforms (Siemens Trio, Siemens Skyra, Philips Achieva, GE MR750 Discovery) at eight different sites. Details of the MR scanner types and parameters are shown in table 1 . For each experiment, the two phantom models were imaged side by side using a 3D T2 weighted sequence, and imaging planes were planned to provide cross sectional thin slice views centered on the stenosis of both phantoms. In addition, longitudinal slices through each phantom were acquired with a multislice T2 weighted sequence. 
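The wt% compositions above can be converted into ingredient masses for any batch size; a small illustrative helper (not part of the paper's protocol), shown here for the fibrous cap mimic:

```python
# Published wt% composition of the fibrous cap mimic (sums to 100%).
FIBROUS_CAP_WT_PCT = {
    "gadolinium chloride": 0.41, "agarose": 0.44, "carrageenan": 3.0,
    "sodium azide": 0.05, "water": 96.0, "sodium chloride": 0.1,
}

def batch_masses(composition_wt_pct, batch_g):
    """Masses (g) of each ingredient for a batch of batch_g grams."""
    return {name: pct / 100 * batch_g for name, pct in composition_wt_pct.items()}

masses = batch_masses(FIBROUS_CAP_WT_PCT, 50)
print(round(masses["carrageenan"], 2), round(sum(masses.values()), 2))  # -> 1.5 50.0
```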
Table 1 MRI models and parameters for phantom scan comparisons We also explored the utility of high resolution cone beam CT to provide a detailed benchmark of lumen geometry for HRMRI based stenosis measurements. Three stenotic vessel wall models were constructed using the techniques described above, filled with Omnipaque 350, sealed with a three-way stopcock, and embedded in water. Images were reconstructed at an isotropic resolution of 0.20 mm from VasoCT acquisitions 14 obtained with a monoplane angiographic system (Allura Xper FD20; Philips Healthcare). Image quality assessment Quantitative comparisons of the scan results for both structural dimensions of plaque components (eg, lumen radius) and image contrast between plaque components were based on the thin cross sectional slices from 3D T2 weighted TSE/FSE acquisitions. The two imaged phantoms in the field of view were extracted from the background and fluid filled tube using manually initialized region growing segmentations and separately analyzed, as shown in figure 3 . Global affine and diffeomorphic non-rigid image registrations were performed to capture geometric variability using elastix 15 and Advanced Normalization Tools. 16 In an iterative fashion, all images were aligned with a template that was subsequently refined to represent the mean geometry and image intensity following a strategy employed in the construction of population average brain models. 17 During each pairwise registration step, a mutual information similarity metric was optimized. Average transformation matrices 18 and deformation fields were used to define an updated template space, and a refined template in this space was obtained by weighted averaging of normalized image intensity values. Templates were refined using 10 iterative registration and normalization steps. 
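The iterative template construction can be illustrated in miniature. The sketch below uses brute-force integer-translation registration in place of the affine/diffeomorphic registrations performed with elastix and ANTs, but follows the same align-then-re-average loop:

```python
import numpy as np

def best_shift(img, template, max_shift=5):
    """Brute-force integer translation minimizing the squared difference
    to the template (a toy stand-in for a real registration algorithm)."""
    best, best_err = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            err = np.sum((np.roll(img, (dy, dx), axis=(0, 1)) - template) ** 2)
            if err < best_err:
                best, best_err = (dy, dx), err
    return best

def build_template(images, n_iter=3):
    """Align every image to the running average and re-average, repeatedly:
    a miniature analogue of iterative template refinement."""
    template = np.mean(images, axis=0)
    for _ in range(n_iter):
        aligned = [np.roll(im, best_shift(im, template), axis=(0, 1))
                   for im in images]
        template = np.mean(aligned, axis=0)
    return template

# three copies of a one-pixel 'feature' at slightly different positions
images = []
for row in (8, 10, 12):
    im = np.zeros((32, 32))
    im[row, 10] = 1.0
    images.append(im)
print(build_template(images).max())  # -> 1.0 (features stack after alignment)
```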
Projections of the regions of interest onto the original image slices were visually inspected to ensure adequate delineation of the vessel wall and plaque components in scans of both models. Figure 3 Each row represents a set of 3D T2 weighted TSE slices for sample model No 1 and No 2 scanned at one of the participating sites. After spatial normalization and intensity normalization, semi-automatic segmentation was performed to define the vessel wall (blue), lumen (yellow), lipid core (green), and fibrous cap (red). These segmentations were projected onto a selection of original image slices as displayed above. Lumen diameter was determined on the final instance of the template and on images of each individual sample, as well as on the CT scans. The Vascular Modeling Toolkit (VMTK) 19 was employed to extract and refine a surface representation of the lumen using a manually initialized level set segmentation from each image. The radius of the maximum inscribed sphere was calculated at each point along the central axis of the individual image's surface models, and all values were mapped to the corresponding position along the central axis of the template model using the spatial transformations that were previously calculated. Variation in measured lumen geometry was characterized by calculation of the weighted mean and SD of the sample radii at every point along the template model's central axis. Due to the limited field of view in the slice direction and slight variations in sample positioning within the containers, cross sections could be planned optimally only for one of the two samples imaged. Samples with partial slice coverage of the stenosis did not contribute to the template refinement procedure and were also excluded from mean lumen diameter measurements. 
To reduce the contribution of less reliable radius calculations based on boundary slices, weighting values tapered off with a smooth Gaussian profile near the sample's centerline endpoints. Image contrast between constituent components of the plaque phantom (ie, lipid core, fibrous cap, and vessel wall) was assessed by calculating a contrast to noise ratio (CNR) for each pair of components, which was defined as the quotient of the difference between the mean intensities and the SD of the background noise. Intensity measurements were obtained in regions that were first defined semi-automatically on the final template image. These binary segmentations of the vessel wall, lipid core, and fibrous cap were obtained from each template image by evolving active contours as implemented in ITK-SNAP. 20 Segmentations were then projected to original images using the transformations computed before in order to calculate intensity means and SDs. Regions for background signal extraction were manually defined in each image. In order to evaluate reproducibility and accuracy of the phantom model construction, per cent stenosis calculations were used to compare lumen diameter profiles extracted from the average HRMRI scans of the two phantoms with the average VMTK based lumen diameter profile obtained from CT scans of the three stenotic vessel wall models. Additionally, the minimum lumen diameter of the stenotic segment was measured and 95% CI calculated. To evaluate the temporal durability of the phantom, lumen diameter and image contrast were compared between phantom images acquired from the same Siemens Skyra scanner performed 7 months apart (scan Nos 1 and 5). To evaluate reproducibility of the phantom images between different MR platforms, image contrast and lumen diameter were compared between scanner models, software, and locations. Within sample absolute differences in CNR between model 1 and model 2 were calculated to estimate image contrast variability between models. 
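The CNR defined above is straightforward to compute; a minimal sketch with NumPy, using hypothetical intensity samples (not values from the study):

```python
import numpy as np

def cnr(region_a, region_b, background):
    """Contrast-to-noise ratio as defined in the text: the difference of the
    mean intensities divided by the SD of the background noise."""
    return abs(np.mean(region_a) - np.mean(region_b)) / np.std(background)

# hypothetical intensity samples for lipid core, vessel wall, and noise
lipid = [180, 175, 185]
wall = [60, 65, 55]
noise = [10, 12, 8, 11, 9]
print(round(cnr(lipid, wall, noise), 1))  # -> 84.9
```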
We subsequently determined the mean variability over all samples and contrast components, and a bootstrap estimate of 95% CI of the mean. Results Cross sectional HRMRI images of the two phantom models were acquired on 10 occasions at eight different sites ( figure 4 ). For each of the two phantom models, a template image in a reference coordinate system was constructed using 10 iterations of registration and averaging, and used to identify plaque components. Quantitative evaluation of the visible lumen diameter in the vicinity of the stenosis ( figure 5 ) showed that there was minimal variation between the scans, demonstrating reproducibility between scanner types. In addition, the two phantom models yielded very similar radius profiles before and at the stenotic portion of the vessel axis ( figure 5 A vs 5B) demonstrating reproducibility in phantom manufacturing. Unbinned contrast enhanced cone beam CT acquisitions of three stenotic vessel wall models without plaques were used to calculate reference per cent stenosis values along the vessel axis ( figure 6 ). The minimum radius along the stenosis was, on average, 0.80 mm (95% CI 0.77 to 0.82 mm) in model 1 and 0.77 mm (95% CI 0.74 to 0.81 mm) in model 2. The minimum radius of the stenosis measured on the patient's HRMRI (0.75 mm) was slightly smaller than the model, but the difference was within the resolution of the measurement. Figure 4 Cross sectional T2 high resolution MRI images of the two phantom models acquired on 10 occasions at eight different sites (see table 1 for site location and scan details). Figure 5 Weighted mean and 95% CI of selected samples' maximum inscribed sphere radii along the centerline (abscissa) of the template model for phantom model No 1 (A) and model No 2 (B), as measured by the Vascular Modeling Toolkit. 
Maximum inscribed sphere radii plotted for each individual sample in phantom model No 1 (C) and model No 2 (D) as measured by high resolution MRI. Only samples with slices covering the stenosis were included (black broken line indicates extent of abscissa overlap among all selected samples). Figure 6 Per cent stenosis measurements were obtained from average lumen radius computations on high resolution cone beam CT scans of three stenotic vessel wall models (red line) and compared with similar measurements on average high resolution MRI (HRMRI) scans of phantom model 1 (blue) and 2 (green). The radius at 5 mm before the maximum stenosis was used as reference for conversion of radii to per cent stenosis values. CNR measurements were obtained for each scan based on the back projected delineations of the vessel wall and the two plaque components ( table 2 ). Overall, the highest CNRs were observed for comparisons between lipid (hyperintense on T2 weighted images) and the vessel wall (hypointense). Intra-plaque CNRs ranged from 2.7 to 49.9 with a mean value of 19.4, a median value of 16.3, and >4.5 in 85% of the samples. Image contrast variability between the two phantom models for each comparison between plaque components was 3.81–5.16, with overall absolute difference in CNR of 4.31 (95% CI 3.82 to 5.78). These small variations relative to consistently high plaque component CNR values confirm reliability in manufacturing technique. Variation in CNR between the two models imaged on the same scanner separated by 7 months (scan Nos 1 and 5) was 2.15–6.16, showing again reproducible phantom characteristics over time. However, there was a large increase in CNR during this time period within each model (13.8±5.8). 
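The radius-to-per-cent-stenosis conversion described in the Figure 6 caption is a one-line formula; a sketch using model 1's reported minimum radius and a hypothetical 1.6 mm reference radius (the true reference value is not given in the text):

```python
def percent_stenosis(r_min_mm, r_ref_mm):
    """Per cent stenosis from the minimum lumen radius and a reference
    radius (the paper uses the radius 5 mm before maximum stenosis)."""
    return (1 - r_min_mm / r_ref_mm) * 100

# model 1's reported minimum radius with a hypothetical 1.6 mm reference:
print(percent_stenosis(0.80, 1.6))  # -> 50.0
```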
Table 2 Contrast to noise measurements for individual scans Discussion Intracranial atherosclerotic disease is the most common cause of stroke worldwide, with a high risk of recurrent stroke. Despite this impact, little is known about how intracranial plaque characteristics are related to the risk of stroke. High resolution vessel wall MRI is a promising tool for understanding the pathology, yet multicenter prospective studies are needed to determine the relationship between plaque components and stroke risk. A critical factor for the development of such multicenter studies is the need for a static phantom to standardize HRMRI image quality between sequences and centers. In this report, we described our first effort to align HRMRI protocols and equipment among different sites for comprehensive assessment of ICAD plaques. Our HRMRI ICAD phantom, based on actual HRMRI images of a basilar stenosis, is a durable model that allows for highly reproducible images and is therefore a promising tool for quality control and sequence implementation at multiple sites. Our manufacturing technique results in phantoms with very similar imaging characteristics, allowing for multiple phantoms to be developed for reproducibility between sites. Combined with standardized image post-processing to quantify image quality, lumen geometry, and plaque characteristics, our reproducible phantom offers a platform for collaborative development and validation of dedicated ICAD imaging protocols. We did observe a stable drift in the contrast of the phantom plaque components over time which, although reproducible, suggests that further improvements in phantom stability are desired or that a contrast standard should be included for normalization. Desired characteristics of an ICAD phantom include: (1) reproducibility and structural stability; (2) physiologically relevant size and geometry; and (3) distinguishable HRMRI characteristics between plaque components. 
In this work, 3D printing provides geometrically accurate ICAD phantoms based on patient data for HRMRI imaging. The challenges of developing a HRMRI phantom include selection of materials that can provide luminal detail and allow for insertion of additional plaque components. PVA is a polymer that has been extensively studied due to its numerous desirable characteristics, such as biocompatibility and aggregating gel formation. These properties make PVA a good material for various biomedical applications, such as an MR imaging phantom. PVA hydrogel was adopted to mimic the vessel wall in this study because it is elastic and has high water content, enabling the creation of phantoms in a range of different shapes. Unlike silicone vascular replicas that have no MR signal, PVA traps water and its composition is easily altered to adjust MR imaging characteristics. PVA hydrogel is a well known crystallization-induced physical gel, and is composed of crystalline regions (junction zones) and amorphous regions (long flexible chains). Crystallites act as junction points, and the crystallinity determines the physical properties of the PVA hydrogel. During the freeze–thaw process, freezing causes phase separation, which is followed by crystallization of the PVA chains. Phase separation is retarded by addition of DMSO. Under such conditions, post-gelation crystallization is allowed to proceed and a homogeneous gel structure is formed. As a result, the mechanical properties of the PVA hydrogel are improved. The improvement in mechanical properties of the PVA hydrogel prevents the PVA vessel wall from collapsing during HRMRI imaging. An additional benefit of using an appropriate ratio of PVA and DMSO is that a gel can be formed with MR relaxation times comparable with those of human tissue. Our work has limitations, which include the lack of all ICAD plaque components in the phantom, specifically intra-plaque hemorrhage and calcification that are seen in atherosclerotic disease. 
However, in future generations of ICAD phantoms, these components can be included in the model. Another limitation is that the phantom was scanned in an artificially static environment without intraluminal flow. Circulation of a blood analogue and an MR compatible cardiac pulse duplicator could be incorporated into future models to generate realistic flow waveforms to account for the impact of flow on HRMRI ICAD imaging. However, this additional complexity may limit adoption of the phantom in application. Also, we limited the imaging evaluation of the phantom plaque components to T2 weighted MRI. Pre- and post-contrast T1 images should be assessed in future analyses of phantom model performance. In future studies, our analysis methodology with intensity and geometry based measurements obtained from additional MRI sequences after image co-registration could be applied. Finally, we did not capture data on the true value of the phantom; specifically, the hours spent imaging the phantom and adjusting the MR technique to generate acceptable images for analysis. In certain centers, the phantom was used to optimize image acquisition in advance of implementing HRMRI in clinical studies. These metrics would have demonstrated the clear pragmatic advantages of such a phantom. We have described a strategy to develop a segmented template image in an unbiased reference coordinate system to quantitatively compare geometric properties and image intensity characteristics among model images acquired at different sites and scanners. Our registration approach followed techniques established in computational anatomy to obtain unbiased templates representing average anatomical shape and acquisition dependent intensity variation. 17 , 21 Image quality assessment was achieved using public domain implementations of established image analysis algorithms and required little user interaction. 
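As part of such standardized post-processing, geometric agreement between a site's lumen segmentation and the shared template can be quantified with an overlap score. The Dice coefficient below is a common choice for this purpose; using it here is my assumption, since the paper does not commit to a specific overlap metric:

```python
import numpy as np

def dice_overlap(mask_a, mask_b):
    """Dice similarity coefficient between two binary segmentation masks:
    twice the intersection over the sum of the two mask sizes."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    intersection = np.logical_and(a, b).sum()
    total = a.sum() + b.sum()
    return 2.0 * intersection / total if total else 1.0

# Toy 1-D "lumen" masks of the same phantom slice from two scanners.
site_1 = [0, 1, 1, 1, 1, 0, 0]
site_2 = [0, 0, 1, 1, 1, 1, 0]
print(dice_overlap(site_1, site_2))
```

A Dice score near 1.0 after co-registration would indicate that two sites are resolving the same lumen geometry, complementing the intensity-based CNR checks.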
HRMRI lumen diameters could be compared against the values obtained with detailed cone beam CT imaging. Within this framework, CNR measurements can be used to define lower bounds on acceptable image quality. Conclusions A technique has been described to manufacture realistic ICAD phantoms. The resulting phantom is reproducible and structurally stable, which enables efficient assessment of the image quality obtained from HRMRI sequences using various MRI manufacturers at various sites. Acknowledgments We acknowledge the contributions of Dr. Wenhong Ding from Zhejiang University and Drs. Weihai Xu, Ming-li Li, and Yannan Yu from Peking Union Medical College for image acquisitions of the phantom during the course of the CHAMPION study.
A collaboration between stroke neurologists at the Medical University of South Carolina (MUSC) and bioengineers at the University of Massachusetts has led to the creation of a realistic, 3D-printed phantom of a stenotic intracranial artery that is being used to standardize protocols for high-resolution MRI, also known as vessel-wall MRI, at a network of U.S. and Chinese institutions, according to an article published online March 9, 2017 by the Journal of NeuroInterventional Surgery. High-resolution or vessel-wall MRI has been used to study the plaque components in vessels in the brain for more than ten years and has the potential to elucidate the underlying pathology of intracranial atherosclerotic disease (ICAD), the leading cause of stroke worldwide, as well as to gauge patient risk and inform clinical trials of new therapies. However, progress has been stymied by the lack of standardization in high-resolution MRI protocols, which poses an obstacle to multicenter trials. "There is a lot of exciting research that is possible with high-resolution MRI techniques, but it has much less opportunity to affect patient care if it can't be systematically distributed to multiple sites and multiple populations," says Tanya N. Turan, M.D., director of the MUSC Stroke Division and senior author of the article. To overcome this obstacle, Turan worked with bioengineers at the University of Massachusetts to produce a phantom of a stenotic intracranial vessel using imaging sequences obtained from a single patient with ICAD at MUSC. The 3-D printed ICAD phantom mimics both the stenotic vessel and its plaque components, including the fibrous cap and the lipid core. The phantom is being shared with collaborating institutions so that it can be used to standardize high-resolution MRI protocols. The imaging data presented in the Journal of NeuroInterventional Surgery article demonstrate the feasibility of using the phantom for standardization and were obtained from six U.S. 
and two Chinese sites. [Image caption: Frontal maximum intensity projection image from time-of-flight MR showing a focal stenosis of the basilar artery (A, top panel, arrow); single axial slice of high resolution vessel wall MRI showing intracranial atherosclerosis of the basilar artery (A, bottom panel, arrow); the resulting patient-specific virtual phantom of atherosclerotic plaque (B) with fibrous cap (C, top panel) and lipid core (C, bottom panel). Credit: Reproduced from Development of a high resolution MRI intracranial atherosclerosis imaging phantom, Chueh et al, published online on March 9, 2017 by the Journal of NeuroInterventional Surgery with permission from BMJ Publishing Group Ltd.] Producing the phantom was a major step in the right direction for standardizing high-resolution MRI ICAD protocols. However, several more years may be necessary to complete the process. The next major challenge for these investigators will be establishing parameters for MRI machines from a variety of manufacturers. So far, MRI parameters have been established for Siemens and GE systems but work is still under way on Philips systems. The phantom is also being shared with sites in China, where the burden of intracranial stenosis is especially high. Turan is collaborating with Weihai Xu, M.D., of Peking Union Medical College, the lead Chinese site, to collect additional data to assess interrater reliability among the participating institutions. Once high-resolution MRI protocols have been standardized and good interrater reliability demonstrated, the international team plans to conduct a prospective observational trial to examine risk prediction at participating centers, which would more quickly meet the required patient enrollment than would a trial conducted in the U.S. alone. "We're only going to be able to advance the field more quickly if we work together," says Turan. "The phantom gives us the tool to be able to work together."
10.1136/neurintsurg-2016-012974
Chemistry
New method developed for producing some metals
Huayi Yin et al. Electrolysis of a molten semiconductor, Nature Communications (2016). DOI: 10.1038/ncomms12584 Journal information: Nature Communications
http://dx.doi.org/10.1038/ncomms12584
https://phys.org/news/2016-08-method-metals.html
Abstract Metals cannot be extracted by electrolysis of transition-metal sulfides because as liquids they are semiconductors, which exhibit high levels of electronic conduction and metal dissolution. Herein by introduction of a distinct secondary electrolyte, we reveal a high-throughput electro-desulfurization process that directly converts semiconducting molten stibnite (Sb 2 S 3 ) into pure (99.9%) liquid antimony and sulfur vapour. At the bottom of the cell liquid antimony pools beneath cathodically polarized molten stibnite. At the top of the cell sulfur issues from a carbon anode immersed in an immiscible secondary molten salt electrolyte disposed above molten stibnite, thereby blocking electronic shorting across the cell. As opposed to conventional extraction practices, direct sulfide electrolysis completely avoids generation of problematic fugitive emissions (CO 2 , CO and SO 2 ), significantly reduces energy consumption, increases productivity in a single-step process (lower capital and operating costs) and is broadly applicable to a host of electronically conductive transition-metal chalcogenides. Introduction Direct electrochemical reduction of ores can improve metal recovery by simplifying the process, reducing energy consumption, as well as capital and operating costs, and offering cleaner, sustainable extraction pathways. The earliest example is aluminium production, thanks to the invention in 1886 of the Hall–Héroult process, which displaced the once conventional pyrometallurgical route and turned aluminium from a precious metal costing more than silver into a common structural material 1 . More recently, as an alternative to the established pyrometallurgical processes for titanium metal production (Hunter and Kroll), Fray, Farthing and Chen presented the idea of direct electrolytic reduction of solid titanium dioxide, the so-called FFC process 2 . On the horizon is molten oxide electrolysis, which has been shown by Sadoway et al. 
to produce liquid iron and by-product oxygen 3 , 4 . Analogous progress with electrolytic reduction of molten sulfides has been obstructed by their high melting temperature, which results in high vapour pressure, the lack of a practical inert anode, and the high degree of electronic conductivity and metal solubility in these melts 5 , 6 , 7 , 8 , 9 , 10 , which results in unacceptably high levels of cell shorting with attendant high energy consumption and low Faradaic efficiency 11 , 12 . Attempts to suppress electronic shorting across the cell typically resort to diluting the semiconducting compound in an ionic melt 13 , 14 with the intention of decreasing the electronic conductivity of the feedstock by lowering the concentration of solvated electrons or by creating trapping centres that lower electronic mobility 8 . Yet dilution unfavourably decreases mass transport with resultant loss in cell productivity. In some instances candidate additives can compete for electro-extraction with the element of interest and at high current density co-deposit, which renders the product unmarketable without further purification. In a radical departure from ionic dilution, here we instead adopt a different strategy, one that does not attempt to reduce electronic conduction of the feedstock but rather inhibits electron access to one of the cell’s electrodes by deployment of a discrete electron-blocking secondary molten electrolyte so as to enable direct metal recovery from ores that were previously deemed electrochemically irreducible. By way of example we demonstrate the production of high-purity liquid antimony via direct electrolysis of the molten semiconductor, antimony sulfide (Sb 2 S 3 ), derived from its predominant ore (mineral stibnite), in stark contrast to the more complicated traditional pyrometallurgical and hydrometallurgical extraction pathways ( Supplementary Note 1 ). 
Furthermore, since all reactants and products are fluid, we have the scalable conditions for a simple, continuous and high-throughput industrial process. Easy metal recovery is further enabled by the density ranking of Sb, Sb 2 S 3 and the secondary molten halide electrolyte, which self-segregate into three immiscible layers. Results Layout of electrolysis cell As seen in Fig. 1 , at the bottom of the cell higher density liquid antimony pools beneath cathodically polarized molten stibnite. At the top of the cell, sulfur issues from a carbon anode immersed in the electron-blocking secondary halide electrolyte, a separate layer distinct from the underlying molten stibnite feedstock. We observe high metal production rates and total conversion of the feedstock. Complete fluidity enables ready collection of products. Figure 1: Schematic of three-layered electrolysis cell. Schematic illustrating the use of an electron-blocking secondary electrolyte for direct electrolytic conversion of molten semiconducting Sb 2 S 3 into liquid Sb and sulfur vapour. Full size image Selection of electrolyte According to the phase diagrams 15 , 16 , Na 2 S and K 2 S are highly soluble in molten NaCl–KCl. This was confirmed experimentally in a transparent fused quartz cell charged with molten NaCl 50.6 mol%–KCl 49.1 mol% and 2 wt% Na 2 S. As shown in Supplementary Fig. 1 , molten NaCl–KCl–Na 2 S was observed to be uniform, red in colour and devoid of any precipitates, indicative of a single-phase liquid. In contrast, Sb 2 S 3 has only limited solubility in molten NaCl–KCl–Na 2 S at 700 °C (ref. 17 ). On this basis, we chose Na 2 S dissolved in molten NaCl–KCl to serve as a secondary electrolyte that conducts sulfide ion while obstructing the flow of electrons. Halides have previously been used in the FFC process for the reduction of solid-state feedstocks, including sulfides 18 , 19 . For example, solid MoS 2 was electrolysed in molten CaCl 2 to produce Mo (ref. 20 ).
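The density-driven self-segregation into three layers can be sketched with back-of-the-envelope numbers. The densities below are rough literature values for the melts near the operating temperature (my assumption for illustration, not figures reported in the paper):

```python
# Approximate densities (g/cm^3) of the three immiscible melts near 700 C;
# rough literature values assumed for illustration only.
layers = {
    "liquid Sb (metal product)": 6.5,
    "molten Sb2S3 (feedstock)": 4.1,
    "molten NaCl-KCl-Na2S (secondary electrolyte)": 1.6,
}

# Denser liquids sink: sort from top of cell to bottom by increasing density.
stack = sorted(layers, key=layers.get)
for depth, name in enumerate(stack, start=1):
    print(f"layer {depth} (top to bottom): {name}")
```

Whatever the exact values, the ordering electrolyte < feedstock < metal is what lets the electron-blocking electrolyte float on the stibnite while the antimony product pools at the bottom for tapping.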
However, it was found that slow S 2– transport throttled productivity and caused CaS contamination. In another instance, tungsten powder was prepared by electrochemical reduction of solid WS 2 in molten NaCl–KCl (ref. 21 ). While these results demonstrate electrolytic reduction of a sulfide compound at moderate temperatures, conversion of solid feedstock into solid metal product is necessarily confined to three-phase conductor/insulator/electrolyte reaction sites, which impedes throughput rendering the process challenging at industrial scale for its low space-time yield 22 , 23 . Herein we overcome these limitations by resorting to an all-fluid system: molten semiconducting feedstock converting to liquid metal product and sulfur vapour by-product, the latter mediated by a molten halide electrolyte doped to render it a sulfide-ion conductor. While Hoar and Ward demonstrated the electrolysis of molten copper sulfide in complicated bicameral laboratory cells containing a cathode and an anode both composed of molten copper sulfide and connected by molten barium chloride chosen for its low vapour pressure at 1,150 °C (ref. 24 ), the absence of alkali sulfide dissolved in the barium chloride led to generation of cuprous ions at the anode with attendant precipitation of Cu 2 S. Indeed, the authors admit that while twin copper sulfide electrodes can be made to work in small, laboratory-scale cells, ‘large-scale cells would obviously present formidable development problems, not the least being methods for feeding molten white metal into the electrode compartments.’ The cell proposed herein obviates the need for bicameral feeding, and since the molten antimony sulfide naturally lies between antimony metal and the molten chloride electrolyte, periodic charging of feedstock and periodic tapping of metal product is more straightforward. No antimony ions are introduced into the molten chloride electrolyte, which for us acts as a sulfide ion conductor and pathway to the anode. 
Put another way, this cell exploits the electro-desulfurization concept of FFC while achieving scalable throughput thanks to the liquidity of both feedstock and product metal. Electrolysis Here in a three-electrode set-up at 700 °C, we demonstrate the conversion of molten Sb 2 S 3 to liquid Sb and sulfur vapour by the action of electric current at constant applied potential. As shown in Fig. 2a , a sample of 1 g Sb 2 S 3 (molten Sb 2 S 3 layer of thickness ∼ 1 cm) was entirely electrolysed in <49 min. The half reactions can be written as: cathode, Sb 2 S 3 + 6e− → 2Sb + 3S 2− ; anode, 3S 2− → 3/2 S 2 (g) + 6e−. Figure 2: Image and X-ray diffraction analysis of cathode product. Cross-sectional images of ( a ) the Sb 2 S 3 electrode and ( b ) the electrolytic Sb. X-ray diffraction patterns of ( c ) the Sb 2 S 3 feedstock and ( d ) the electrolytic Sb product. Scale bar, 1 cm. Full size image After operation, the cell was sectioned, and a metal bead was observed at the bottom of the graphite electrode ( Fig. 2b ), confirming the production of high-density liquid metal at the cathode. Remarkably, Sb 2 S 3 was fully converted to elemental Sb (X-ray diffraction in Fig. 2c,d ), and the purity of the extracted metal exceeded 99.9% (energy-dispersive X-ray spectroscopy (EDS) in Supplementary Fig. 2 ). Significantly, no elemental sodium, potassium, nor halide or sulfide compounds were found in the metal product, suggesting that the electrolytically extracted Sb needs no further treatment. As is the case with all electrolytic reduction operations, high-purity product is predicated on high-purity feedstock; no metal refining can be expected in cells designed for primary extraction. In parallel, in the vicinity of the anode a yellow condensate was observed on the fused quartz cell wall ( Fig. 3a ) and identified by EDS ( Supplementary Fig. 3 ) and X-ray diffraction ( Fig. 3b ) to be high-purity, orthorhombic sulfur. This is consistent with the condensation of sulfur vapour following its evolution on the graphite anode. In Fig.
3e–g the presence of elemental Si and O corresponds to the fused quartz substrate on which the sulfur had deposited. Figure 3: Analysis of anode product. ( a ) Image of the anode product generated on the side wall of the fused quartz cell. ( b ) X-ray diffraction patterns of the yellow anode product (orthorhombic sulfur in the database: PDF No. 00-008-0247). ( c ) Scanning electron microscopy image of the yellow anode product. ( d – g ) EDS element mappings of the yellow anode product: ( d ) the overlap mapping of sulfur, silicon and oxygen; ( e ) the red is sulfur; ( f ) the green is silicon; and ( g ) and the blue is oxygen. The scale of a is 2 cm and c – f is 500 μm. Full size image To determine the operational envelope (extraction rate and cell voltage) relative to the secondary electrolyte’s electrochemical window, the potential of the anode (counter electrode) was monitored in situ during potentiostatic electrolysis. Sulfur evolution is expected to occur at 1.55 V (versus Na + /Na) while undesirable chlorine evolution is expected to occur at potentials above 3.3 V (versus Na + /Na, Supplementary Table 1 ), which in our experimental set-up ( Supplementary Fig. 4 ) is achieved at a current density of 550 mA cm −2 . Accordingly, galvanostatic electrolysis was conducted at 500 mA cm −2 . As shown in Fig. 4a , in the first 10 s, a sharp rise in cell voltage was observed. This is principally attributed to polarization at the anode (increase in potential from 2.2 to 2.8 V versus Na + /Na) on which sulfur vapour evolves. At the cathode, polarization is minimal, consistent with fast charge-transfer kinetics and rapid mass transport associated with electrodeposition of liquid metal from molten salt. Over time, as feedstock is depleted, cathode potential predictably decreases (becomes more negative) and cell voltage increases. Figure 4: Voltage time traces and cathode product of galvanostatic electrolysis. 
( a ) Cathode, anode and cell voltage time traces during galvanostatic electrolysis at 500 mA cm −2 . ( b ) EDS spectrum of the obtained Sb; inset is the image of the electrolytic bead of Sb. The scale bar of the inset is 1.5 cm. Full size image After galvanostatic electrolysis, a bead of high-purity Sb was observed at the bottom of the graphite container ( Fig. 4b ). On visual inspection, the anodic graphite rod revealed no signs of erosion despite service for a complete week ( Supplementary Fig. 5 ). The voltage recorded at the anode during galvanostatic electrolysis is in agreement with cyclic voltammetry on graphite showing that oxidation occurs at potentials exceeding 2.2 V ( Supplementary Fig. 6 ). By comparison of the mass of the electrolytic Sb to the integrated current during the course of galvanostatic electrolysis at the high constant current density of 500 mA cm −2 , the Faradaic current efficiency is determined to be 88% with an energy consumption of 1.5 kWh per kg Sb. Discussion Here in a cell comprising an electron-blocking secondary molten NaCl–KCl–Na 2 S electrolyte, efficient direct electrolysis of molten semiconducting Sb 2 S 3 has been shown to produce in a single-step high-purity liquid Sb and S vapour at a high rate of 500 mA cm −2 without fugitive gas emissions, starting with feedstock derived from antimony’s predominant ore, and achieving a low energy consumption of 1.5 kWh per kg Sb. It has not escaped our notice that the immiscible secondary electrolyte approach could well be adapted for use with other molten semiconductors, not only sulfides, but even transition-metal oxides 25 . 
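The reported figures (88% Faradaic efficiency, 1.5 kWh per kg Sb) are Faraday's-law bookkeeping on the integrated current. A sketch of that arithmetic; the ~2 V cell voltage below is an illustrative assumption chosen to land near the reported energy figure, not a value quoted in the text:

```python
# Faraday's-law bookkeeping for Sb recovery (three electrons per Sb atom).
F = 96485.0        # C/mol, Faraday constant
M_SB = 121.76      # g/mol, molar mass of antimony
N_ELECTRONS = 3    # electrons transferred per Sb atom

def faradaic_efficiency(mass_g, charge_c):
    """Recovered mass over the mass Faraday's law predicts for the charge passed."""
    theoretical_g = charge_c * M_SB / (N_ELECTRONS * F)
    return mass_g / theoretical_g

def energy_kwh_per_kg(cell_voltage_v, efficiency):
    """Specific energy: charge needed per kg Sb at a given Faradaic efficiency,
    times the cell voltage, converted from joules to kWh."""
    charge_per_kg = N_ELECTRONS * F * 1000.0 / (M_SB * efficiency)  # C/kg
    return cell_voltage_v * charge_per_kg / 3.6e6                   # J -> kWh

# An assumed ~2 V cell voltage at the reported 88% efficiency gives roughly
# the paper's ~1.5 kWh per kg Sb.
print(round(energy_kwh_per_kg(2.0, 0.88), 2))
```

Running the calculation backwards like this is a useful sanity check: the reported efficiency and specific energy are mutually consistent only for a mean cell voltage near 2 V, which matches the voltage traces in Fig. 4.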
The combination of eutectic mixes of ores to decrease melting temperature of the charge, of alloying of metals to allow for liquid metal collection and of design of specific secondary electrolytes to vitiate electronic shorting while facilitating transport of the chalcogenide ion, could pave the way for the recovery of other elements by sustainable, modern electrometallurgical means. The key to exploitation of this new approach is the dual recognition of the semiconducting nature of molten transition-metal chalcogenides and their immiscibility with both liquid transition metals and molten alkali-metal halides, which can be doped to be chalcogenide-ion conductors. Methods Electrolysis set-up and materials A three-electrode set-up was sealed in a stainless-steel test vessel heated in a tube furnace. The working electrode consisted of a graphite cup (inner diameter (ID): 1 cm) containing 1 g Sb 2 S 3 pre-melted by an induction smelter in an argon-filled glove box. The Ag/AgCl reference electrode was fabricated by loading 1.5 g LiCl 59.2 mol%–KCl 40.8 mol% (99.9% purity) with 2 wt% AgCl into a closed-one-end mullite tube (ID=4 mm), and inserting a Ag wire to serve as the current lead. The counter electrode was composed of a graphite rod (6 mm outer diameter, >99.999%) connected to a tungsten wire. The electrolyte was composed of 500 g NaCl 50.6 mol%–KCl 49.1 mol% (99.9% purity) with 2 wt% Na 2 S contained in an alumina crucible (ID 6 cm, height 12 cm). In an inert atmosphere the cell was assembled and transferred to a stainless-steel test vessel. The assembly was then initially held under vacuum at 250 °C for 12 h to remove residual moisture. The test vessel was then refilled with high-purity Ar gas and kept under constant flow while the temperature was ramped up to 700 °C. Electrochemistry and characterization Cyclic voltammetry was initially conducted with a tungsten working electrode to calibrate the Ag/AgCl reference electrode.
Electrochemical behaviour of graphite was characterized with a graphite rod working electrode and the aforementioned reference and counter electrodes in the NaCl–KCl–Na 2 S melt. Potentiostatic and galvanostatic electrolyses were conducted using the three-electrode set-up described above. All electrochemical measurements were conducted by an electrochemical workstation (Autolab PGSTAT302N, Metrohm AG), and the potential between the working and counter electrodes was monitored by a four-electrode battery testing system (Maccor 4300). Throughout this manuscript, all potential values are expressed relative to Na + /Na (−2.2 V versus Ag/AgCl) unless otherwise noted. When the electrolysis was terminated, the electrodes were lifted out of the melt and cooled down in the argon-protected headspace of the stainless steel test vessel. The working electrode graphite cup was then cross-sectioned and characterized by X-ray diffraction (Panalytical X’pert Pro Multipurpose Diffractometer with Cu Kα radiation) and scanning electron microscopy (JEOL 6610LV) fitted with an energy dispersive spectrometer (EDS, IXRF system, Model 55i). Two-electrode fused quartz cell A demonstration cell made of a closed-one-end fused quartz tube (ID 1.5 cm) was used to observe the evolution of the gas product from the anode. A charge of 3 g Sb 2 S 3 together with 15 g NaCl–KCl–Na 2 S was introduced into the fused quartz cell, heated by a natural gas–oxygen flame. Two graphite rods of diameter 3 and 6 mm served as cathode and anode, respectively. Galvanostatic electrolysis at 500 mA cm −2 was conducted between the two graphite electrodes. During the electrolysis, the demo cell was under argon flow. Data availability The authors declare that the data supporting the findings of this study are available within the article and its Supplementary Information files. Additional information How to cite this article: Yin, H. et al. Electrolysis of a molten semiconductor. Nat. Commun.
7:12584 doi: 10.1038/ncomms12584 (2016).
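The methods above quote all potentials against Na + /Na, with the Ag/AgCl reference sitting at −2.2 V on that scale, so converting a measured reading is a one-line offset (the example readings below are illustrative):

```python
# The Ag/AgCl reference sits at -2.2 V on the Na+/Na scale (the calibration
# quoted in the methods), so E(vs Na+/Na) = E(vs Ag/AgCl) + 2.2 V.
AG_AGCL_VS_NA = -2.2  # volts

def to_na_scale(e_vs_agagcl):
    """Convert a potential measured against Ag/AgCl to the Na+/Na scale."""
    return e_vs_agagcl - AG_AGCL_VS_NA

# Illustrative reading: 0.0 V vs Ag/AgCl corresponds to 2.2 V vs Na+/Na,
# the onset of oxidation on graphite reported in the paper.
print(to_na_scale(0.0))
```

On this scale the operating window is easy to read off: sulfur evolution at 1.55 V, the undesirable chlorine evolution only above 3.3 V.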
The MIT researchers were trying to develop a new battery, but it didn't work out that way. Instead, thanks to an unexpected finding in their lab tests, what they discovered was a whole new way of producing the metal antimony—and potentially a new way of smelting other metals, as well. The discovery could lead to metal-production systems that are much less expensive and that virtually eliminate the greenhouse gas emissions associated with most traditional metal smelting. Although antimony itself is not a widely used metal, the same principles may also be applied to producing much more abundant and economically important metals such as copper and nickel, the researchers say. The surprising finding is reported this week in the journal Nature Communications, in a paper by Donald Sadoway, the John F. Elliott Professor of Materials Chemistry; postdoc Huayi Yin; and visiting scholar Brice Chung. "We were trying to develop a different electrochemistry for a battery," Sadoway explains, as an extension of the variety of chemical formulations for the all-liquid, high temperature storage batteries that his lab has been developing for several years. The different parts of these batteries are composed of molten metals or salts that have different densities and thus inherently form separate layers, much as oil floats on top of water. "We wanted to investigate the utility of putting a second electrolyte between the positive and negative electrodes" of the liquid battery, Sadoway says. Unexpected results But the experiment didn't go quite as planned. "We found that when we went to charge this putative battery, we were in fact producing liquid antimony instead of charging the battery," Sadoway says. Then, the quest was on to figure out what had just happened. 
The material they were using, antimony sulfide, is a molten semiconductor, which normally would not allow for the kind of electrolytic process that is used to produce aluminum and some other metals through the application of an electric current. "Antimony sulfide is a very good conductor of electrons," Sadoway says. "But if you want to do electrolysis, you only want an ionic conductor"—that is, a material that is good at conducting molecules that have a net electric charge. But by adding another layer on top of the molten semiconductor, one that is a very good ionic conductor, it turned out the electrolysis process worked very well in this "battery," separating the metal out of the sulfide compound to form a pool of 99.9 percent pure antimony at the bottom of their cell, while pure sulfur gas accumulated at the top, where it could be collected for use as a chemical feedstock. In typical smelting processes, the sulfur would immediately bond with oxygen in the air to form sulfur dioxide, a significant air pollutant and the major cause of acid rain. But instead this contained process provides highly purified metal without the need to worry about scrubbing out the polluting gas. Simple, efficient process Electrolysis is much more efficient than traditional heat-based smelting methods, because it is a single-step continuous process, Sadoway explains. The discovery of that process is what transformed aluminum, more than a century ago, from a precious metal more valuable than silver into a widely used inexpensive commodity. If the process could be applied to other common industrial metals such as copper, it would have the potential to significantly lower prices as well as reduce the air pollution and greenhouse gas emissions associated with traditional production. "The thing that made this such an exciting finding," Sadoway says, "is that we could imagine doing the same for copper and nickel, metals that are used in large quantities." 
It made sense to start with antimony because it has a much lower melting point—just 631 degrees Celsius—compared to copper's 1,085 °C. Though the higher melting temperatures of other metals add complication to designing an overall production system, the underlying physical principles are the same, and so such systems should eventually be feasible, he says. "Antimony was a good test vehicle for the idea, but we could imagine doing something similar for much more common metals," Sadoway says. And while this demonstration used an ore that is a sulfide (metal combined with sulfur), "we see no reason why this approach couldn't be generalized to oxide feedstocks," which represent the other major category of metal ores. Such a process would produce pure oxygen as the secondary product, instead of sulfur. Ultimately, if steel could be produced by such a process, it could have a major impact, because "steel-making is the number one source of anthropogenic carbon dioxide," the main greenhouse gas, Sadoway says. But that will be a more difficult process to develop because of iron's high melting point of about 1,540 °C. "This paper demonstrates a novel approach to produce transition metals by direct electrolysis of their sulfides," says John Hryn, an Institute Senior Fellow at Northwestern University and a senior advisor at Argonne National Laboratory, who was not involved in this work. He praised the MIT team's use of a second electrolyte in the cell to counter the effects of electron conduction, "which has previously stymied efficient high-volume production of transition metals by electrolysis. This seminal paper should usher in a new environmentally sound methodology for extraction of metals from sulfide ores." Hryn adds that although this demonstration used one specific metal, "The primary value of using antimony is that it can be a demonstration metal for other transition-metal recovery by electrolysis."
In addition, he says, "The potential goes beyond the production of transition metals by electrolysis. The value is the approach used to control electronic conduction in an electrolytic cell, which has value beyond metal production."
10.1038/ncomms12584
Medicine
Virtual reality could be used to treat autism
Ambika Bansal et al, Movement-Contingent Time Flow in Virtual Reality Causes Temporal Recalibration, Scientific Reports (2019). DOI: 10.1038/s41598-019-40870-6 Journal information: Scientific Reports
http://dx.doi.org/10.1038/s41598-019-40870-6
https://medicalxpress.com/news/2019-03-virtual-reality-autism.html
Abstract Virtual reality (VR) provides a valuable research tool for studying what occurs when sensorimotor feedback loops are manipulated. Here we measured whether exposure to a novel temporal relationship between action and sensory reaction in VR causes recalibration of time perception. We asked 31 participants to perform time perception tasks where the interval of a moving probe was reproduced using continuous or discrete motor methods. These time perception tasks were completed pre- and post-exposure to dynamic VR content in a block-counterbalanced order. One group of participants experienced a standard VR task (“normal-time”), while another group had their real-world movements coupled to the flow of time in the virtual space (“movement contingent time-flow; MCTF”). We expected this novel action-perception relationship to affect continuous motor time perception performance, but not discrete motor time perception. The results indicated duration-dependent recalibration specific to a motor task involving continuous movement such that the probe intervals were under-estimated by approximately 15% following exposure to VR with the MCTF manipulation. Control tasks in VR and non-VR settings produced similar results to those of the normal-time VR group, confirming the specificity of the MCTF manipulation. The findings provide valuable insights into the potential impact of VR on sensorimotor recalibration. Understanding this process will be valuable for the development and implementation of rehabilitation practices. Introduction The ability to estimate the passage of time with precision is fundamental to our ability to perceive and interact with the world. One key characteristic of time perception is that it is highly plastic, supporting the ability to adapt to changing environmental conditions 1 , 2 , 3 , 4 . 
Although the plasticity of time perception has been well established, several open questions remain regarding the strength, persistence, and specificity of temporal recalibration effects. For instance, do these recalibration effects generalize across different modes of duration estimation (e.g., continuous and discrete motor reproduction)? Answering such questions will improve our understanding of the computational and neural basis of time perception, which has attracted significant recent interest, partly due to the appeal of preventing or reversing maladaptive changes to time perception with age 5 , 6 , 7 , 8 .

Computational Perspectives of Time Perception

Time perception is a multifaceted construct, and several theoretical perspectives have been proposed in the temporal processing literature (for review, see 9 ). The dominant model of temporal processing involves a central internal clock 10 , 11 , 12 , 13 . In this model, time perception is represented by the subjective count of pulses that accumulate within a given interval. There is evidence 14 that these mechanisms, theorised to underlie the conscious perception of time, also drive the timing of motor performance (also see 15 ). Research suggests that the neural pacing signal of the internal clock system, rather than being static, can be modulated by sensory inputs from external stimuli. Such a supra-modal system contrasts with the idea of dedicated mechanisms for each sense, although it is possible that a central mechanism subserves supra-second intervals, while sub-second intervals are processed by modality-specific systems 12 , 16 . Others have proposed models that do not require pulse intervals for timekeeping, suggesting instead that time is encoded in spatial 17 , 18 or temporal 19 patterns of neuronal firing. These state-dependent network models have gained recognition on the basis of support from psychophysical studies of temporal processing and learning mechanisms 20 , 21 , 22 .
While each model has proven useful, there is a lack of compelling in vivo physiological evidence for the existence of a pacing signal or state-dependent representations of timing 23 .

Neurophysiology of Time Perception

Describing the processes underlying time perception at the neural level has been a major challenge in the field. Although still a matter of debate, the brain structures thought to be involved in time perception include the cerebellum, prefrontal cortex, basal ganglia, and supplementary motor area 16 , 24 , 25 . Recent transcranial magnetic stimulation (TMS) evidence has supported the role of the cerebellum and prefrontal cortex in time perception. TMS used to disrupt cerebellar activity impairs timing of sub-second durations 26 , 27 , 28 , whereas inhibitory TMS over prefrontal cortex impairs timing of supra-second durations 27 , 29 . It has been proposed that the prefrontal cortex operates in a feedback role by using the sensory information following an action to update temporal expectations 30 , whereas the cerebellum plays a feed-forward role in making temporal predictions prior to an action 16 . Basal ganglia activity is associated with the encoding of temporal processing and representation of stimulus duration, which is demonstrated both by the behavioural data of Parkinson's patients with basal ganglia dysfunction 31 and by functional magnetic resonance imaging (fMRI) studies 24 , 32 , 33 , 34 . These neuroanatomical studies, in conjunction with non-human pharmacological studies, have supported the idea that timekeeping is modulated by dopamine neurotransmission, specifically at the D2 receptor 16 , 35 , 36 . Dopamine antagonists (e.g., neuroleptics) decrease subjective estimations of event duration 37 , whereas dopamine agonists (e.g., methamphetamine) lead to an increase in duration estimations 38 , 39 .
Recent evidence also suggests a prominent role for GABA in time perception; magnetic resonance imaging of the rat cortex shows that elevated GABA levels correspond to underestimation of the perceived duration of sub-second intervals, possibly due to diminished awareness of visual stimuli 37 , 40 .

Evidence of Temporal Recalibration

Neurophysiology studies not only provide insight into the neural mechanism of the internal clock, but they also support the idea that time perception can be manipulated. The speed of this internal clock can be increased or decreased depending on the drug administered, which can lead to behavioural changes 38 , 41 . Another phenomenon that has been shown to speed up the internal clock is the click train effect, whereby listening to a train of clicks (e.g., 5 sec of clicks at 5 clicks/sec) induces a 10% decrease in the perceived duration of subsequent intervals 2 , 14 , 42 . In line with pharmacological studies, the click train is thought to speed up the internal clock by increasing arousal levels acting on the calibration unit. The stimulus preceded by the train of clicks is therefore perceived as shorter than the one preceded by silence. These studies provide evidence that temporal recalibration can occur, at least at the sub-second scale. Additionally, evidence shows that motor reproduction of interval timing is similarly affected by click trains, suggesting that a common temporal oscillator may underlie both conscious time perception and motor performance 14 , 15 . Changes in temporal processing have been reported following repeated exposure to temporal misalignments in multimodal cues, in the form of a shift in the estimated simultaneity of post-training stimuli 4 , 43 , 44 , 45 . Temporal recalibration effects in response to novel temporal correlations between motor performance and sensory feedback have also been described.
Latency between action and visual feedback leads to predictable and persistent behavioral adaptation aftereffects 3 , 46 , 47 , 48 . Rohde and colleagues 3 observed perceptual learning effects when a lag was introduced between hand and cursor movement in a manual tracking task. Adaptation to visuo-motor latency revealed a decrease in motor error with time, and large aftereffects in motor timing following adaptation. As in previous studies 47 , 48 , the recalibration effect was found to generalize between motor timing and perceptual measures (e.g., simultaneity judgments).

Perceptual Learning as a Framework for Temporal Recalibration

Most studies that have examined the plasticity of time perception have adopted a perceptual learning approach. Theories such as sensorimotor contingency theory 49 , 50 highlight the role of the action-perception loop in achieving perceptual learning. This theory describes sensorimotor contingencies as the role our actions play in the sensory inputs we receive: perceptual qualia emerge from learned relationships between action and the incoming sensory data produced as a consequence of the action. These sensorimotor contingencies are implicitly learned over time and shape perception 51 . Bompas and O'Regan 52 provide an elegant example of how sensorimotor contingencies are learned by artificially coupling eye movements and color changes. By exposing participants to specific colors upon saccades to the right or left over an extended duration, they produced a predictable and persistent change in the perceived color of a neutral patch that was contingent on saccade direction. If the qualia of perception are determined by environmental interaction, it is conceivable that even high-level perceptual qualia such as time can be altered by inducing a novel sensorimotor contingency.

Applications of Plasticity in Time Perception

Manipulations of time perception are of practical interest.
Several neurological conditions are associated with deficits in the perception of timing, including ADHD 53 , autism 54 , and schizophrenia 55 , and these deficits can be reduced through timekeeping training 56 , 57 (for review, see 58 ). A similar decrement in time perception has also been observed in the normal aging process 6 , 8 . Due to the appeal of preventing or reversing these maladaptive changes, interventions that influence time perception have been widely sought. Although several studies have investigated the plasticity of time perception within the framework of perceptual learning, none has examined whether exposure to a novel relationship between action (e.g., moving the body) and sensory feedback (e.g., the speed and duration of environmental events) can produce changes in time perception at the supra-second time-scale.

Study Objectives

The objective of this study was to investigate whether temporal recalibration is induced by exposure to a novel sensorimotor contingency between movement and the speed of events. This question can only be investigated using a naturalistic setting that affords re-learning of normal action-perception loops. In order to provide a naturalistic setting in a controlled environment, we chose to administer the movement-contingent task using virtual reality (VR) technology. VR, which uses sensory stimulation devices to simulate an interactive environment, has the unique ability to dissociate the natural link between perception and action 59 . We attempted to induce a novel sensorimotor contingency by coupling the speed and duration of visual events to the bodily movement of the participant. We term this 'movement-contingent time-flow' (MCTF). In this manipulation, if the participant moved their hands or head, the speed of events in the surrounding virtual environment was normal. However, if the participant stopped moving, the speed of events slowed down.
With exposure to this manipulation, we expected participants to adapt their perception of time such that when they were static, probe durations were perceived as longer. To test whether temporal recalibration had occurred, we conducted pre- and post-exposure time perception tasks. The psychophysical tasks required participants to observe a probe moving in a circle and then reproduce the duration, speed, and trajectory of the probe. We assessed both continuous motor and discrete motor time perception tasks. We also measured the effects of VR without the MCTF mechanic, to assess whether exposure to VR alone affected time perception. As an additional control, we conducted a non-VR control task for a subset of participants in order to determine whether physical activity alone results in temporal recalibration.

Hypothesis

The present study tests the hypothesis that adaptation to a novel relationship between action and the perceived timing of events in virtual reality results in recalibration of time perception. We predicted that time would be perceived as slower when participants were static following exposure to VR with the MCTF manipulation, due to the novel relationship between movement and event speed acquired during the task. As such, we predicted a different pattern of data across the continuous motor and discrete motor tasks. For the continuous motor task, participants were static when observing the probe, and were moving when reproducing it. Perceived durations were predicted to be longer when the probe was observed (no movement), and shorter when it was reproduced (movement). This difference was not predicted for the discrete motor task, where the 'observe' and 'reproduce' phases of the trials were both static (see Table 1 ).

Table 1 Predictions for continuous motor and discrete motor time perception tasks following adaptation to VR movement-contingent time-flow.
We expected to find no difference between pre- and post-adaptation estimates for participants who were exposed to VR with no MCTF manipulation, or for participants who simply performed a dynamic motor coordination task (a ball-toss) instead of the VR task.

Methods

Participants

Thirty-four students from the University of Waterloo participated in the study (the final dataset included data from 31 participants due to our exclusion criteria, as described in Results; 18 females, 13 males; age in years M = 21.1, SD = 2.4). All participants reported having normal or corrected-to-normal vision and reported no sensory, musculoskeletal, or neurological disorders. Protocols were approved by the University of Waterloo Research Ethics Committee and were carried out in accordance with the Declaration of Helsinki. All participants gave informed written consent, and all were naïve to the hypotheses of the research. Participants either volunteered their time or were remunerated $10 per hour for their participation. Among participants whose data comprised our final dataset, prior experience with VR was generally low: nine participants had used VR before, but each of those had used it on only one occasion. With respect to average experience with video games, the final dataset consisted of the following distribution of participants: 1 reported playing ≥15 hours of video games per week, 3 reported playing 5–14 hours per week, 9 reported playing ≤5 hours per week, and 18 reported that they typically did not play video games.

Apparatus

The equipment used for the time perception tasks was a laptop (R590, Samsung, 1366 × 768 resolution, 60 Hz refresh rate). The laptop was positioned at the participant's eye level and the approximate distance from the screen to their eyes was 60 cm (visual angle was ~18 × 32°), although a chin rest was not used.
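As a rough check on the reported geometry, the visual angle subtended by the display follows from its physical size and the 60 cm viewing distance. The screen dimensions below are hypothetical (the paper reports only the pixel resolution and the resulting visual angle); they are chosen to reproduce the stated ~18 × 32° field:

```python
import math

def visual_angle_deg(size_cm: float, distance_cm: float) -> float:
    # Visual angle subtended by an extent of size_cm viewed from distance_cm.
    return math.degrees(2 * math.atan(size_cm / (2 * distance_cm)))

# Hypothetical physical screen size (cm); not stated in the paper.
SCREEN_W_CM, SCREEN_H_CM, VIEW_DIST_CM = 34.4, 19.0, 60.0

horizontal = visual_angle_deg(SCREEN_W_CM, VIEW_DIST_CM)  # ~32 deg
vertical = visual_angle_deg(SCREEN_H_CM, VIEW_DIST_CM)    # ~18 deg
print(round(horizontal, 1), round(vertical, 1))
```

Note that without a chin rest the true viewing distance, and hence the effective visual angle, varies somewhat from trial to trial.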
The time perception task was run using Matlab R2016a with the Psychophysics toolbox 60 . The mouse used in this task was a USB mouse (Apple Inc.), which was positioned on a mouse mat with a gel wrist rest. The VR environment was presented via a head-mounted display (Rift CV1, Oculus VR; 90 Hz refresh rate, 1080 × 1200 resolution per eye) with hand-held controllers (Touch, Oculus VR). The headset and controllers were motion tracked by a combination of the inertial (accelerometer/gyroscope) and optical (3 × infrared Oculus cameras) sensors that were included with the system. Movement of the head was translated into motion of the observer viewpoint in the VR task. The system ran on a custom-built workstation with a high-end graphics board (GTX 1070, NVIDIA). The software packaged with the head-mounted display was used to calibrate the capture space (2.41 × 2.41 m) and the inter-pupillary distance of the headset for each participant. The equipment used for the non-VR control task included 20 foam-rubber balls, three buckets, and a pair of tight-fitting goggles. The foam-rubber balls were 5.08 cm in diameter and consisted of both white and colored balls. The target buckets were 21.59 cm wide and 21.59 cm tall. The goggles, which were used in order to replicate the narrow visual field of the head-mounted display, produced a field-of-view of approximately 90 × 50° (horizontal × vertical).

Time perception tasks

In the time perception tasks, participants were first presented with a fixation cross for 500 ms. Participants were instructed to fixate on the cross throughout the task. Participants were presented with a blue circular probe located at approximately 3.5° eccentricity from the central fixation cross. The probe then rotated around the fixation cross for a given duration and velocity before disappearing. Next, participants were prompted to reproduce the timing of the probe using one of two methods.
The first method required continuous motor reproduction of the probe's spatiotemporal trajectory by moving a white circle using a mouse (the continuous motor task), and the second method required a button press to signal the start and end of the perceived probe duration (the discrete motor task). All responses were recorded at the moment of button-press (i.e., not on button-release). The two tasks are described in detail below (also see Fig. 1 ).

Figure 1 Progression of the continuous motor and discrete motor reproduction time perception tasks.

Continuous motor reproduction task: To reproduce the speed and duration of the probe movement, participants were instructed to move their hand in a circle while holding a mouse. The white circle that appeared after the probe disappeared moved in a one-to-one manner relative to the hand movement of the observer. Participants were required to press the mouse button to indicate duration onset and offset. Supplementary Movie 1 depicts four example trials from this condition.

Discrete motor reproduction task: To reproduce the duration of the probe movement, participants first pressed the mouse button indicating duration onset, then waited until the perceived duration had elapsed, and finally pressed the button again to indicate duration offset.

Virtual reality tasks

The VR content used in the experiment was an off-the-shelf consumer game (Robo Recall, Epic Games; Fig. 2 ). This was a first-person action game wherein the user took the role of a robot whose task was to destroy other robots that had been let loose in a realistic city environment. Participants were instructed to play the game by shooting the target robots until the time limit was reached. Points towards the participants' scores were received upon performing one of several actions, such as destroying an enemy, avoiding enemy projectiles, and chaining together multiple enemy takedowns.
All points were computed by the in-game scoring system and the experimenter recorded the total score after termination of the VR block.

Figure 2 (A) Depiction of the setup used for the VR task. (B) Screenshot of the VR content (Robo Recall, Epic Games, NC, USA).

Depending on their group assignment, participants experienced one of two versions of this game. The manipulated factor between these groups was whether the speed of events in the game was either normal (VR control, n = 13 following exclusions: see Results) or modified (VR MCTF, n = 12 following exclusions: see Results) as a function of the movement of the player's hands (tracked by the Touch Controllers) and head (tracked by the Oculus Rift headset). In order to achieve the latter, we implemented a user-made modification of the game (MGS Studios, Berkshire, UK). In this modification, the movement of the participants was coupled to the speed of events occurring in their surroundings. If the head and hands were stationary, event speed was decreased by a factor of 8 compared to control conditions (e.g., a ball that would normally hit the ground after 1 s of falling would instead take 8 s to fall). Conversely, movements of 100 cm/s or greater were associated with normal event speed (i.e., no decrease in the speed of events). Participant movement speeds between 0 cm/s and 100 cm/s were linearly and inversely related to event speed; for example, 70 cm/s movement caused event speed to decrease by a factor of 2.4 (30% of 8), and 30 cm/s movement reduced event speed by a factor of 5.6 (70% of 8). The program did not combine the speed of multiple sensors (e.g., moving both hands at 40 cm/s resulted only in a factor of 4 decrease in event speed). In the VR control condition, participants were exposed to the same experience, but without the movement-time coupling.
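A minimal sketch of the movement-to-event-speed mapping, inferred from the worked examples above (8× slowdown at rest, no slowdown at ≥100 cm/s, linear in between). The implementation of the actual game modification is not published, and taking the fastest single sensor as the governing speed is our assumption, based only on the note that sensor speeds were not combined:

```python
def event_speed_slowdown(movement_cm_per_s: float, max_factor: float = 8.0) -> float:
    # Slowdown factor applied to in-game event speed; 1.0 means normal speed.
    # Linear, inverse relation between 0 and 100 cm/s, clamped so that
    # movement at or above 100 cm/s leaves event speed unchanged.
    v = max(0.0, movement_cm_per_s)
    return max(1.0, max_factor * (1.0 - v / 100.0))

def combined_slowdown(sensor_speeds_cm_per_s) -> float:
    # Assumption: the fastest tracked sensor (head or either hand) governs
    # event speed, since speeds of multiple sensors were not summed.
    return event_speed_slowdown(max(sensor_speeds_cm_per_s))

print(event_speed_slowdown(0))    # 8.0 (stationary)
print(event_speed_slowdown(70))   # ~2.4 (30% of 8)
print(event_speed_slowdown(30))   # ~5.6 (70% of 8)
print(event_speed_slowdown(120))  # 1.0 (normal event speed)
```

The clamp at 1.0 encodes the stated boundary condition that movement at or above 100 cm/s produces no decrease in event speed.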
Non-virtual reality control task

In addition to the two VR groups (MCTF and control), we conducted a non-VR control task with a subset of participants (n = 6) in order to decouple the effect of VR exposure from the possible effects of physical activity on time perception, as durations tend to be overestimated following physical activity 61 , 62 . The objective of the task was to throw foam balls into the correct color of bucket in order to accrue points (Fig. 3 ). The experimenter began the task by throwing the balls one at a time to the participant, who was instructed to toss the white balls into the white bucket, and the colored balls into the green bucket (correctly doing so gained the participant 1 point). Participants were also informed that they could throw any of the balls into the red bucket, thus earning 4 points. Participants were instructed to accrue the highest possible points tally. Once all of the balls in the bag were tossed, participants walked around the area to collect the balls that did not reach the buckets, while the experimenter counted the ones that did. Once the balls were returned to the experimenter, the next round of throws commenced. This process continued until 10 minutes had elapsed.

Figure 3 (A) Depiction of the non-VR control task. (B) Schematic overview of the non-VR control task from a bird's-eye view. P = Participant; E = Experimenter. Objects not presented to scale.

Design and procedure

Participants first completed two time perception tasks (continuous motor and discrete motor) to establish pre-adaptation measures (details below). Next, participants were exposed to the VR task and a post-adaptation time perception task (e.g., continuous motor). Finally, participants were exposed to the VR task a second time and then completed another post-adaptation time perception task (e.g., discrete motor).
Participants each had a predetermined group assignment and order of time perception tasks that were counterbalanced across participants. Each time perception task contained 15 unique trials, which consisted of 3 angular velocities (25, 75, 125 °/s) and 5 different durations that spanned a logarithmic space (0.5, 0.9, 1.7, 3.2, 6.0 seconds). Each unique trial was repeated 5 times, resulting in a total of 75 trials per task. The order of the trials was randomized. The order in which the continuous motor and discrete motor tasks were performed was counterbalanced across participants. Each time perception task required approximately 10 minutes to complete. Upon performing the time perception tasks for the first time (pre-adaptation), 4 practice trials were completed. The practice trials were conducted at 0.5 and 6 seconds for both 25 and 125 °/s. Participants were randomly assigned to one of the two VR conditions: MCTF VR or control VR. In both conditions, the VR exposure occurred in two phases, and exposure lasted for 10 minutes in each phase. An additional group of participants was assigned to complete the non-VR control task.

Results

We assessed whether participants correctly performed the time perception tasks by measuring the correlation between the 'actual' and 'perceived' durations across trials for each individual (conditions and trial blocks were pooled). The data from one participant were excluded from further analysis due to a non-significant correlation (p > 0.05), and data from two additional participants were excluded due to errors with data recording (excluded from VR MCTF n = 2; excluded from VR control n = 1; none excluded from non-VR control). For the remaining 31 participants, the mean Pearson r(148) for the correlation between actual and perceived durations was high, both in the continuous motor task (M = 0.89 ± 0.09 SD) and in the discrete motor task (M = 0.89 ± 0.08 SD; all ps < 0.001).
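The inclusion check described above — correlating each participant's actual against reproduced durations — can be sketched in a few lines (toy data; in the study the correlation was computed over 150 trials per participant per task):

```python
from math import sqrt

def pearson_r(xs, ys):
    # Pearson correlation between actual and reproduced probe durations.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical reproductions of the five probe durations: a participant who
# tracks duration reasonably well yields r near the ~0.89 group means above.
actual = [0.5, 0.9, 1.7, 3.2, 6.0]
reproduced = [0.55, 1.0, 1.8, 3.0, 5.5]
print(pearson_r(actual, reproduced))  # > 0.99
```

A participant whose r did not differ significantly from zero would fail the inclusion criterion and be excluded, as happened for one participant here.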
Analysis of participants' performance in the VR task revealed that there was no significant difference between scores in the first and second VR blocks (paired-samples t-tests; VR control, t(12) = 1.31, p = 0.21; VR MCTF, t(11) = 0.81, p = 0.43), nor was there a difference between the VR MCTF and VR control groups (independent-samples t-tests; block 1, t(23) = 1.27, p = 0.22; block 2, t(23) = 1.85, p = 0.08). Informal observation of participants during the task indicated that participants in the VR MCTF group used several strategies in order to optimise performance. The majority of participants demonstrated a 'freezing' strategy: the participant would adopt and maintain a specific posture for 2–3 seconds and observe the environment while time was moving slowly; following this, the participant would break out of this posture in order to act (e.g., shooting an enemy). While the majority of participants in the VR MCTF group adopted this strategy, it was used infrequently, and did not appear to result in measurable differences in performance (i.e., total point scores).

Main effects

Raw scores for temporal duration estimates are plotted below for the discrete (Fig. 4 ) and continuous (Fig. 5 ) time perception tasks. These scores were separated by block and were used to compute 'adaptation scores': the difference in duration estimates between pre- and post-adaptation (see Fig. 6 ). As such, negative adaptation scores indicate that post-adaptation estimates were shorter than pre-adaptation estimates, and positive adaptation scores indicate that estimates were longer in the post-adaptation block.

Figure 4 Tukey boxplots for duration estimates in the discrete motor time perception task, plotted on a log-log scale. Data are split by condition (VR control, non-VR control, VR MCTF) and trial velocity. Individual points are participant averages. Error bars indicate the farthest points within 1.5× interquartile ranges 68 .
Figure 5 Tukey boxplots for duration estimates in the continuous motor time perception task, plotted on a log-log scale. Data are split by condition (VR control, non-VR control, VR MCTF) and trial velocity. Individual points are participant averages. Error bars indicate the farthest points within 1.5× interquartile ranges.

Figure 6 Conditions are as in the previous figure, but scores indicate adaptation effects (sec) by task type (continuous motor/discrete motor). Individual points are participant averages. Dotted line indicates zero adaptation. Error bars indicate the farthest points within 1.5× interquartile ranges.

We assessed differences between pre- and post-adaptation scores using a series of one-sample tests (Wilcoxon signed rank tests) on adaptation scores (post- minus pre-adaptation duration estimates) that were pooled across duration and velocity conditions. The results revealed adaptation effects that were significantly different from zero for the VR MCTF group in the continuous motor time perception task (M = −0.44 s, SEM = 0.13 s, Wilcoxon's V = 3, p = 0.002), as shown in Fig. 6 . For the VR MCTF group in the discrete time perception task (M = 0.12 s, SEM = 0.05 s, Wilcoxon's V = 64, p = 0.052), and for the VR control group in both continuous (M = −0.12 s, SEM = 0.13 s, Wilcoxon's V = 32, p = 0.38) and discrete (M = 0.01 s, SEM = 0.07 s, Wilcoxon's V = 47, p = 0.95) tasks, we observed no evidence of significant adaptation. To determine the source of the adaptation effects, we conducted a mixed-design ANOVA with the within-subjects factors trial duration (0.5–6.0 sec), trial velocity (25/75/125 °/s), and task type (continuous/discrete motor time perception task), and the between-subjects factor MCTF (MCTF/normal time). Greenhouse-Geisser corrections were applied to account for sphericity assumption violations where appropriate.
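The adaptation scores entering these tests are simply each participant's post-adaptation minus pre-adaptation duration estimates, pooled across duration and velocity conditions; a minimal sketch (the Wilcoxon signed rank tests used in the actual analysis are not reimplemented here):

```python
from statistics import mean

def adaptation_score(pre_estimates_s, post_estimates_s):
    # Post- minus pre-adaptation mean duration estimate for one participant;
    # negative values indicate shorter estimates after adaptation.
    return mean(post_estimates_s) - mean(pre_estimates_s)

# Toy example (hypothetical numbers): uniformly shorter post-adaptation
# estimates give a negative score, the direction reported for the VR MCTF
# group in the continuous motor task.
print(adaptation_score([1.0, 2.0, 3.0], [0.8, 1.7, 2.6]))  # ~ -0.3
```

One such score per participant, per task, then feeds the one-sample tests against zero described above.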
Adaptation differed significantly as a function of both trial duration (F(1.63, 37.53) = 14.12, p < 0.001, Generalized Eta Squared (GES) = 0.10) and velocity (F(1.59, 36.52) = 10.95, p < 0.001, GES = 0.02). Trend analysis indicated that adaptation scores were more negative in the longer duration trials (linear trend t(92) = 6.93, p < 0.001, Cohen's d = 0.72) and in the slower velocity trials (linear trend t(46) = 4.66, p < 0.001, Cohen's d = 0.68). Task type also affected adaptation, with more negative adaptation scores in the continuous motor task than the discrete motor task (t(23) = 3.72, p = 0.001, Cohen's d = 0.77). No main effect of VR MCTF was observed (p = 0.38).

Interactions

There were several significant interactions, including interactions between MCTF and other factors that were relevant to our hypothesis. We observed a significant three-way interaction between MCTF, task type, and trial duration (F(2.04, 46.85) = 6.25, p = 0.004, GES = 0.02) and a significant two-way interaction between MCTF and task type (F(1, 23) = 5.58, p = 0.027, GES = 0.03). In addition, task type interacted significantly with both trial duration (F(2.04, 46.85) = 3.61, p = 0.015, GES = 0.01) and trial velocity (F(1.80, 41.37) = 3.75, p = 0.04, GES = 0.005). No other interactions were significant (ps ≥ 0.06).

Post-hoc comparisons

We conducted least-squares means tests with Tukey adjustments for multiple comparisons 63 to follow up significant interactions and main effects. Examining the three-way interaction, we found that adaptation scores in the continuous motor task for the MCTF group were significantly more negative only at trial durations of 3.2 sec (t(127.04) = 2.51, p = 0.013, Cohen's d = 0.22) and 6.0 sec (t(127.04) = 4.50, p < 0.001, Cohen's d = 0.40) (other ps ≥ 0.23). For trials where the probe duration was 6.0 sec, the magnitude of adaptation was 0.87 sec, which corresponds to an under-estimation of probe durations of 14.5%.
The interaction between MCTF and task type was due to adaptation scores being significantly more negative for the MCTF group in the continuous motor task (t(44.66) = 2.21, p = 0.03, Cohen's d = 0.32) but not the discrete motor task (p = 0.41). This interaction is depicted in Fig. 7 .

Figure 7 Adaptation effects for each group pooled across velocity and duration conditions. Negative adaptation effects indicate lower duration estimates post-adaptation compared to pre-adaptation. Error bars indicate SEM.

Next we examined the interaction between task type and the duration/velocity of trials. Compared to the discrete motor task, adaptation scores in the continuous motor task were significantly more negative for all durations (ts(56.24) ≥ 2.17, ps ≤ 0.03, Cohen's ds ≥ 0.28) except 0.5 sec (p = 0.08). In the continuous motor task, adaptation scores tended to be more negative for low-velocity trials than higher-velocity trials (ts(88.62) ≥ 3.05, ps ≤ 0.06, Cohen's ds ≥ 0.23; although the difference between 75 and 125 °/s did not reach significance, p = 0.06). Velocity had no effect on adaptation scores in the discrete motor task (ps ≥ 0.15). We also conducted a post-hoc analysis to assess whether the difference in post-adaptation duration estimates between the VR MCTF and VR control groups was attributable to group differences in spatial error when reproducing the movement of the probe. In order to assess spatial error, we computed the absolute distance between the endpoint of the probe and the endpoint of the participant's movement in each axis (left-right and up-down; X and Y, respectively) for each trial. We averaged these values across trials for each participant, resulting in average endpoint error estimates for X and Y. Exemplar spatiotemporal trajectories are plotted in Fig. 8 .

Figure 8 Exemplar trajectories for 5 trials from a single participant at the medium probe velocity (75 °/s).
Probe movement plotted in blue, participant reproduction plotted in yellow. (A) Participant and probe displacements in the X and Y axes for the 5 durations (0.5 to 6.0 s). (B) Displacement of the probe and the participant's reproduction over time in the X-axis of movement for the same trials. In these examples, the participant tended to over-estimate the probe duration, while maintaining a low spatial error in X and Y.

We found no significant difference between the post-adaptation spatial error values for the VR MCTF and VR control groups in terms of X (t(24) = 0.28, p = 0.78, Cohen's d = 0.11), Y (t(24) = 0.30, p = 0.77, Cohen's d = 0.12), or the average of the X and Y values (t(24) = 0.29, p = 0.77, Cohen's d = 0.10). The absolute spatial errors demonstrated by the two groups were as follows: VR MCTF, MX = 12.46 mm, SDX = 4.76 mm, MY = 12.57 mm, SDY = 7.36 mm; VR control, MX = 11.90 mm, SDX = 5.34 mm, MY = 11.72 mm, SDY = 7.08 mm. Given that the difference between duration estimates in the VR MCTF and VR control groups was largest in the 6-second trials, we also assessed spatial errors in these trials alone. Again, no difference was observed between the groups with respect to X error (t(24) = 0.24, p = 0.81, Cohen's d = 0.09), Y error (t(24) = 0.20, p = 0.84, Cohen's d = 0.12), or averages of X and Y errors (t(24) = 0.24, p = 0.81, Cohen's d = 0.09) in the 6-second probe trials (absolute spatial errors: VR MCTF, MX = 15.03 mm, SDX = 6.80 mm, MY = 13.83 mm, SDY = 9.15 mm; VR control, MX = 14.43 mm, SDX = 5.99 mm, MY = 13.17 mm, SDY = 7.30 mm).

Non-virtual reality control task

We conducted a similar analysis for the data obtained in the non-VR control task. We assessed whether there were adaptation effects using one-sample tests (Wilcoxon signed rank tests) on the adaptation scores.
For both the continuous (M = −0.10 s, SEM = 0.14 s, Wilcoxon’s V = 8, p = 0.69) and discrete (M = 0.12 s, SEM = 0.11 s, Wilcoxon’s V = 16, p = 0.31) tasks, we found no evidence of significant adaptation. The results were overall highly similar to those obtained for the VR control group (see Fig. 6). As in the VR control group, we observed a significant effect of velocity on adaptation effects (F(2, 10) = 4.36, p = 0.043, GES = 0.05), which was driven by significantly more negative adaptation effects in the low velocity (25 °/sec) trials compared to the high velocity (125 °/sec) trials (t(19.87) = 3.22, p = 0.01, Cohen’s d = 0.70) only for the continuous motor task (discrete motor task ps ≥ 0.72). We observed no other significant main effects or interactions (ps ≥ 0.08).

Cybersickness

Cybersickness levels were very low across participants. Only 5 participants reported cybersickness levels greater than zero, and of those 5 participants, the maximum score reported was 3 (on a scale of 0 to 20), with an average of 2.38 (SEM = 0.26).

Discussion

The plasticity of time perception at the sub-second scale is well established in previous literature 2,3,4,42. Here we examined the potential for inducing a novel relationship between action and perception that would influence the perceived duration of time. In line with predictions, the results revealed significant effects of the manipulation we introduced in VR. These effects emerged only in the continuous motor time-perception task, such that the probe intervals were under-estimated by approximately 15% following exposure to the VR MCTF manipulation for the longest duration trials. The lack of adaptation for the VR and non-VR control groups supports the conclusion that temporal recalibration was induced by means of a novel sensorimotor contingency between movement and event speed.
The sensorimotor contingency theory of perception 49,50,51,64 has been supported by evidence that novel relationships can be experimentally induced between perception and action for several perceptual qualia, including color 52 and musical sound 65. Consistent with these studies, our results indicate a significant reduction in the reproduced probe duration following adaptation to a movement-contingent VR game. The current results provide the first evidence for a novel contingency between movement and time perception. The findings have implications for our understanding of the embodied nature of temporal processing, and reiterate the potential for temporal recalibration that has been shown previously. This potential is highly relevant for future rehabilitation initiatives that focus on preventing or reversing maladaptive changes in temporal processing, such as those occurring with age. The results revealed no evidence in support of temporal recalibration in participants who completed either the control VR task or control non-VR task. This supports a specific effect of the VR manipulation and suggests that neither VR nor physical activity alone was responsible for the temporal recalibration we observed. Adaptation effects were obtained in the continuous motor psychophysical task, but not the discrete motor task. We interpret this effect in terms of a sensorimotor contingency induced by coupling the speed of events in VR to the movement of the participant. In other words, participants who experienced the manipulation perceived the passage of time to be slower when they were not moving, and vice versa. This is indicated by a significant reduction in reproduced duration between observing the probe (static observer) and reproducing its movement (moving observer). In the discrete motor task, observing and reproducing the probe required no movement from the participant, and thus no effects of the novel sensorimotor contingency were observed.
Results revealed a significant reduction in the reproduced probe duration of approximately 400 ms on average following exposure to the VR MCTF manipulation. However, it should be noted that the majority of this effect was carried by significant adaptation in the longest duration trials, where approximately 0.9 sec of adaptation was observed for 6 sec trials (~15% decrease in the reproduced probe duration). Note that while the MCTF manipulation caused event speed to slow by a factor of eight, the reproduced durations were only decreased by a factor of 1.15 in the condition that gave rise to the strongest effect here (6 sec trials). We interpret this result as evidence of a (partial) multiplicative slowdown in the perceived passage of time when the observer was stationary. For short trials (0.5 sec) a multiplicative slowdown would generate small differences between the perceived and reproduced durations, relative to the variability associated with internal/motor noise. At longer duration trials, however, a multiplicative modulation of the reproduced duration would result in larger effects that can be more easily detected among the noise sources. Since spatial error did not differ between the VR Control and VR MCTF groups (overall, and in the 6-second trials alone), we can rule out a specific effect of the MCTF manipulation on spatial errors during the reproduction task, and instead conclude that temporal recalibration occurred for the VR MCTF participants. Several previous studies have documented sub-second temporal recalibration. Vroomen and colleagues exposed observers to audio-visual stimuli that contained latencies of 100–200 ms 4. Their results revealed a shift in the point of subjective simultaneity between two multisensory cues of approximately 10–15 ms that occurred in the direction of the exposure lag.
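To make the multiplicative account concrete, a minimal numerical sketch follows; the gain of 1.15 is taken from the 6-sec trials reported above, while the reproduction-noise level is an assumed value used only for illustration:

```python
# Predicted shortening of reproduced durations under a multiplicative
# slowdown with gain 1.15 (from the 6 sec trials). The noise SD is an
# assumed, illustrative stand-in for internal/motor variability.
gain = 1.15
noise_sd = 0.3  # assumed reproduction noise (s)

for probe in (0.5, 3.0, 6.0):
    effect = probe - probe / gain      # predicted shortening (s)
    detectable = effect > noise_sd     # crude detectability check
    print(f"{probe:.1f} s probe: effect = {effect:.2f} s, "
          f"exceeds assumed noise: {detectable}")
```

Under these assumptions only the longer probes yield predicted effects that clearly exceed the noise floor, consistent with adaptation emerging most strongly in the 6 sec trials.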
Rohde and colleagues also reported sub-second temporal adaptation by introducing a 200 ms lag between hand and cursor movement in a manual tracking task 3 . The time course of adaptation to visuomotor latency revealed an adaptation effect in terms of a 30 ms decrease in motor error with time, and large aftereffects in motor timing. The effects we observed here were large relative to these previous studies, and this may be partially due to the exposure delays used in previous studies. For instance, Vroomen and co-workers 4 did not expose participants to larger delays between multisensory cues than 200 ms due to the likelihood that greater delays would extinguish perceptual binding of the cues. Participants were generally very reliable with respect to their responses on the time perception tasks, as we observed strong correlations between the reported and actual durations of the probes for the majority of participants. The results showed that participants tended to over-estimate durations for the shorter trials, especially at the shortest duration trials. This finding aligns with work by others showing overestimation of durations when short intervals (<1 sec) were reproduced 10 , 66 , 67 . However, the same studies identified trends consistent with underestimation for longer durations (5–10 sec), which we did not observe here. While the overestimation at short durations could indicate a perceptual effect (whose origin is unexplained), it might also reflect a methodological constraint on the speed at which participants could make a response. Future replications with more rapid response techniques (e.g., tactile interfaces) may shed light on this apparent overestimation. In the current study, trial durations in the time perception tasks were limited to 6 seconds. 
Given that the short-duration trials typically revealed no adaptation effects, future replications of this study can benefit from discarding these conditions in favor of assessing whether similar effects emerge in longer duration trials. Another problematic issue inherent to our design was the lack of experimental control involved in the completion of the VR task. We employed an off-the-shelf, high fidelity VR game in our experiment to enhance the perceptual affordances and naturalism of the environment, but this choice meant that each participant had a slightly different experience while completing the task. This likely contributed a source of noise to the adaptation effects, given that some participants might have explored and engaged with the environment less than others. Greater experimental control is required to clarify the inter-individual variability in temporal recalibration effects. In addition, the movement of the participant was determined by the movement of the hands and the head, whereas in future experiments, a more compelling sensorimotor contingency may be induced if full body motion tracking is employed to modulate VR event speed. We also observed differences in the behaviour of participants through informal observations, but since we did not quantify this behaviour (e.g., analysis of head or hand velocity during VR), we are unable to determine if differences in performance strategies across participants in the VR MCTF group contributed to any variance in the adaptation effects. However, analyses of participant performance did not reveal any evidence of a difference in performance outcomes (i.e., total point scores). We intended the task to have a high ecological validity and strong affordances for action, and as such, the VR MCTF manipulation was expected to affect behaviour in overt and subtle ways. However, a full quantification of the behavioural changes produced by this manipulation would be a desirable outcome of future research. 
According to the verbal self-report measures of comfort, participants experienced very little cybersickness during the experiment, suggesting that the VR task was a comfortable experience. These findings provide evidence that VR can be used to induce temporal recalibration, which may contribute to clinical interventions aimed at preventing or reversing maladaptive changes to time perception, such as those observed in aging, Parkinson’s disease, schizophrenia, autism, and ADHD (for review, see 54). However, the current findings are highly preliminary: the effect is specific to a motor task involving continuous movement and may not, in its present form, be beneficial for rehabilitation purposes. Due to our specific time-flow manipulation, participants showed a reduction in the reproduced probe durations when stationary compared to when moving; in practice, it may be more useful to slow down the perceived passage of time when an individual is moving, thus enabling better control of the body (e.g., during a fall, when establishing stable balance requires relocating the centre of gravity). It remains to be seen if adaptation effects opposite to those we obtained here can be produced. Several other extensions, refinements, and replications will be required before a similar paradigm is used in a clinical setting.

How does the brain represent the temporal recalibration effects observed here? Although our experiment was not designed to test mechanistic theories of time perception, our findings are consistent with multiple accounts, including the internal clock 11. For instance, a movement-dependent increase in the number of pulses emitted by the internal oscillator per unit of time, or modulation of the gating of these pulses by the calibration unit (as in the click train effect 2,14,42), would result in the pattern of data obtained. At the same time, the spatial or temporal pattern of neuronal activity may have been modulated by adaptation 17,18,19.
The transfer of adaptation from a VR setting to a simple psychophysics task supports adaptation of a central mechanism for time perception, in line with other evidence 13 . However, future neuroimaging studies are needed to provide insight into the neural basis of sensorimotor contingency acquisition for temporal processing. In summary, here we exposed participants to a VR experience where movement and event-speed were experimentally coupled, and we observed evidence that a novel relationship between action and event-speed caused recalibration of time perception in a psychophysical task. The sensorimotor contingency between movement and the speed of events induced temporal recalibration that did not emerge in control VR and non-VR conditions. This study provides further evidence of the flexibility of time perception, and indicates that sensorimotor contingency theory offers a useful framework for studying high-level perceptual qualia such as time perception. The utility of VR for modulating time perception is also evident from these results, although future refinements are needed before the practical relevance of these findings can be established. Data Availability The data that support the findings of this study will be made freely available upon publication on the Open Science Framework: Bansal, A., Weech, S., & Barnett-Cowan, M. (2018, September 10). Movement-Contingent Time Flow in Virtual Reality Causes Temporal Recalibration. Retrieved from osf.io/nt2jh.
Playing games in virtual reality (VR) could be a key tool in treating people with neurological disorders such as autism, schizophrenia and Parkinson's disease. The technology, according to a recent study from the University of Waterloo, could help individuals with these neurological conditions shift their perceptions of time, which their conditions lead them to perceive differently. "The ability to estimate the passage of time with precision is fundamental to our ability to interact with the world," says co-author Séamas Weech, post-doctoral fellow in Kinesiology. "For some individuals, however, the internal clock is maladjusted, causing timing deficiencies that affect perception and action. "Studies like ours help us to understand how these deficiencies might be acquired, and how to recalibrate time perception in the brain." The UWaterloo study involved 18 females and 13 males with normal vision and no sensory, musculoskeletal or neurological disorders. The researchers used a virtual reality game, Robo Recall, to create a natural setting in which to encourage re-calibration of time perception. The key manipulation of the study was that the researchers coupled the speed and duration of visual events to the participant's body movements. The researchers measured participants' time perception abilities before and after they were exposed to the dynamic VR task. Some participants also completed non-VR time-perception tasks, such as throwing a ball, to use as a control comparison. The researchers measured the actual and perceived durations of a moving probe in the time perception tasks. They discovered that the virtual reality manipulation was associated with significant reductions in the participants' estimates of time, by around 15 percent. "This study adds valuable proof that the perception of time is flexible, and that VR offers a potentially valuable tool for recalibrating time in the brain," says Weech. 
"It offers a compelling application for rehabilitation initiatives that focus on how time perception breaks down in certain populations." Weech adds, however, that while the effects were strong during the current study, more research is needed to find out how long the effects last, and whether these signals are observable in the brain. "For developing clinical applications, we need to know whether these effects are stable for minutes, days, or weeks afterward. A longitudinal study would provide the answer to this question." "Virtual reality technology has matured dramatically," says Michael Barnett-Cowan, neuroscience professor in the Department of Kinesiology and senior author of the paper. "VR now convincingly changes our experience of space and time, enabling basic research in perception to inform our understanding of how the brains of normal, injured, aged and diseased populations work and how they can be treated to perform optimally." The article, "Movement-Contingent Time Flow in Virtual Reality Causes Temporal Recalibration" was written by Ambika Bansal, Séamas Weech and Michael Barnett-Cowan, and published in Scientific Reports.
10.1038/s41598-019-40870-6
Physics
Researchers achieve first ever acceleration of electrons in a proton-driven plasma wave
E. Adli et al. Acceleration of electrons in the plasma wakefield of a proton bunch, Nature (2018). DOI: 10.1038/s41586-018-0485-4 Journal information: Nature
http://dx.doi.org/10.1038/s41586-018-0485-4
https://phys.org/news/2018-08-electrons-proton-driven-plasma.html
Abstract High-energy particle accelerators have been crucial in providing a deeper understanding of fundamental particles and the forces that govern their interactions. To increase the energy of the particles or to reduce the size of the accelerator, new acceleration schemes need to be developed. Plasma wakefield acceleration 1 , 2 , 3 , 4 , 5 , in which the electrons in a plasma are excited, leading to strong electric fields (so called ‘wakefields’), is one such promising acceleration technique. Experiments have shown that an intense laser pulse 6 , 7 , 8 , 9 or electron bunch 10 , 11 traversing a plasma can drive electric fields of tens of gigavolts per metre and above—well beyond those achieved in conventional radio-frequency accelerators (about 0.1 gigavolt per metre). However, the low stored energy of laser pulses and electron bunches means that multiple acceleration stages are needed to reach very high particle energies 5 , 12 . The use of proton bunches is compelling because they have the potential to drive wakefields and to accelerate electrons to high energy in a single acceleration stage 13 . Long, thin proton bunches can be used because they undergo a process called self-modulation 14 , 15 , 16 , a particle–plasma interaction that splits the bunch longitudinally into a series of high-density microbunches, which then act resonantly to create large wakefields. The Advanced Wakefield (AWAKE) experiment at CERN 17 , 18 , 19 uses high-intensity proton bunches—in which each proton has an energy of 400 gigaelectronvolts, resulting in a total bunch energy of 19 kilojoules—to drive a wakefield in a ten-metre-long plasma. Electron bunches are then injected into this wakefield. Here we present measurements of electrons accelerated up to two gigaelectronvolts at the AWAKE experiment, in a demonstration of proton-driven plasma wakefield acceleration. 
Measurements were conducted under various plasma conditions and the acceleration was found to be consistent and reliable. The potential for this scheme to produce very high-energy electron bunches in a single accelerating stage 20 means that our results are an important step towards the development of future high-energy particle accelerators 21 , 22 . Main The layout of the AWAKE experiment is shown in Fig. 1 . A proton bunch from CERN’s Super Proton Synchrotron (SPS) accelerator co-propagates with a laser pulse (green), which creates a plasma (yellow) in a column of rubidium vapour (pink) and seeds the modulation of the proton bunch into microbunches (Fig. 1 ; red, bottom images). The protons have an energy of 400 GeV and the root-mean-square (r.m.s.) bunch length is 6–8 cm 18 . The bunch is focused to a transverse size of approximately 200 μm (r.m.s.) at the entrance of the vapour source, with the bunch population varying shot-to-shot in the range N p ≈ (2.5–3.1) × 10 11 protons per bunch. Proton extraction occurs every 15–30 s. The laser pulse used to singly ionize the rubidium in the vapour source 23 , 24 is 120 fs long with a central wavelength of 780 nm and a maximum energy of 450 mJ 25 . The pulse is focused to a waist of approximately 1 mm (full-width at half-maximum, FWHM) inside the rubidium vapour source, five times the transverse size of the proton bunch. The rubidium vapour source (Fig. 1 ; centre) has a length of 10 m and diameter of 4 cm, with rubidium flasks at each end. The rubidium vapour density and hence the plasma density n pe can be varied in the range 10 14 –10 15 cm −3 by heating the rubidium flasks to temperatures of 160–210 °C. This density range corresponds to a plasma wavelength of 1.1–3.3 mm, as detailed in Methods. A gradient in the plasma density can be introduced by heating the rubidium flasks to different temperatures. Heating the downstream (Fig. 
1 ; right side) flask to a higher temperature than the upstream (left side) flask creates a positive density gradient, and vice versa. Gradients in plasma density have been shown in simulation to produce large increases in the maximum energy attainable by the injected electrons 26 . The effect of density gradients here is different from that for short drivers 27 . In addition to keeping the wake travelling at the speed of light at the witness position, the gradient prevents destruction of the bunches at the final stage of self-modulation 28 , thus increasing the wakefield amplitude at the downstream part of the plasma cell. The rubidium vapour density is monitored constantly by an interferometer-based diagnostic 29 . Fig. 1: Layout of the AWAKE experiment. The proton bunch and laser pulse propagate from left to right across the image, through a 10-m column of rubidium (Rb) vapour. This laser pulse (green, bottom images) singly ionizes the rubidium to form a plasma (yellow), which then interacts with the proton bunch (red, bottom left image). This interaction modulates the long proton bunch into a series of microbunches (bottom right image), which drive a strong wakefield in the plasma. These microbunches are millimetre-scale in the longitudinal direction ( ξ ) and submillimetre-scale in the transverse ( x ) direction. The self-modulation of the proton bunch is measured in imaging stations 1 and 2 and the optical and coherent transition radiation (OTR, CTR) diagnostics. The rubidium (pink) is supplied by two flasks at each end of the vapour source. The density is controlled by changing the temperature in these flasks and a gradient may be introduced by changing their relative temperature. Electrons (blue), generated using a radio-frequency source, propagate a short distance behind the laser pulse and are injected into the wakefield by crossing at an angle. Some of these electrons are captured in the wakefield and accelerated to high energies. 
The accelerated electron bunches are focused and separated from the protons by the quadrupoles and dipole magnet of the spectrometer (grey, right). These electrons interact with a scintillating screen, creating a bright intensity spot (top right image), allowing them to be imaged and their energy inferred from their position. Full size image The self-modulation of the proton bunch into microbunches (Fig. 1 ; red, bottom right image) is measured using optical and coherent transition radiation diagnostics (Fig. 1 ; purple) 30 . However, these diagnostics have a destructive effect on the accelerated electron bunch and cannot be used during electron acceleration experiments. The second beam-imaging station (Fig. 1 ; orange, right) is used instead, providing an indirect measurement of the self-modulation by measuring the transversely defocused protons 31 . These protons are expelled from the central propagation axis by transverse electric fields that are present only when the proton bunch undergoes modulation in the plasma. Electron bunches with a charge of 656 ± 14 pC (where the uncertainty is the r.m.s.) are produced and accelerated to 18.84 ± 0.05 MeV (where the uncertainty is the standard error of the mean) in a radio-frequency structure upstream of the vapour source 32 . These electrons are then transported along a beam line before being injected into the vapour source. Magnets along the beam line are used to control the injection angle and focal point of the electrons. For the results presented here, the electrons enter the plasma with a small vertical offset with respect to the proton bunch and a 200-ps delay with respect to the ionizing laser pulse (Fig. 1 , bottom left). The beams cross approximately 2 m into the vapour source at a crossing angle of 1.2–2 mrad. Simulations show that electrons are captured in larger numbers and accelerated to higher energies when injected off-axis rather than collinearly with the proton bunch 17 . 
The normalized emittance of the witness electron beam at injection is approximately 11–14 mm mrad and its focal point is close to the entrance of the vapour source. The delay of 200 ps corresponds to approximately 25 proton microbunches resonantly driving the wakefield at n pe = 2 × 10 14 cm −3 and 50 microbunches at n pe = 7 × 10 14 cm −3 . A magnetic electron spectrometer (Fig. 1 , right) enables measurement of the accelerated electron bunch 33 . Two quadrupole magnets are located 4.48 m and 4.98 m downstream of the exit iris of the vapour source and focus the witness beam vertically and horizontally, respectively, to more easily identify a signal. These are followed by a 1-m-long C-shaped electromagnetic dipole with a maximum magnetic field of approximately 1.4 T. A large triangular vacuum chamber sits in the cavity of the dipole. This chamber is designed to keep accelerated electron bunches under vacuum while the magnetic field of the dipole induces an energy-dependent horizontal deflection in the bunch. Electrons within a specific energy range then exit this vacuum chamber through a 2-mm-thick aluminium window and are incident on a 0.5-mm-thick gadolinium oxysulfide (Gd 2 O 2 S:Tb) scintillator screen (Fig. 1 ; blue, right) attached to the exterior surface of the vacuum chamber. The proton bunch is not greatly affected by the spectrometer magnets, owing to its high momentum, and continues to the beam dump. The scintillating screen is 997 mm wide and 62 mm high with semi-circular ends. Light emitted from the scintillator screen is transported over a distance of 17 m via three highly reflective optical-grade mirrors to an intensified charge-coupled device (CCD) camera fitted with a lens with a focal length of 400 mm. The camera and the final mirror of this optical line are housed in a dark room, which reduces ambient light incident on the camera to negligible values. 
The energy of the accelerated electrons is inferred from their horizontal position in the plane of the scintillator. The relationship between this position and the energy of the electron is dependent on the strength of the dipole, which can be varied from approximately 0.1 T to 1.4 T. This position–energy relationship has been simulated using the Beam Delivery Simulation (BDSIM) code 34 . The simulation tracks electrons of various energies through the spectrometer using measured and simulated magnetic-field maps for the spectrometer dipole, as well as the relevant distances between components. The accuracy of the magnetic-field maps, the precision of the distance measurements and the 1.5-mm resolution of the optical system lead to an energy uncertainty of approximately 2%. The overall uncertainty, however, is dominated by the emittance of the accelerated electrons, and can be larger than 10%. The use of the focusing quadrupoles limits this uncertainty to approximately 5% for electrons near to the focused energy. Owing to the difficulty of propagating an electron beam of well-known intensity to the spectrometer at AWAKE, the charge response of the scintillator is calculated using data acquired at CERN’s Linear Electron Accelerator for Research (CLEAR) facility. This calibration is performed by placing the scintillator and vacuum window next to a beam charge monitor on the CLEAR beam line and measuring the scintillator signal. The response of the scintillator is found to depend linearly on charge over the range 1–50 pC. The response is also found to be independent of position and of energies in the range 100–180 MeV, to within the measurement uncertainty. This charge response is then recalculated for the optical system of the spectrometer at AWAKE by imaging a well-known light source at both locations. A response of (6.9 ± 2.1) × 10 6 CCD counts per incident picocoulomb of charge, given the acquisition settings used at AWAKE, is determined. 
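As a sketch of how this calibration is applied, the conversion from integrated CCD counts to bunch charge is a simple division by the measured response; the signal value below is hypothetical, not a measurement from the experiment:

```python
# Convert a background-subtracted scintillator signal to bunch charge
# using the calibrated response of (6.9 +/- 2.1) x 10^6 CCD counts
# per incident picocoulomb, as determined at CLEAR.
RESPONSE = 6.9e6        # CCD counts per pC (central value)
RESPONSE_ERR = 2.1e6    # 1-sigma uncertainty on the response

signal_counts = 1.7e6   # hypothetical integrated counts in the peak

charge_pc = signal_counts / RESPONSE
# The relative uncertainty on the response propagates directly to charge.
charge_err = charge_pc * (RESPONSE_ERR / RESPONSE)
print(f"Q = {charge_pc:.3f} +/- {charge_err:.3f} pC")
```

The dominant uncertainty on any such charge estimate is the roughly 30% calibration uncertainty quoted above, which propagates multiplicatively.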
The large 1 σ uncertainty is due to different triggering conditions at CLEAR and AWAKE and systematic uncertainties in the calibration results. Reliable acceleration of electrons relies on reproducible self-modulation of the proton beam. As well as the observation of the transverse expansion of the proton bunch, the optical and coherent transition radiation diagnostics showed clear microbunching of the beam. The proton microbunches were observed to be separated by the plasma wavelength (inferred from the measured rubidium vapour density, see Methods ) for all parameter ranges investigated; they were also reproducible and stable in phase relative to the seeding. The detailed study of the self-modulation process will be the subject of separate AWAKE publications. The data presented here were collected in May 2018. In Fig. 2a we show an image of the scintillator from an electron acceleration event at a plasma density of 1.8 × 10 14 cm −3 , with a measured density difference of +5.3% ± 0.3% over 10 m in the direction of propagation of the proton bunch. This image has been background-subtracted and corrected for vignetting and electron-angle effects (Methods). The quadrupoles of the spectrometer were focusing at an energy of approximately 700 MeV during this event, creating a substantial reduction in the vertical spread of the beam. In Fig. 2b we show a projection obtained by integrating over a central region of the scintillator. A 1 σ uncertainty band, which comes from the background subtraction, is shown around zero. The peak in this figure has a high signal-to-noise ratio, which provides clear evidence of accelerated electrons. In both the image and the projection, the charge density is calculated using the central value of 6.9 × 10 6 CCD counts per picocoulomb. The asymmetric shape of the peak is due to the nonlinear position–energy relationship induced in the electron bunch by the magnetic field; when re-binned in energy, the signal peak is approximately Gaussian. 
Accounting for the systematic uncertainties described earlier, the observed peak has a mean of 800 ± 40 MeV, a FWHM of 137.3 ± 13.7 MeV and a total charge of 0.249 ± 0.074 pC. The amount of charge captured is expected to increase considerably 17 as the emittance of the injected electron bunch is reduced and its geometric overlap with the wakefield is improved.

Fig. 2: Signal of accelerated electrons. a, An image of the scintillator (with horizontal distance x and vertical distance y) with background subtraction and geometric corrections applied is shown, with an electron signal clearly visible. The intensity of the image is given in charge Q per unit area (d^2Q/dx dy), calculated using the central value from the calibration of the scintillator. b, A projection of the image in a is obtained by integrating vertically over the charge observed in the central region of the image. A 1σ uncertainty band from the background subtraction is shown in orange around zero. Both the image (a) and the projection (b) are binned in space, as shown on the top axis, but the central value from the position–energy conversion is indicated at various points on the bottom axis. The electron signal is clearly visible above the noise, with a peak intensity at an energy of E ≈ 800 MeV.

The stability and reliability of the electron acceleration are evidenced by Fig. 3, which shows projections from many consecutive electron-injection events. Each row in this plot is the background-subtracted projection from a single event, with the colour representing the signal intensity. The events correspond to a 2-h running period during which the quadrupoles were varied to focus over a range of approximately 460–620 MeV. Other parameters, such as the proton-bunch population, were not deliberately changed but vary naturally on a shot-to-shot basis.
Despite the quadrupole scan and the natural fluctuations in the beam parameters, the plot still shows consistent and reproducible acceleration of electron bunches to approximately 600 MeV. The plasma density for these events is 1.8 × 10 14 cm −3 , with no density gradient. This lack of gradient is the cause of the difference in energy between the event in Fig. 2 and the events in Fig. 3 . Fig. 3: Background-subtracted projections of consecutive electron-injection events. Each projection (event) is a vertical integration over the central region of a background-subtracted spectrometer camera image. Brighter colours indicate regions of high charge density d Q /d x , corresponding to accelerated electrons. The quadrupoles of the spectrometer were varied to focus at energies of 460–620 MeV over the duration of the dataset. No other parameters were varied deliberately. The consistent peak around energy E ≈ 600 MeV demonstrates the stability and reliability of the electron acceleration. Full size image The energy gain achievable by introducing a more optimal gradient is demonstrated in Fig. 4 , which shows the peak energy achieved at different plasma densities with and without a gradient. The density gradients chosen are those that are observed to maximize the peak energy for a given plasma density. At 1.8 × 10 14 cm −3 the density difference was approximately +5.3% ± 0.3% over 10 m, whereas at 3.9 × 10 14 cm −3 and 6.6 × 10 14 cm −3 it fell to +2.5% ± 0.3% and +2.2% ± 0.1%, respectively. Given the precise control of the longitudinal plasma density, small density gradients can have a substantial effect on the acceleration because the electrons are injected tens of microbunches behind the ionizing laser pulse 26 . The charge of the observed electron bunches decreases at higher plasma densities, owing in part to the smaller transverse size of the wakefield. 
In addition, the quadrupoles of the spectrometer have a maximum focusing energy of 1.3 GeV, which makes bunches accelerated to higher energies than this harder to detect above the background noise. Fig. 4: Measurement of the highest peak energies μ E achieved at different plasma densities n pe , with and without a gradient in the plasma density. The error bars arise from the position–energy conversion. The gradients chosen are those that were observed to maximize the energy gain. Acceleration to 2.0 ± 0.1 GeV is achieved with a plasma density of 6.6 × 10 14 cm −3 with a density difference of +2.2% ± 0.1% over 10 m. The energies shown in Fig. 4 are determined by binning the pixel data in energy and fitting a Gaussian over the electron signal region; the peak energy μ E is the mean of this Gaussian. The observed energy spread of each bunch is determined by the width of this Gaussian and is approximately 10% of the peak energy. The peak energy increases with density, reaching 2.0 ± 0.1 GeV for n pe = 6.6 × 10 14 cm −3 in the presence of a density gradient, at which point the charge capture is much lower. The energies of the accelerated electrons are within the range of values originally predicted by particle-in-cell and fluid code simulations of the AWAKE experiment 17 , 18 , 26 . Future data-collection runs will address the effect of the electron-bunch delay, injection angle and other parameters on the accelerated energy and charge capture. These studies will help to determine what sets the limit on the energy gain. In summary, we have demonstrated proton-driven plasma wakefield acceleration. The strong electric fields, generated by a series of proton microbunches, were sampled with a bunch of electrons. These electrons were accelerated up to 2 GeV in approximately 10 m of plasma and measured using a magnetic spectrometer. This technique has the potential to accelerate electrons to the teraelectronvolt scale in a single accelerating stage.
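The peak-extraction procedure described above (binning the spectrum in energy, fitting a Gaussian, and reading off its mean μ E and width) can be illustrated with a minimal Python sketch. The synthetic spectrum, the grid-search fit and all numbers here are illustrative assumptions, not the collaboration's analysis code.

```python
import numpy as np

def gaussian(E, A, mu, sigma):
    """Gaussian line shape used to model the electron signal peak."""
    return A * np.exp(-0.5 * ((E - mu) / sigma) ** 2)

# Synthetic background-subtracted spectrum: a peak at 2.0 GeV with ~10% spread.
rng = np.random.default_rng(0)
E = np.linspace(1.0, 3.0, 400)          # energy bins, GeV
dQdE = gaussian(E, 1.0, 2.0, 0.2) + rng.normal(0.0, 0.02, E.size)

# Coarse chi-square grid search over (mu, sigma); amplitude solved analytically.
best = (np.inf, 0.0, 0.0)
for mu in np.linspace(1.5, 2.5, 101):
    for sigma in np.linspace(0.05, 0.5, 46):
        g = gaussian(E, 1.0, mu, sigma)
        A = float(np.dot(g, dQdE) / np.dot(g, g))   # least-squares amplitude
        chi2 = float(np.sum((dQdE - A * g) ** 2))
        if chi2 < best[0]:
            best = (chi2, mu, sigma)
_, peak_energy, spread = best
```

With the seeded noise the fit recovers a peak near 2.0 GeV with a width near 0.2 GeV, i.e. an energy spread of roughly 10% of the peak energy, as in the measurement.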
Although still in the early stages of its programme, the AWAKE experiment is an important step towards realizing new high-energy particle physics experiments. Methods Plasma generation A CentAurus Ti:sapphire laser system is used to ionize the rubidium in the vapour source. The rubidium is confined by expansion chambers at the ends of the source with 10-mm-diameter irises through which rubidium flows constantly and condensates on the expansion walls. By the relation λ pe = 2π c [ ε 0 m e /( n pe e 2 )] 1/2 , where c is the speed of light, ε 0 is the permittivity of free space, m e is the electron mass and e is the electron charge, the available density range of n pe = 10 14 –10 15 cm −3 corresponds to a plasma wavelength of λ pe ≈ 1.1–3.3 mm. The uniformity of the vapour density is ensured by flowing a heat-exchanging fluid around a concentric tube surrounding the source at a temperature stabilized to ±0.05 °C. Longitudinal density differences of between −10% and +10% over 10 m may be implemented, and controlled at the 1% level. The motion of the (heavy) rubidium ions can be neglected during the transit of the proton bunch because they are singly ionized 35 . Witness electron beam Production of the witness electron beam is initiated by illuminating a Cs 2 Te cathode by using a frequency-tripled laser pulse derived from the ionizing laser. Electron bunches with a charge of 656 ± 14 pC are produced and accelerated to an energy of 5.5 MeV in a 2.5 cell radio-frequency gun and are subsequently accelerated up to 18.84 ± 0.05 MeV using a 30 cell travelling wave structure. These electrons are then transported along an 18-m beam line before being injected into the vapour source. The focal point and crossing angle of the witness beam can be controlled via a combination of quadrupole and kicker magnets along this beam line. 
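The plasma-wavelength relation quoted in the Methods can be checked numerically. The short sketch below uses standard CODATA constants and reproduces the stated range λ pe ≈ 1.1–3.3 mm for n pe = 10 14 –10 15 cm −3 .

```python
import math

# Physical constants (CODATA, SI units)
c = 2.99792458e8          # speed of light, m/s
eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
m_e = 9.1093837015e-31    # electron mass, kg
e = 1.602176634e-19       # elementary charge, C

def plasma_wavelength(n_pe_cm3):
    """lambda_pe = 2*pi*c*sqrt(eps0*m_e/(n_pe*e^2)), with n_pe given in cm^-3."""
    n_pe = n_pe_cm3 * 1e6  # convert cm^-3 to m^-3
    return 2.0 * math.pi * c * math.sqrt(eps0 * m_e / (n_pe * e * e))

lam_low = plasma_wavelength(1e14)   # ~3.3 mm at the low end of the density range
lam_high = plasma_wavelength(1e15)  # ~1.1 mm at the high end
```

The wavelength scales as n pe −1/2 , so the factor of 10 in density gives the factor of about 3.2 between the two ends of the range.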
Background subtraction The large distance between the camera and the proton beam line means that background noise generated by radiation directly incident on the CCD is minimal. The scintillator of the spectrometer, however, is subject to considerable background radiation. The rise and decay of the scintillator signal occur on timescales longer than 1 μs and, as such, the scintillator photons captured by the camera are produced by an indivisible combination of background radiation and accelerated electrons. The majority of this background radiation is due to the passage of the proton bunch and comes from two main sources: a 0.2-mm-thick aluminium window located 43 m upstream of the spectrometer between AWAKE and the SPS transfer line, and a 0.6-mm-thick aluminium iris at the downstream end of the vapour source. The inner radius of this iris is 5 mm, leading to negligible interaction with the standard SPS proton bunch. However, protons that are defocused during self-modulation, such as those measured at the downstream imaging station, can interact with the iris, creating a substantial background. The strength of the transverse fields in the plasma and hence the number of protons that are defocused is strongly dependent on the plasma density. Consequently, the background generated by the defocused protons is more substantial at higher plasma densities, such as the AWAKE baseline density of 7 × 10 14 cm −3 . At this density, the radiative flux on the scintillator due to the iris is much higher than that from the thin window. Conversely, at a lower plasma density, such as 2 × 10 14 cm −3 , the radiation from the iris disappears completely and the remaining incident radiation is produced almost entirely by the interaction of the protons with the upstream window. Owing to the variable nature of the radiation incident on the scintillator, background subtraction is a multistep process. 
A background data sample with the electron beam off at a plasma density of 1.8 × 10 14 cm −3 is taken, such that the background has two key components: one due to the camera readout and ambient light in the experimental area, and another, N p -dependent background caused by the proton bunch passing through the thin window. For each pixel imaging the scintillator, a linear function of N p is defined by a χ 2 minimization fit to the background data sample, giving an N p -dependent mean background image. For each signal event, a region of the scintillator is chosen where no accelerated electrons are expected, typically the lowest-energy part, and the background is rescaled by the ratio of the sums over this region in the signal event and the N p -scaled background image. At higher plasma densities, a further step is needed to subtract the background from the iris. This background falls rapidly with increasing distance from the beam line and therefore depends on the horizontal position in the plane of the scintillator. A new region where the expected number of accelerated electrons is small is chosen, this time along the top and bottom edges of the scintillator. The mean of each column of pixels in this region is calculated and then subtracted from each pixel in the central region of that same column, leaving only the signal. The semi-circular ends of the scintillator reduce the effectiveness of this technique at the highest and lowest energies. Signal extraction To obtain an accurate estimate of the electron-bunch charge, the background-subtracted signal is corrected for two effects that vary across the horizontal plane of the scintillator. One effect comes from the variation in the horizontal angle of incidence of the electron on the scintillator. 
This angle is determined by the same tracking simulation used to define the position–energy relationship, and introduces a cosine correction to the signal owing to the variation in the path length of the electron through the scintillator. The second effect is vignetting, which occurs as result of the finite size of the optics of the spectrometer and the angular emission profile of the scintillator photons. A lamp that mimics this emission profile is scanned across the horizontal plane of the scintillator and the vignetting correction is determined by measuring its relative brightness. The increase in radiation accompanying the electron bunch, owing to its longer path length through the vacuum window at larger incident angles, is negligible and therefore does not require an additional correction factor. Data reporting No statistical methods were used to predetermine sample size. Data availability The datasets generated and analysed during this study are available from the corresponding author on reasonable request. The software code used in the analysis and to produce Figs. 2 – 4 is available from the corresponding author on reasonable request.
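The per-pixel, N p -dependent background model and the geometric signal corrections described above can be sketched as follows. The toy image size, noise level, N p values and function names are invented for illustration; this is not the experiment's actual code.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 8x8 "scintillator" whose per-pixel background is linear in the
# proton-bunch population N_p (all numbers are made up for illustration).
true_offset = rng.uniform(10.0, 20.0, (8, 8))
true_slope = rng.uniform(0.5, 1.5, (8, 8))

# Beam-off background sample recorded at several bunch populations.
N_p = np.array([2.0, 2.5, 3.0, 3.5])          # hypothetical units of 1e11 protons
background = np.stack([true_offset + true_slope * N + rng.normal(0.0, 0.01, (8, 8))
                       for N in N_p])

# Per-pixel chi-square (least-squares) fit of a linear function of N_p.
design = np.stack([np.ones_like(N_p), N_p], axis=1)
coeffs, *_ = np.linalg.lstsq(design, background.reshape(len(N_p), -1), rcond=None)
offset_fit = coeffs[0].reshape(8, 8)
slope_fit = coeffs[1].reshape(8, 8)

def mean_background(N_p_event):
    """N_p-dependent mean background image to subtract from a signal event."""
    return offset_fit + slope_fit * N_p_event

# Electrons crossing the scintillator at a horizontal angle theta traverse a
# longer path, so the light yield is scaled by cos(theta) to recover the
# per-electron signal.
def path_length_correction(signal, theta_rad):
    return signal * np.cos(theta_rad)
```

In the real analysis the subtracted image is then rescaled using a signal-free region of the scintillator, and a measured vignetting map is applied on top of the cosine correction.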
Early in the morning on Saturday, 26 May 2018, the AWAKE collaboration at CERN successfully accelerated electrons for the first time using a wakefield generated by protons zipping through a plasma. A paper describing this important result was published in the journal Nature today. The electrons were accelerated by a factor of around 100 over a length of 10 metres: they were externally injected into AWAKE at an energy of around 19 MeV (million electronvolts) and attained an energy of almost 2 GeV (billion electronvolts). Although still at a very early stage of development, the use of plasma wakefields could drastically reduce the sizes, and therefore the costs, of the accelerators needed to achieve the high-energy collisions that physicists use to probe the fundamental laws of nature. The first demonstration of electron acceleration in AWAKE comes only five years after CERN approved the project in 2013 and is an important first step towards realising this vision. AWAKE, which stands for "Advanced WAKEfield Experiment", is a proof-of-principle R&D project investigating the use of protons to drive plasma wakefields for accelerating electrons to higher energies than can be achieved using conventional technologies. Traditional accelerators use what are known as radio-frequency (RF) cavities to kick the particle beams to higher energies. This involves alternating the electrical polarity of positively and negatively charged zones within the RF cavity, with the combination of attraction and repulsion accelerating the particles within the cavity. By contrast, in wakefield accelerators, the particles get accelerated by "surfing" on top of the plasma wave (or wakefield) that contains similar zones of positive and negative charges. Plasma wakefields themselves are not new ideas; they were first proposed in the late 1970s. 
"Wakefield accelerators have two different beams: the beam of particles that is the target for the acceleration is known as a 'witness beam', while the beam that generates the wakefield itself is known as the 'drive beam'," explains Allen Caldwell, spokesperson of the AWAKE collaboration. Previous examples of wakefield acceleration have relied on using electrons or lasers for the drive beam. AWAKE is the first experiment to use protons for the drive beam, and CERN provides the perfect opportunity to try the concept. Drive beams of protons penetrate deeper into the plasma than drive beams of electrons and lasers. "Therefore," Caldwell adds, "wakefield accelerators relying on protons for their drive beams can accelerate their witness beams for a greater distance, consequently allowing them to attain higher energies." CERN project leader for AWAKE, Edda Gschwendtner, explains how the experiment accelerated electrons for the first time. Credit: CERN AWAKE gets its drive-protons from the Super Proton Synchrotron (SPS), which is the last accelerator in the chain that delivers protons to the Large Hadron Collider (LHC). Protons from the SPS, travelling with an energy of 400 GeV, are injected into a so-called "plasma cell" of AWAKE, which contains rubidium gas uniformly heated to around 200 °C. These protons are accompanied by a laser pulse that transforms the rubidium gas into a plasma – a special state of ionised gas – by ejecting electrons from the gas atoms. As this drive beam of positively charged protons travels through the plasma, it causes the otherwise-randomly-distributed negatively charged electrons within the plasma to oscillate in a wavelike pattern, much like a ship moving through the water generates oscillations in its wake. Witness-electrons are then injected at an angle into this oscillating plasma at relatively low energies and "ride" the plasma wave to get accelerated.
At the other end of the plasma, a dipole magnet bends the incoming electrons onto a detector. "The magnetic field of the dipole can be adjusted so that only electrons with a specific energy go through to the detector and give a signal at a particular location inside it," says Matthew Wing, deputy spokesperson of AWAKE, who is also responsible for this apparatus, known as the electron spectrometer. "This is how we were able to determine that the accelerated electrons reached an energy of up to 2 GeV." The strength at which an accelerator can accelerate a particle beam per unit of length is known as its acceleration gradient and is measured in volts-per-metre (V/m). The greater the acceleration gradient, the more effective the acceleration. The Large Electron-Positron collider (LEP), which operated at CERN between 1989 and 2000, used conventional RF cavities and had a nominal acceleration gradient of 6 MV/m. "By accelerating electrons to 2 GeV in just 10 metres, AWAKE has demonstrated that it can achieve an average gradient of around 200 MV/m," says Edda Gschwendtner, technical coordinator and CERN project leader for AWAKE. Gschwendtner and colleagues are aiming to attain an eventual acceleration gradient of around 1000 MV/m (or 1 GV/m). AWAKE has made rapid progress since its inception. Civil-engineering works for the project began in 2014, and the plasma cell was installed in early 2016 in the tunnel formerly used by part of the CNGS facility at CERN. A few months later, the first drive beams of protons were injected into the plasma cell to commission the experimental apparatus, and a proton-driven wakefield was observed for the first time in late 2016. In late 2017, the electron source, electron beam line and electron spectrometer were installed in the AWAKE facility to complete the preparatory phase. Now that they have demonstrated the ability to accelerate electrons using a proton-driven plasma wakefield, the AWAKE team is looking to the future. 
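The ~200 MV/m figure follows directly from the numbers quoted in the article: a gain of roughly 2 GeV minus the 19 MeV injection energy over about 10 m of plasma. A few lines of arithmetic make the comparison with LEP explicit.

```python
# Average accelerating gradient implied by the AWAKE result.
E_in_MeV = 19.0       # injection energy of the witness electrons
E_out_MeV = 2000.0    # peak energy reached, ~2 GeV
length_m = 10.0       # length of the plasma cell

gradient_MV_per_m = (E_out_MeV - E_in_MeV) / length_m   # ~198 MV/m

# LEP's conventional RF cavities had a nominal gradient of 6 MV/m.
lep_gradient_MV_per_m = 6.0
improvement = gradient_MV_per_m / lep_gradient_MV_per_m  # roughly 33x
```

The eventual goal of around 1 GV/m would be a further factor of five beyond this first demonstration.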
"Our next steps include plans for delivering accelerated electrons to a physics experiment and extending the project with a full-fledged physics programme of its own," notes Patric Muggli, physics coordinator for AWAKE. AWAKE will continue testing the wakefield-acceleration of electrons for the rest of 2018, after which the entire accelerator complex at CERN will undergo a two-year shutdown for upgrades and maintenance. Gschwendtner is optimistic: "We are looking forward to obtaining more results from our experiment to demonstrate the scope of plasma wakefields as the basis for future particle accelerators."
10.1038/s41586-018-0485-4
Physics
Squeezed quantum cats
Lo HY, Kienzler D, de Clercq L, Marinelli M, Negnevitsky V, Keitch BC, Home JP: Spin–motion entanglement and state diagnosis with squeezed oscillator wavepackets. Nature, 21 May 2015, DOI: 10.1038/nature14458 Journal information: Nature
http://dx.doi.org/10.1038/nature14458
https://phys.org/news/2015-05-quantum-cats.html
Abstract Mesoscopic superpositions of distinguishable coherent states provide an analogue of the ‘Schrödinger’s cat’ thought experiment 1 , 2 . For mechanical oscillators these have primarily been realized using coherent wavepackets, for which the distinguishability arises as a result of the spatial separation of the superposed states 3 , 4 , 5 . Here we demonstrate superpositions composed of squeezed wavepackets, which we generate by applying an internal-state-dependent force to a single trapped ion initialized in a squeezed vacuum state with nine decibel reduction in the quadrature variance. This allows us to characterize the initial squeezed wavepacket by monitoring the onset of spin–motion entanglement, and to verify the evolution of the number states of the oscillator as a function of the duration of the force. In both cases we observe clear differences between displacements aligned with the squeezed and anti-squeezed axes. We observe coherent revivals when inverting the state-dependent force after separating the wavepackets by more than 19 times the ground-state root mean squared extent, which corresponds to 56 times the root mean squared extent of the squeezed wavepacket along the displacement direction. Aside from their fundamental nature, these states may be useful for quantum metrology 6 or quantum information processing with continuous variables 7 , 8 , 9 . Main The creation and study of non-classical states of spin systems coupled to a harmonic oscillator have provided fundamental insights into the nature of decoherence and the quantum–classical transition. These states and their control form the basis of experimental developments in quantum information processing and quantum metrology 1 , 2 , 10 . Two of the most commonly considered states of the oscillator are squeezed states and superpositions of coherent states of opposite phase, which are commonly referred to as ‘Schrödinger’s cat’ (SC) states. 
Squeezed states involve a reduction of the fluctuations in one quadrature of the oscillator below the ground-state uncertainty, which has been used to increase sensitivity in interferometers 11 , 12 . SC states provide a complementary sensitivity to environmental influences by separating the two parts of the state by a large distance in phase space. These states have been created in microwave and optical cavities 2 , 13 , where they are typically not entangled with another system, and also with trapped ions 1 , 3 , 4 , 5 , where all experiments performed have involved entanglement between the oscillator state and the internal electronic states of the ion. SC states have recently been used as sensitive detectors for photon scattering recoil events at the single-photon level 14 . Here we use state-dependent forces (SDFs) to create superpositions of distinct squeezed oscillator wavepackets that are entangled with a pseudo-spin encoded in the electronic states of a single trapped ion. We will refer to these states as squeezed wavepacket entangled states (SWESs) in the rest of the paper. By monitoring the spin evolution as the entanglement with the oscillator increases 15 , 16 , 17 , we are able to observe the squeezed nature of the initial state directly. We obtain a complementary measurement of the initial state by extracting the number-state probability distribution of the displaced-squeezed states that make up the superposition. In both measurements we observe clear differences depending on the force direction. We show that the SWESs are coherent by reversing the effect of the SDF, resulting in recombination of the squeezed wavepackets, which we measure through the revival of the spin coherence. The squeezed vacuum state is defined by the action of the squeezing operator on the motional ground state , where , with r and ϕ s real parameters that define the magnitude and the direction of the squeezing in phase space. 
To prepare squeezed states of motion in which the variance of the squeezed quadrature is reduced by about 9 dB relative to the ground-state wavepacket we use reservoir engineering, in which a bichromatic light field is used to couple the ion’s motion to the spin states of the ion, which undergo continuous optical pumping. This dissipatively pumps the motional state of the ion into the desired squeezed state, which is the dark state of the dynamics. More details about the reservoir engineering can be found in ref. 18 . This approach provides a robust basis for all experiments described below, typically requiring no recalibration over several hours of taking data. In the ideal case, the optical pumping used in the reservoir engineering results in the ion being pumped to . To create a SWES, we apply a SDF to this squeezed vacuum state by simultaneously driving the red and blue motional sidebands of the spin-flip transition 3 . The resulting interaction Hamiltonian can be written in the Lamb–Dicke approximation (LDA) as where Ω is the strength of the SDF, ϕ D is the relative phase of the two light fields, and with . For an ion prepared in , this Hamiltonian results in displacement of the motional state in phase space by an amount , which is given in units of the root mean squared (r.m.s.) extent of the harmonic oscillator ground state. An ion prepared in will be displaced by the same amount in the opposite direction. In the following equations we use α in place of α(τ) for simplicity. Starting from the state , application of the SDF ideally results in the SWES where we use the notation with the displacement operator . A projective measurement of the spin performed in the basis gives the probability of being as , where gives the overlap between the two displaced motional states, which can be written as where . When Δ ϕ = 0, the SDF is aligned with the squeezed quadrature of the state, whereas for Δ ϕ = π/2, the SDF is aligned with the anti-squeezed quadrature. 
At displacements for which X gives a measurable signal, monitoring the spin population as a function of the force duration τ for different choices of Δ ϕ allows us to characterize the spatial variation of the initial squeezed wavepacket 15 , 16 , 17 . For values of greater than the wavepacket variance along the direction of the force, the state in equation (2) is a distinct superposition of squeezed wavepackets that have overlap close to zero and are entangled with the internal state. For r = 0 (no squeezing) the state reduces to the familiar SC states that have been produced in previous work 1 , 3 , 4 , 5 . For r > 0, the superposed oscillator states are the displaced-squeezed states 19 , 20 . The experiments use a single trapped 40 Ca + ion, which mechanically oscillates on its axial vibrational mode with a frequency close to = 2.1 MHz. This mode is well resolved from all other modes. We encode a pseudo-spin system in the internal electronic states and . All coherent manipulations, including the squeezed-state preparation and the SDF, make use of the quadrupole transition between these levels at 729 nm, with a Lamb–Dicke parameter of η ≈ 0.05 for the axial mode. This is small enough for the experiments to be well described using the LDA (a discussion of this approximation is given in the Methods) 21 . We apply the SDF directly after the squeezed vacuum state has been prepared by reservoir engineering and the internal state has been prepared in by optical pumping (in the ideal case, the ion is already in the correct state and this step has no effect). Figure 1 shows the results of measuring after applying displacements along the two principal axes of the squeezed state alongside the same measurement made using an ion prepared in the motional ground state. 
To extract relevant parameters regarding the SDF and the squeezing, we fit the data using , where the parameters A and B account for experimental imperfections such as shot-to-shot fluctuations in the magnetic field (Methods). Fitting the ground-state data with r fixed to zero allows us to extract kHz (here and in the rest of the paper, all errors are given as s.e.m.). We then fix this when performing independent fits to the squeezed-state data for Δ ϕ = 0 and Δ ϕ = π/2. Each of these fits allows us to extract an estimate for the squeezing parameter r . For both the squeezed and anti-squeezed quadratures we obtain consistent values with a mean of r = 1.08 ± 0.03, which for a pure state would correspond to a 9.4 dB reduction in the variance of the squeezed quadrature. The inset of Fig. 1 shows the spin population as a function of the SDF phase ϕ D with the SDF duration fixed to 20 μs. This is also fitted using the same equation described above, and we obtain r = 1.13 ± 0.03. Figure 1: Spin population evolution due to spin–motion entanglement. Projective measurement of the spin in the basis as a function of SDF duration. a , Forces parallel to the squeezed quadrature (red triangles). b , An ion initially prepared in the motional ground state (blue circles). c , Forces parallel to the anti-squeezed quadrature (green squares). The inset shows a scan of the phase of the SDF for an initial squeezed state with the force duration fixed at 20 μs. Each data point is the result of >300 repetitions of the experimental sequence. Results are shown as means ± s.e.m.; the error bars were generated under the assumption that the dominant source of fluctuations was quantum projection noise. The loss of overlap between the two wavepackets indicates that a SWES has been created.
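The conversion between the fitted squeezing parameter r and the quoted decibel figure, and the ratio e r between the ground-state and squeezed wavepacket widths used later in the text, are standard textbook relations and can be checked in a few lines of Python (this is not code from the paper).

```python
import math

def variance_reduction_db(r):
    """A squeezed quadrature has variance exp(-2r) times the ground state's,
    i.e. a reduction of 10*log10(exp(2r)) = (20/ln 10)*r decibels."""
    return 10.0 * math.log10(math.exp(2.0 * r))

r = 1.08                                  # mean fitted squeezing parameter
db = variance_reduction_db(r)             # ~9.4 dB, as quoted in the text

# A separation of 19 ground-state rms widths spans e^r times as many rms
# widths of the squeezed wavepacket along the displacement direction.
sep_squeezed_units = 19.0 * math.exp(r)   # ~56, matching the paper's figure
```

Running the conversion backwards, the "nine decibel" target of the reservoir engineering corresponds to r ≈ 1.04, consistent with the fitted values.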
To verify that these states are coherent superpositions, we recombine the wavepackets by applying a second ‘return’ SDF pulse for which the phase of both the red and blue sideband laser frequency components is shifted by π relative to the first. This reverses the direction of the force applied to the motional states for both the and spin states. In the ideal case a state displaced to α ( τ 1 ) by a first SDF pulse of duration τ 1 has a final displacement of δ α = α ( τ 1 ) − α ( τ 2 ) after the return pulse of duration τ 2 . For τ 1 = τ 2 , δ α = 0 and the measured probability of finding the spin state in is 1. In the presence of decoherence and imperfect control, the probability with which the ion returns to the state will be reduced. In Fig. 2 we show revivals in the spin coherence for the same initial squeezed vacuum state as was used for the data in Fig. 1 . The data include a range of different τ 1 . For the data for which the force was applied along the squeezed axis of the state (Δ ϕ = 0), partial revival of the coherence is observed for SDF durations up to 250 μs. For τ 1 = 250 μs the maximum separation of the two distinct oscillator wavepackets is , which is 56 times the r.m.s. width of the squeezed wavepacket in phase space. The amplitude of revival of this state is similar to what we observe when applying the SDF to a ground-state cooled ion. The loss of coherence as a function of the displacement duration is consistent with the effects of magnetic-field-induced spin dephasing and motional heating 14 , 22 . When the force is applied along the anti-squeezed quadrature (Δ ϕ = π/2), we observe that the strength of the revival decays more rapidly than for displacements with Δ ϕ = 0. Simulations of the dynamics using a quantum Monte Carlo wavefunction approach including sampling over a magnetic field distribution indicate that this is caused by shot-to-shot fluctuations of the magnetic field (Methods). Figure 2: Revival of the spin coherence. 
Spin populations as a function of the duration of the second SDF pulse with the spin phase shifted by π relative to the first pulse. a , Forces parallel to the squeezed quadrature. b , Forces parallel to the anti-squeezed quadrature. In all cases an increase in the spin population is seen at the time when the two motional states are overlapped, which corresponds to the time τ 1 used for the first SDF pulse. The value of τ 1 and the corresponding |Δ α | calculated from the measured Rabi frequency are written above the revival of each data set. The fractional error on the mean of each of the estimated is about 3%. The solid lines are fitted curves using the same form as for the fits in Fig. 1 with the overlap function X (δ α , ξ ). The values of r obtained are consistent with the data in Fig. 1 . Results are shown as means ± s.e.m.; the error bars were generated under the assumption that the dominant source of fluctuations was quantum projection noise. We are also able to monitor the number-state distributions of the motional wavepackets as a function of the duration of the SDF. This provides a second measurement of the parameters of the SDF and the initial squeezed wavepacket, which has similarities with the homodyne measurement used in optics 23 , 24 . To do this, we optically pump the spin state into after applying the SDF. This procedure destroys the phase relationship between the two motional wavepackets, resulting in the mixed oscillator state (we estimate that the photon recoil during optical pumping results in a decrease in the fidelity of our experimental state relative to by <3%, which would not be observable in our measurements). The two parts of this mixture have the same number-state distribution, which is that of a displaced-squeezed state 19 , 20 . To extract this distribution, we drive Rabi oscillations on the blue-sideband transition 25 and monitor the subsequent spin population in the basis.
Figure 3 shows this evolution for SDF durations of τ = 0, 30, 60 and 120 μs. For τ = 30 and 60 μs, the results from displacements applied parallel to the two principal axes of the squeezed state are shown (Δ ϕ = 0 and π/2). We obtain the number-state probability distribution p ( n ) from the spin state population by fitting the data using a form , where t is the blue-sideband pulse duration, Ω n,n +1 is the Rabi frequency for the transition between the and states, and γ is a phenomenological decay parameter 25 , 26 . The parameter b accounts for gradual pumping of population into the state due to frequency noise on our laser 18 , 27 . It is negligible when p (0) is small. The resulting p ( n ) are then fitted using the theoretical form for the displaced-squeezed states (Methods). The number-state distributions show a clear dependence on the phase of the force, which is also reflected in the spin population evolution. Figure 4 shows the Mandel Q parameters of the experimentally obtained number-state distributions, defined as , in which and are the variance and mean of p ( n ), respectively 28 . The solid lines are the theoretical curves given in ref. 19 for r = 1.08, and are in agreement with our experimental results. For displacements along the short axis of the squeezed state ( Fig. 3 ), the collapse and revival behaviour of the time evolution of P (↓) is reminiscent of the Jaynes–Cummings Hamiltonian applied to a coherent state 29 , but it has more oscillations before the ‘collapse’ for a state of the same . This is surprising because the statistics of the state is not sub-Poissonian. We attribute this to the fact that this distribution is more peaked than that of a coherent state with the same , which is obvious when the two distributions are plotted over one another ( Fig. 3d, f ). The increased variance of the squeezed state then arises from the extra populations at high n , which are too small to make a visible contribution to the Rabi oscillations. 
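The Mandel Q parameter used here is a standard statistic of a number-state distribution p(n). The sketch below computes it for a Poisson distribution (coherent-state statistics, Q = 0) and for the textbook squeezed-vacuum distribution, whose Q equals cosh(2r); this is a general illustration, not the paper's analysis code.

```python
import math

def mandel_q(p):
    """Mandel Q = (<(n - <n>)^2> - <n>) / <n> for a number distribution p[n]."""
    mean = sum(n * pn for n, pn in enumerate(p))
    var = sum((n - mean) ** 2 * pn for n, pn in enumerate(p))
    return (var - mean) / mean

# A coherent state has Poissonian number statistics, so Q = 0.
nbar = 4.0
poisson = [math.exp(-nbar) * nbar ** n / math.factorial(n) for n in range(80)]
q_coherent = mandel_q(poisson)

# Squeezed vacuum populates only even number states:
# p(2k) = (2k)! tanh(r)^(2k) / ((2^k k!)^2 cosh(r)), giving Q = cosh(2r).
r = 1.08
p_sq = [0.0] * 80
for k in range(40):
    p_sq[2 * k] = (math.factorial(2 * k) * math.tanh(r) ** (2 * k)
                   / ((2 ** k * math.factorial(k)) ** 2 * math.cosh(r)))
q_squeezed = mandel_q(p_sq)   # super-Poissonian: ~cosh(2.16) ~ 4.4
```

The strongly super-Poissonian Q of the squeezed state reflects the heavy tail at high n discussed in the text, even though the distribution is more sharply peaked than a Poissonian with the same mean.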
For the squeezing parameter in our experiments, sub-Poissonian statistics would be observed only for . For τ = 120 μs we obtain a consistent value of r and only in the case in which we include a fit parameter for scaling of the theoretical probability distribution, obtaining a fitted scaling of 0.81 ± 0.10 (Methods). The reconstruction of the number-state distribution is incomplete because we cannot extract populations with n > 29 as a result of frequency crowding in the dependence of the Jaynes–Cummings dynamics. We therefore do not include these results in Fig. 4 . Measurement techniques made in a squeezed-state basis 18 could avoid this problem; however, these are beyond our current experimental capabilities for states of this size. Figure 3: Evolution of displaced-squeezed-state mixtures. The observed blue-sideband oscillations and the corresponding number-state probability distributions for the SDF applied along the two principal axes of the squeezed state and with different durations. a , Initial squeezed vacuum state. b , d , f , Forces parallel to the squeezed quadrature. c , e , Forces parallel to the anti-squeezed quadrature. For τ = 30 μs the obtained parameters are consistent within statistical errors. For τ = 60 μs the displacement along the anti-squeezed quadrature ( e ) results in a large spread in the number-state probability distribution, with the result that in the fitting r and α are positively correlated; the errors stated do not take account of this. We think that this accounts for the apparent discrepancy between the values of r and α obtained for τ = 60 μs. The dashed green line in the insets of d and f is the Poisson distribution for the same as the created displaced-squeezed-state mixture, which is given by (ref. 19 ). Results are shown as means ± s.e.m.; the error bars were generated under the assumption that the dominant source of fluctuations was quantum projection noise. 
Figure 4: Mandel Q parameter for the displaced-squeezed states. Results for displacements along the squeezed quadrature (red triangles) and the anti-squeezed quadrature (green squares). All values are calculated from the experimental data given in Fig. 3 , taking the propagation of error into account. The solid lines are theoretical curves for displacements along the squeezed (red) and anti-squeezed (green) quadratures of an initial state with r = 1.08. The values of |α| are obtained from fits to the respective p(n) ( Fig. 3 ), with error bars comparable to the size of the symbol. The point at |α| = 0 is the squeezed vacuum state. We have generated entangled superposition states between the internal and motional states of a single trapped ion in which the superposed motional wavepackets are of a squeezed Gaussian form. These states present new possibilities both for metrology and for continuous variable quantum information. In an interferometer based on SC states separated by |2α|, the interference contrast depends on the final overlap of the recombined wavepackets. Fluctuations in the frequency of the oscillator result in a reduced overlap, but this effect can be improved by a factor e^{2r} if the wavepackets are squeezed in the same direction as the state separation (Methods). In quantum information with continuous variables, the computational basis states are distinguishable because they are separated in phase space by much more than the extent of their wavepackets and thus do not overlap 7 , 8 , 9 . The decoherence times of such superpositions typically scale as 1/|α|² (ref. 22 ). The use of states squeezed along the displacement direction reduces the required displacement for a given overlap by e^r, increasing the resulting coherence time by e^{2r}, which is a factor of 9 in our experiments. We therefore expect these states to open up new possibilities for quantum-state engineering and control.
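The coherence-time argument above is a two-line calculation. A minimal check, using only the squeezing parameter r = 1.08 quoted in the text:

```python
import math

r = 1.08                          # squeezing parameter from the experiment
alpha_reduction = math.exp(r)     # required separation shrinks by e^r
t_gain = math.exp(2 * r)          # coherence time (∝ 1/|α|²) grows by e^{2r}
print(round(alpha_reduction, 2), round(t_gain, 1))  # ≈ 2.94 and ≈ 8.7 ("a factor of 9")
```

The factor e^{2r} ≈ 8.7 rounds to the "factor of 9" stated in the text.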
Methods Experimental details The experiments make use of a segmented linear Paul trap with an ion–electrode distance of ∼185 μm. Motional heating rates from the ground state for a calcium ion in this trap have been measured to be 10 ± 1 quanta s⁻¹, and the coherence time for the number-state superposition has been measured to be 32 ± 3 ms. The first step of each experimental run involves cooling all modes of motion of the ion close to the Doppler limit by using laser light at 397 and 866 nm. The laser beam used for coherent control of the two-level pseudo-spin system addresses the narrow-linewidth transition at 729 nm. This transition is resolved by 200 MHz from all other internal state transitions in the applied magnetic field of 119.6 G. The SDFs and the reservoir engineering 18 in our experiment require the application of a bichromatic light field. We generate both frequency components with the use of acousto-optic modulators (AOMs) starting from a single laser stabilized to an ultra-high-finesse optical cavity with a resulting linewidth of <600 Hz (at which point magnetic field fluctuations limit the qubit coherence). We apply pulses of 729 nm laser light with a double-pass AOM to which we apply a single radiofrequency tone, followed by a single-pass AOM to which two radiofrequency tones are applied. After this second AOM, both frequency components are coupled into the same single-mode fibre before delivery to the ion. The double-pass AOM is used to switch the light on and off. Optical pumping to |↓⟩ is implemented using a combination of linearly polarized light fields at 854, 397 and 866 nm. The internal state of the ion is read out by state-dependent fluorescence using laser fields at 397 and 866 nm. The 729 nm laser beam enters the trap at 45° to the z axis of the trap, resulting in a Lamb–Dicke parameter of η ≈ 0.05 for the axial mode.
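The quoted Lamb–Dicke parameter follows from η = k cosθ √(ħ/2mω_z) with the stated wavelength and beam geometry. The axial trap frequency is not given in this excerpt, so the value used below (ω_z ≈ 2π × 1.9 MHz) is an assumption chosen for illustration; it reproduces η ≈ 0.05:

```python
import math

hbar = 1.054571817e-34        # J s
m = 40 * 1.66053906660e-27    # mass of a 40Ca+ ion, kg
lam = 729e-9                  # qubit laser wavelength, m (from the text)
theta = math.radians(45)      # beam angle to the trap z axis (from the text)
omega_z = 2 * math.pi * 1.9e6 # ASSUMED axial frequency, not stated in this excerpt

k_eff = (2 * math.pi / lam) * math.cos(theta)  # wavevector projection on the axial mode
eta = k_eff * math.sqrt(hbar / (2 * m * omega_z))
print(round(eta, 3))  # ≈ 0.05, consistent with the value quoted in the text
```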
For this Lamb–Dicke parameter, we have verified that, for displacements up to those used in our experiments, the dynamics can be well described with the LDA. We simulate the wavepacket dynamics by using the interaction Hamiltonian with and without the LDA. In the simulation we apply the SDF to an ion prepared in the squeezed vacuum state. The interaction Hamiltonian for a single trapped ion coupled to a single-frequency laser field can be written as 26 Ĥ = (ħΩ₀/2) σ̂₊ exp{iη(â e^{−iω_z t} + â† e^{iω_z t})} e^{i(ϕ−δt)} + H.c., where Ω₀ is the interaction strength, η is the Lamb–Dicke parameter, σ̂₊ is the spin raising operator, â and â† are motional annihilation and creation operators, ω_z is the vibrational frequency of the ion, ϕ is the phase of the laser, δ = ω_l − ω_a is the detuning of the laser from the atomic transition, and 'H.c.' is the Hermitian conjugate of the first term. In the laboratory, the application of the SDF involves simultaneously driving both the blue-sideband and red-sideband transitions resonantly, resulting in the Hamiltonian Ĥ_SDF = Ĥ_b + Ĥ_r, the sum of two terms of the above form, where δ = ω_z in Ĥ_b and δ = −ω_z in Ĥ_r. Starting from the squeezed vacuum state, the evolution of the state cannot be solved analytically. We perform a numerical simulation in which we retain only the resonant terms in the Hamiltonian. Extended Data Fig. 1 shows the quasi-probability distributions in phase space for chosen values of the SDF duration τ. These are compared with results obtained using the LDA. For τ = 60 μs both cases are similar, resulting in |α| ≈ 2.4. For τ = 250 μs the squeezed-state wavepackets are slightly distorted and the displacement is 4% smaller for the full simulation than for the LDA form. Considering the levels of error arising from imperfect control and decoherence for forces of this duration, we do not consider this effect to be significant in our experiments. Simulations for the coherence of SWESs After creating SWESs, we deduce that coherence is retained throughout the creation of the state by applying a second SDF pulse to the ion, which recombines the two separated wavepackets and disentangles the spin from the motion.
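One simple way to gauge when the LDA holds, in the spirit of the comparison above: the exact sideband coupling |⟨n+1|e^{iη(â+â†)}|n⟩| = η e^{−η²/2} (n+1)^{−1/2} L_n¹(η²) reduces to the LDA form η√(n+1) when η²n ≪ 1. The sketch below (not the authors' simulation, which treats the full driven dynamics) compares the two at η = 0.05 in a truncated Fock space:

```python
import numpy as np
from scipy.linalg import expm

eta, N = 0.05, 40                          # Lamb–Dicke parameter and Fock cutoff
a = np.diag(np.sqrt(np.arange(1, N)), 1)   # annihilation operator in the Fock basis
U = expm(1j * eta * (a + a.conj().T))      # e^{iη(â + â†)}

for n in (0, 5, 15):
    exact = abs(U[n + 1, n])               # |<n+1| e^{iη(â+â†)} |n>|
    lda = eta * np.sqrt(n + 1)             # Lamb–Dicke approximation
    print(n, round(float((lda - exact) / exact * 100), 2), "% deviation")
```

At η = 0.05 the deviation grows roughly as η²n/2, i.e. it is still only a few per cent at n = 15, consistent with the text's conclusion that LDA corrections are small for the displacements used.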
The revival in the spin coherence is not perfect, because of decoherence and imperfect control in the experiment. One dominant source causing decoherence of the superpositions is spin decoherence due to magnetic field fluctuations. We have performed quantum Monte Carlo wavefunction simulations to investigate the coherence of the SWES in the presence of such a decoherence mechanism. We simulate the effect of a sinusoidal fluctuation of the magnetic field on a timescale that is long compared with the duration of the coherent control sequence, which is consistent with the noise that we observe on our magnetic field coil supply (at 10 and 110 Hz) and from ambient fluctuations due to electronics equipment in the room. The amplitude of these fluctuations is set to 2.2 mG, giving rise to the spin coherence time of 180 μs, which we measured using Ramsey experiments on the spin alone. Because the frequency of fluctuations is slow compared with the sequence length, we fix the field for each run of the simulation but sample its value from a probability distribution derived from a sinusoidal oscillation. In Extended Data Fig. 2 we show the effect of a single shot taken at a fixed qubit-oscillator detuning of 1.5 kHz, and in Extended Data Fig. 3 we show the average over the distribution. In both figures, results are shown for the SDF applied along the two principal axes of the squeezed vacuum state as well as for the motional ground state using force durations of 60 and 120 μs. We also show the results of applying the second SDF pulse, resulting in partial revival of the spin coherence. It can be seen that when the SDF is applied along the anti-squeezed quadrature, the strength of the revival decays more rapidly, and P (↓) oscillates around 0.5. This effect can be seen in the data shown in Fig. 2 . Number-state probability distributions for the displaced-squeezed state For Fig. 3 we characterize the probability distribution for the number states of the oscillator. 
This is performed by driving the blue-sideband transition and fitting the obtained spin population evolution using P(↓)(t) = ½[b + Σ_n p(n) cos(2Ω_{n,n+1}t) e^{−γt}], where t is the blue-sideband pulse duration, p(n) are the number-state probabilities for the motional state we are concerned with, and γ is an empirical decay parameter 25 , 26 . In the results presented here we do not scale this decay parameter with n as was done in ref. 25 . We also fitted the data including such a scaling and saw consistent results. The Rabi frequency coupling |↓⟩|n⟩ to |↑⟩|n+1⟩ is Ω_{n,n+1} = Ω₀ η e^{−η²/2} (n+1)^{−1/2} L_n¹(η²). For small n, this scales as √(n+1), but because the states include significant populations at higher n we use the complete form including the generalized Laguerre polynomial L_n¹(η²). The parameter b in the first term accounts for a gradual pumping of population into the state |↑⟩|n = 0⟩, which is not involved in the dynamics of the blue-sideband pulse 18 , 27 . This effect is negligible when p(0) is small. After extracting p(n) from P(↓), we fit it using the number-state probability distribution for the displaced-squeezed state 30 : p(n) = κ (tanh r)ⁿ/(2ⁿ n! cosh r) exp[−|α|² − ½(α*² e^{iϕ_s} + α² e^{−iϕ_s}) tanh r] |H_n(ζ)|², with ζ = (α cosh r + α* e^{iϕ_s} sinh r)/√(e^{iϕ_s} sinh 2r), where κ is a constant that accounts for the infidelity of the state during the application of the SDF, and the H_n(x) are the Hermite polynomials. The direction of the SDF is aligned along either the squeezing quadrature or the anti-squeezing quadrature of the state. Therefore we set arg(α) = 0 and fix ϕ_s = 0 and π for fitting the data of the short axis and the long axis of the squeezed state, respectively. This allows us to obtain the values of r and |α| for the state we created. For the cases of smaller displacements (from Fig. 3a–e ), we set κ = 1. For the data set of |α| ≈ 4.6 ( Fig. 3f ), κ is a fitting parameter that gives us a value of 0.81 ± 0.1. We note that in this case 4% of the expected population lies above n = 29 but we are unable to extract these populations from our data. The Mandel Q parameter 28 is defined as Q = (⟨(Δn)²⟩ − ⟨n⟩)/⟨n⟩, where ⟨n⟩ and ⟨(Δn)²⟩ are the mean and variance of the probability distribution. For a displaced-squeezed state these are given in ref.
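The displaced-squeezed number-state distribution can be evaluated directly. The sketch below is an illustrative re-implementation (not the fitting code used for Fig. 3), assuming the standard textbook form of the distribution with κ = 1 (an ideal state), using a Hermite-polynomial recurrence, and checking the result against the analytic mean ⟨n⟩ = |α|² + sinh²r:

```python
import math, cmath

def displaced_squeezed_pn(alpha, r, phi_s, nmax):
    """p(n) for the displaced-squeezed state D(alpha)S(r e^{i phi_s})|0>, kappa = 1."""
    e = cmath.exp(1j * phi_s)
    zeta = (alpha * math.cosh(r) + alpha.conjugate() * e * math.sinh(r)) \
           / cmath.sqrt(e * math.sinh(2 * r))
    pref = cmath.exp(-abs(alpha) ** 2
                     - 0.5 * (alpha.conjugate() ** 2 * e
                              + alpha ** 2 / e) * math.tanh(r)).real / math.cosh(r)
    # Hermite recurrence: H_{n+1}(x) = 2x H_n(x) - 2n H_{n-1}(x)
    H_prev, H = 0j, 1 + 0j
    p = []
    for n in range(nmax):
        p.append(pref * math.tanh(r) ** n / (2 ** n * math.factorial(n)) * abs(H) ** 2)
        H_prev, H = H, 2 * zeta * H - 2 * n * H_prev
    return p

p = displaced_squeezed_pn(1.5 + 0j, 0.6, 0.0, 60)
mean = sum(n * pn for n, pn in enumerate(p))
print(round(sum(p), 6), round(mean, 4))  # ≈ 1.0 and ≈ |α|² + sinh²r ≈ 2.6553
```

The same routine with phi_s = π gives the long-axis (anti-squeezed) distribution, whose much larger variance matches ⟨(Δn)²⟩ = |α cosh r − α* e^{iϕ_s} sinh r|² + 2 sinh²r cosh²r.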
19 as ⟨n⟩ = |α|² + sinh²(r) and ⟨(Δn)²⟩ = |α cosh(r) − α* e^{iϕ_s} sinh(r)|² + 2 sinh²(r) cosh²(r). These forms were used to produce the curves given in Fig. 4 . Applications of SWESs The SWES may offer new possibilities for sensitive measurements that are robust against certain types of noise. An example is illustrated in Extended Data Fig. 4 , in which we compare an interferometry experiment involving the use of a SWES versus a more standard SC state based on coherent states. In both cases the superposed states have a separation of |2α| obtained using a SDF. For the SWES this force is aligned along the squeezed quadrature of the state. The interferometer is closed by inverting the initial SDF, resulting in a residual displacement that in the ideal case is zero. One form of noise involves shot-to-shot fluctuations in the oscillator frequency. On each run of the experiment, this would result in a small phase shift Δθ arising between the two superposed motional states. As a result, after the application of the second SDF pulse the residual displacement would be α_R = 2iα sin(Δθ/2), which corresponds to the states being separated along the P axis in the rotating-frame phase space. The final state of the system would then be a superposition with this residual wavepacket separation, with a corresponding state overlap given by X(α_R, ξ). Therefore the contrast will be higher for the SWES ( Extended Data Fig. 4a ) than for the coherent SC state ( Extended Data Fig. 4b ) by a factor X(α_R, ξ)/X(α_R, 0) = exp[|α_R|²(1 − e^{−2r})/2]. Although in our experiments other sources of noise dominate, in other systems such oscillator dephasing may be more significant.
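The robustness factor above can be verified numerically: build D(α) and S(ξ) in a truncated Fock space and compare the overlap of a squeezed vacuum with its momentum-displaced copy against exp(−|α_R|² e^{−2r}/2). Assumed conventions in this sketch (not the authors' code): S(ξ) = exp[(ξ*â² − ξâ†²)/2], with real ξ squeezing the x quadrature by e^{−r}:

```python
import numpy as np
from scipy.linalg import expm

N = 120                                    # Fock-space truncation
a = np.diag(np.sqrt(np.arange(1, N)), 1)   # annihilation operator
ad = a.conj().T

def displacement(alpha):
    return expm(alpha * ad - np.conj(alpha) * a)

def squeeze(r):  # real r squeezes the x quadrature by e^{-r}
    return expm(0.5 * r * (a @ a - ad @ ad))

r, aR = 1.08, 1.5j            # squeezing, and a residual displacement along P
vac = np.zeros(N); vac[0] = 1.0
sq = squeeze(r) @ vac
overlap_sq = abs(sq.conj() @ (displacement(aR) @ sq))
overlap_coh = abs(vac @ (displacement(aR) @ vac))   # coherent-state case, e^{-|aR|²/2}

print(round(overlap_sq, 4))                 # ≈ exp(-|aR|² e^{-2r}/2) ≈ 0.878
print(round(overlap_sq / overlap_coh, 2))   # ≈ exp[|aR|²(1 - e^{-2r})/2] ≈ 2.71
```

Squeezing along the separation axis therefore leaves the recombined wavepackets far more tolerant of the residual P displacement than coherent wavepackets of the same separation.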
ETH professor Jonathan Home and his colleagues reach deep into their bag of tricks to create so-called 'squeezed Schrödinger cats.' These quantum systems could be extremely useful for future technologies. Quantum physics is full of fascinating phenomena. Take, for instance, the cat from the famous thought experiment by the physicist Erwin Schrödinger. The cat can be dead and alive at once, since its life depends on the quantum mechanically determined state of a radioactively decaying atom which, in turn, releases toxic gas into the cat's cage. As long as one hasn't measured the state of the atom, one knows nothing about the poor cat's health either - atom and kitty are intimately "entangled" with each other. Equally striking, if less well known, are the so-called squeezed quantum states: Normally, Heisenberg's uncertainty principle means that one cannot measure the values of certain pairs of physical quantities, such as the position and velocity of a quantum particle, with arbitrary precision. Nevertheless, nature allows a barter trade: If the particle has been appropriately prepared, then one of the quantities can be measured a little more exactly if one is willing to accept a less precise knowledge of the other quantity. In this case the preparation of the particle is known as "squeezing" because the uncertainty in one variable is reduced (squeezed). Schrödinger's cat and squeezed quantum states are both important physical phenomena that lie at the heart of promising technologies of the future. Researchers at the ETH have now succeeded in combining both in a single experiment. Squeezing and shifting In their laboratory, Jonathan Home, professor of experimental quantum optics and photonics, and his colleagues catch a single electrically charged calcium ion in a tiny cage made of electric fields. Using laser beams they cool the ion down until it hardly moves inside the cage.
Now the researchers reach into their bag of tricks: they "squeeze" the state of motion of the ion by shining laser light on it and by skilfully using the spontaneous decay of its energy states. Eventually the ion's wave function (which corresponds to the probability of finding it at a certain point in space) is literally squashed: now the physicists have a better idea of where the ion is located in space, but the uncertainty in its velocity has increased proportionately. "This state squeezing is an important tool for us", Jonathan Home explains. "Together with a second tool - the so-called state-dependent forces - we are now able to produce a 'squeezed Schrödinger cat'." To that end the ion is once more exposed to laser beams that move it to the left or to the right. The direction of the forces induced by the laser depends on the internal energy state of the ion. This energy state can be represented by an arrow pointing up or down, also called a spin. If the ion is in an energy superposition state composed of "spin up" and "spin down", the force acts both to the left and to the right. In this way, a peculiar situation is created that is similar to Schrödinger's cat: the ion now finds itself in a hybrid state of being on the right (cat is alive) and on the left (cat is dead) at the same time. Only when one measures the spin does the ion decide whether to be on the right or on the left. Stable cats for quantum computers The Schrödinger cat prepared by professor Home and his collaborators is special in that the initial squeezing makes the ion states "left" and "right" particularly easy to distinguish. At the same time, it is also pretty large as the two ion states are far apart. "Even without the squeezing our 'cat' is the largest one produced to date", Home points out. "With the squeezing, the states 'left' and 'right' are even more distinguishable - they are as much as sixty times narrower than the separation between them."
All this isn't just about scientific records, however, but also about practical applications. Squeezed Schrödinger cats are particularly stable against certain types of disturbances that would normally cause the cats to lose their quantum properties and become ordinary felines. That stability could, for instance, be exploited in order to realize quantum computers, which use quantum superposition states to do their calculations. Furthermore, ultra-precise measurements could be made less sensitive to unwanted external influences.
10.1038/nature14458
Biology
Study cements age and location of hotly debated skull from early human Homo erectus
Hammond, A.S., Mavuso, S.S., Biernat, M. et al. New hominin remains and revised context from the earliest Homo erectus locality in East Turkana, Kenya. Nat Commun 12, 1939 (2021). doi.org/10.1038/s41467-021-22208-x Journal information: Nature Communications
https://doi.org/10.1038/s41467-021-22208-x
https://phys.org/news/2021-04-cements-age-hotly-debated-skull.html
Abstract The KNM-ER 2598 occipital is among the oldest fossils attributed to Homo erectus but questions have been raised about whether it may derive from a younger horizon. Here we report on efforts to relocate the KNM-ER 2598 locality and investigate its paleontological and geological context. Although located in a different East Turkana collection area (Area 13) than initially reported, the locality is stratigraphically positioned below the KBS Tuff and the outcrops show no evidence of deflation of a younger unit, supporting an age of >1.855 Ma. Newly recovered faunal material consists primarily of C 4 grazers, further confirmed by enamel isotope data. A hominin proximal 3rd metatarsal and partial ilium were discovered <50 m from the reconstructed location where KNM-ER 2598 was originally found but these cannot be associated directly with the occipital. The postcrania are consistent with fossil Homo and may represent the earliest postcrania attributable to Homo erectus . Introduction The KNM-ER 2598 specimen from East Turkana, Kenya is widely recognized as significant because it is one of the oldest fossils attributed to Homo erectus 1 , 2 , 3 , 4 , 5 . KNM-ER 2598 is a thick hominin cranial fragment preserving much of the central occipital bone, including portions of the lambdoidal suture and a distinctive Homo erectus -like occipital torus (Fig. 1 ) 6 . This fossil was collected from the outcrop surface in 1974 and was initially reported as originating from approximately the level of the KBS Tuff in East Turkana collection Area 15 6 . Later work would refine the stratigraphic placement of KNM-ER 2598 to about 4 m below the KBS Tuff and interpret the age as 1.88–1.9 million years (Ma) ago 1 , 7 . Fig. 1: KNM-ER 2598 partial occipital. a Inset image indicates the approximate anatomical location of the KNM-ER 2598 occipital. b Posterior view and right lateral view are shown. 
If KNM-ER 2598 is dated to nearly 1.9 Ma, it is the second chronologically oldest specimen with morphological affinities to H. erectus . The DNH 134 neurocranium from Drimolen is the oldest known Homo erectus specimen 8 . DNH 134 is a juvenile individual, which makes a categorical species attribution difficult to establish, but it has features (e.g., a teardrop-shaped superior profile, flat squamosal suture) which strongly favor a H. erectus attribution 8 . DNH 134 was recovered from a deposit with reversed paleomagnetic polarity with an associated uranium-series electron spin resonance (ESR) date of 2.04 Ma 8 , indicating that this specimen was deposited sometime within the C2r.1r reversed subchron. The most recent Geomagnetic Polarity Time Scale (GPTS 2020) associates the C2r.1r subchron with the 1.934–2.120 Ma time interval 9 , 10 . Both DNH 134 and KNM-ER 2598 are critically important fossils because they are slightly older than those recovered from Dmanisi in the Republic of Georgia. The Dmanisi hominin fossils may be as old as 1.78 Ma 11 , and occupation of the site appears to extend to 1.85 Ma 11 . The Dmanisi dates, which approach the dates for the African H. erectus specimens, raise the possibility that Homo erectus origins could have occurred in Eurasia rather than on the African continent 4 . Accordingly, KNM-ER 2598 is key to anchoring the earliest evolution and dispersals of Homo erectus , but some authors have raised doubts about the age of KNM-ER 2598 12 , 13 . It has been suggested that the altimetric position of KNM-ER 2598 below the KBS Tuff may have resulted from deflation of a stratigraphically younger horizon (e.g., KBS Member) that is no longer visible on the exposed upper Burgi Member outcrop surface 12 , 13 . Regrettably, few details regarding the provenience of the fossil were offered in the initial publications 6 , 14 . Given the importance of KNM-ER 2598 for placing the early evolution and dispersal of H.
erectus within Africa, it has become essential to provide a geochronological context for the KNM-ER 2598 locality. Here we report on new investigations into the KNM-ER 2598 site location, geology, and paleoecology. The geological data presented here support the interpretation that the KNM-ER 2598 occipital derives from the upper Burgi Member of the Koobi Fora Formation, conservatively dating the fossil to >1.855 Ma. Our findings also correct the location of the KNM-ER 2598 locality, demonstrating that it is situated within the boundaries of collection Area 13 rather than Area 15 in East Turkana. We report on a new hominin ilium and metatarsal recovered within close proximity of where KNM-ER 2598 originated. These hominin fossils are consistent with Homo erectus , potentially making these the oldest postcranial fossils attributable to the taxon. Finally, we contextualize the paleohabitat in East Turkana Area 13 through faunal abundance data, isotopic analyses of mammalian dental enamel, and petrographic data. Results Identification of the locality We used field-based reconnaissance combined with historical imagery to identify the KNM-ER 2598 locality (Supplementary Figs. 1 – 3 ). We used Google Earth imagery to approximate the geospatial location of KNM-ER 2598 in geographic coordinates from historical aerial photographic records housed at the National Museums of Kenya (Supplementary Fig. 1 ). The aerial imagery from the 1970s documents that KNM-ER 2598 was actually discovered within the boundaries of collection Area 13. Photographs from the 1974–1975 field seasons confirm that collections were taking place in collection Area 13, based on landscape features that are still identifiable (Supplementary Fig. 2 ). Upon physical inspection of the reconstructed location, a large collapsed sandstone cairn was identified at approximately the same coordinates (N 4.26984, E 36.33848, WGS84; altitude 413.5 m above sea level) reconstructed from the aerial imagery.
The sandstone cairn (Supplementary Fig. 3 ) is consistent with the markers used in the 1970s, prior to the positioning of cement plinths as hominin markers. Geological context and age Survey of the rediscovered KNM-ER 2598 locality and nearby areas within Area 13 allowed us to identify two distinct aggregates (Burgi and KBS Members) and a tuff horizon (the KBS Tuff) that marks the boundary between the two units (Fig. 2 ). The tuff horizon is missing in some locations due to erosion episodes in the KBS succession. However, the eastward interpolation of the geologic data for this ash level and topography coincides with coordinates for a known KBS Tuff sample (IL02-128 from ref. 15 ; Fig. 3c ). The distinct sandstones associated with the KBS and upper Burgi Members (Figs. 3 – 4 , Supplementary Fig. 4 ) are characterized by variable lithofacies changes, both laterally and vertically in section. In addition, there are consistent lithological features that allow for the lateral correlation and stratigraphic association of the sedimentary sequences. Fig. 2: Stratigraphic sections within the study area and correlations with sections from nearby Area 10 and 130. Note that the KBS Tuff (KBT) occurs in a gravel-silty unit (Section A) occasionally removed by erosion (Section B). In the absence of the tuff, this disconformity marks the boundary between KBS and upper Burgi deposits (Section B). Section locations are provided in the inset medallion map. See Fig. 3 for scale and orientation of inset map, and explanation of data points. Area 10 section is PNG-10.1 from ref. 17 . Area 130 sections are sections 16 and 25 from ref. 67 . Full size image Fig. 3: The KNM-ER 2598 fossil locality. a Detailed map of the promontory that corresponds to upper Burgi Member fossil Cluster-1 in Area 13 (including KNM-ER 2598). Here we identify tuff outcrops using the field ID “KBTX” where X refers to the sequence of outcrops identified. 
Local exposures of the KBS Tuff (KBT1-KBT3) and the local lithological markers (sandstones KS, UB1, UB2, and UB3) are indicated. The contour interval is 0.5 m between topographic lines. b Litho-stratigraphic column for the sedimentary exposures in the study area. c General view of locations where the upper Burgi fossils were collected. The nearby KBS Tuff location sampled by Gathogo and Brown 15 is indicated (IL02-128). Map was created using ESRI ArcGIS Pro (version 2.1). The source data underlying Fig. 3 are provided as Supplementary Data 1. Full size image Fig. 4: Mineralogy of Burgi Member ( n = 11) and KBS Member ( n = 12) sandstone thin-sections from Area 13. a The UB1 sandstone lamination fabric as seen through the parallel arrangement of minerals with subrounded moderately sorted grains embedded in a calcite cement in cross-polarized light. b KS sandstone occurrences showing poorly sorted, angular grains of an immature sediment with distinct mineralogy as demonstrated by the presence of igneous rock fragments. UB2 sandstone spicules of megascleres sponges with monaxon and triaxon forms in c cross-polarized and d plane-polarized light. e A QFL ternary plot showing percent composition of upper Burgi (UB1, UB2) and KBS (KS) sandstones sampled in Area 13. CC: calcite cement, P: plagioclase, Q: quartz, Qp: polycrystalline quartz, white arrows: examples of sponge spicules. The source data underlying Fig. 4e are provided in the Source data file. Full size image The lower aggregate in the study area is upper Burgi Member and is characterized by alternating beds of mudstones and sandstones. The mudstones contain abundant pedogenic carbonate nodules and fine silt laminae. There are three sandstone beds in this lower aggregate. The lower two beds (UB2 and UB3 in Figs. 2 – 3 ) are laterally restricted and characterized by trough cross-bedding indicative of fluvial deposition. 
The uppermost sandstone (UB1) is underlain by a laminated siltstone which coarsens upward into a fossiliferous cross-bedded sandstone. The uppermost beds attributed to the Burgi Member consist of an upward-fining sequence of silts and muds. The sandstones in this aggregate (UB1-3) are heterolithic but are petrographically similar (Fig. 4 ). The most common grains are poly- and monocrystalline quartz and feldspars (microcline and plagioclase), with smaller amounts of mica and other silicates. Bioclasts are present in the form of diverse siliceous spicules from freshwater megasclere sponges (Fig. 4 ). New hominin fragments found at the KNM-ER 2598 site (see below) were associated with the UB1 sandstone in the lower aggregate. The upper aggregate in the study area is the KBS Member. The KBS Member sandstone (hereafter KS) is compositionally distinct, coarser, and thicker than those found in the upper Burgi Member (Fig. 4 ). The KS sandstone has an erosive base locally grading into a subangular yellow-red matrix-supported polymictic conglomerate (Fig. 2 ). KS can be differentiated from the underlying UB1-3 both macro- (Supplementary Fig. 4 ) and microscopically (Fig. 4 ). Detailed information on the UB1, UB2, and KS mineralogy and sandstone microstructure is provided in Supplementary Note 1. Survey revealed two locations in the upper Burgi Member sediments with fossils exposed on the surface (e.g., Cluster-1 and Cluster-2; Fig. 3 ). Cluster-1 (which surrounds the reconstructed KNM-ER 2598 discovery site) is located at the end of a promontory, in a larger depression where a modern rain drainage system eroded the KBS and younger units. Volcanic ash outcrops attributed to the KBS Tuff, and sandstone beds corresponding to upper Burgi and KBS Members, can be traced for several hundred meters on the rim of the depression.
Cluster-1 was measured as approximately 3 m below the KBS Tuff, which is consistent with previous interpretations placing this surface at ~4 m below the tuff 1 , 7 . Site deflation was excluded as a possibility by examination of the sandstones and rock debris within 50 m of the cairn. The rocks present on the surface are exclusively associated with those of the upper Burgi Member based on mineralogy (Fig. 4 ) and stratigraphic context (Figs. 2 – 4 ). That is, the overlying KS sandstones (i.e., polymictic subangular sandstones with granules larger than 2 cm), or fragments of KS, are not present at the Cluster-1 location and there is no remnant nor evidence of the sand and clay/silt units that sit between the UB1 sandstone and the KBS Tuff. Cluster-2 is located on an upper Burgi Member deposit adjacent to a third volcanic ash outcrop (KBT3 in Fig. 3 ). The age range of the KBS Tuff (1.876 ± 0.021 Ma 16 ), incorporating the measurement error, is 1.855–1.897 Ma. The fossils from the UB1 sandstone originated below the KBS Tuff and must therefore be older than 1.855 Ma. This estimate should be interpreted as a conservative theoretical upper constraint of the age range given that the UB1 sandstones sit 4 m below the KBS Tuff. In regard to the lower constraint on the age range, the UB1 sandstones and overlying fine-grained sediments of the upper Burgi Member can be correlated to a published lithologically-similar section in neighboring Area 10 17 . This correlation would assign the UB1 unit and the fossils deriving from it to a level above the Borana Tuff. However, the age of the Borana Tuff is as yet unknown. Correlations to other lateral tuff markers in the Shungura Formation are not completely resolved (see the “Discussion” section), allowing only a qualitative assignment in the proximity of the base of the Olduvai Subchron (1.934 Ma), but not excluding a slightly older age. 
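The conservative age bound quoted above follows directly from the Ar/Ar date of the KBS Tuff. A trivial check, using the values from the text:

```python
# KBS Tuff age ± error, in Ma (ref. 16 in the text).
age, err = 1.876, 0.021
lower, upper = age - err, age + err
print(f"KBS Tuff age range: {round(lower, 3)}-{round(upper, 3)} Ma")  # 1.855-1.897 Ma
# Fossils from the UB1 sandstone lie below the tuff, hence are older than ~1.855 Ma.
```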
Fossils Hominin cranial vault fragments, a partial ilium, and a proximal 3rd metatarsal (MT3) were collected from the upper Burgi Member of Area 13. Whereas the vertebrate fauna was collected in two fossiliferous clusters (the previously mentioned Cluster-1 and -2; Fig. 3 ), all hominin material was recovered within Cluster-1 (i.e., <50 m of the KNM-ER 2598 cairn; see also Supplementary Data 1 ). All hominin fossils were weathered surface finds that appear to have been sitting exposed on the surface for several years. A direct association with KNM-ER 2598 could not be established. As such, all of the hominin specimens recovered here were issued distinct NMK accession numbers. Five small fragments that are likely to have originated from a hominin cranial vault were recovered (Supplementary Fig. 5 ; KNM-ER 77066, KNM-ER 77067, KNM-ER 77068, KNM-ER 77069, KNM-ER 77070). All 5 of the fragments preserve unclosed suture borders as in KNM-ER 2598 and display the fairly divergent sutural limbs characteristic of Homo erectus crania 3 . However, none of the cranial fragments directly refit with KNM-ER 2598 nor can they be linked definitively vis-à-vis surface texture and coloration. More information on these non-diagnostic fragments is provided in Supplementary Note 2. KNM-ER 77071 is an abraded hominin left proximal MT3 (Fig. 5 ). This specimen was located 29 m from the KNM-ER 2598 cairn. This fragment preserves the proximal shaft and the base, including discernable contact facets for the MT4 laterally and the MT2 medially. The MT4 contact facet is the larger contact facet and is bounded by a deep gulley inferiorly. There is a partial plantar process visible on the plantar aspect of the base. The base is 17.8 mm superoinferiorly in its maximum dimension, with a 10.9 mm superior border and a 5.6 mm inferior border. The metatarsal shaft measures 9.1 mm superoinferiorly by 6.8 mm mediolaterally at the cross-section break. 
KNM-ER 77071 is hominin-like in having a dorsoplantarly tall base relative to the width 18 , flat base, and intermetatarsal contact facets 19 , and in lacking the medial and lateral indentations for the transmission of intermetatarsal ligaments that are characteristic of apes 20 . The complete MT3 would have been slightly larger and more robust than the OH 8 proximal MT3. Like other hominin MT3s, such as H. erectus specimens D2021 21 and KNM-ER 803 22 , KNM-ER 77071 has a single weak dorsal contact for articulation with MT2. There are no characters preserved on this proximal MT3 that morphologically or functionally distinguish it from other Plio-Pleistocene hominins (Fig. 5 ). Fig. 5: Proximal 3rd metatarsal anatomy. The proximal left third metatarsal KNM-ER 77071 is inset to show lateral, proximal, and medial views. Comparative MT3s for Homo erectus (KNM-ER 803, D2021, D3479), Homo habilis (OH 8), Homo naledi (UW 101-1457 68 ), humans, and chimpanzees are shown below in lateral, proximal, medial, and dorsal views. MT4 facets indicated by solid arrows, and MT2 facets indicated by the dashed arrows. Like the other hominin fossils, KNM-ER 77071 has only a single dorsal MT2 facet on the medial side. Metatarsal models scaled to the same height of the base for visual comparison. UW 101-1457, D2021, and D3479 images are mirrored for consistency. KNM-ER 77072 is a hominin partial ilium (Fig. 6 ). This specimen was located 40 m from the KNM-ER 2598 cairn. The specimen preserves most of the iliac tuberosity, greater sciatic notch region, and much of the body, although the ala is broken off anteriorly and superiorly. The ilium, as preserved, measures 84.0 mm anteroposteriorly and 55.5 mm superoinferiorly. There is no iliac pillar visible, although this may have been present and more anteriorly situated (Fig. 6 ).
The pear-shaped sacroiliac joint in KNM-ER 77072 is well preserved and posteriorly bounded by a deep postauricular groove and a mound of bone on the most dorsal region of the iliac tuberosity. The iliac tuberosity is 20.6 mm thick. The sacroiliac joint measures 43.0 mm superoinferiorly and ~26 mm across the widest portion. The greater sciatic notch is wide and, although incomplete distally, would likely have had a sciatic notch angle >100°. The sciatic notch, as preserved, is 28.7 mm across and ~14 mm deep, and has a shallow appearance. The gluteal surface is well preserved, and a line demarcating the boundary between the gluteus medius and gluteus minimus origins can be observed.

Fig. 6: Ilium anatomy. a Left partial ilium KNM-ER 77072 shown in medial and lateral views. b Scanned models of contemporaneous ilia from East Turkana show differences in absolute size and iliac pillar configuration. Note that diminutive KNM-ER 5881 has an iliac pillar (red dot indicating iliac pillar base) that is modest in thickness but originates posteriorly relative to the anterior border of the ilium, whereas KNM-ER 3228 has a massive iliac pillar that originates more anteriorly. Upper Burgi specimen detail is also shown in Supplementary Fig. 7. c Homo erectus ilia available for comparative study. KNM-ER 77072, KNM-ER 1808, KNM-WT 15000, and UA 173/405 share the following features, when preserved: thick dorsal regions of the iliac tuberosity, weak muscle markings on the gluteal surfaces, a moderately thick acetabulosacral buttress, wide and shallow greater sciatic notches (indicated by a double-ended arrow), auricular surfaces that are small compared to Homo sapiens, deep postauricular grooves (bold arrow), and a relatively anteriorly-positioned and weakly-developed iliac pillar. KNM-ER 3228, KNM-WT 15000, UA 173, and OH 28 images are mirrored for consistency. KNM-WT 15000 was scanned from a cast that included a reconstructed iliac crest. A 1-cm scale is shown below each fossil.
Qualitative comparisons align the morphology of KNM-ER 77072 with genus Homo. The specimen differs from australopiths in robusticity of the ilium, especially the iliac tuberosity 23 and acetabulosacral pillar 24. KNM-ER 77072 is most easily compared with ilia associated with Homo erectus (i.e., KNM-ER 1808 and KNM-WT 15000) or attributed to the taxon (i.e., UA 173/405, BSN49/P27, OH 28, KNM-ER 3228). Most potential Homo erectus ilia (excluding OH 28 and KNM-ER 3228) share the following features with KNM-ER 77072: dorsally thick iliac tuberosities, weakly developed gluteal muscle markings, a moderately thick acetabulosacral buttress, shallow and wide sciatic notches, auricular surfaces that appear fairly small, and a relatively anterior origin of the weakly-developed iliac pillar (if preserved). KNM-ER 77072 is similar to UA 173 in possessing a deep postauricular groove and a thick dorsal portion of the iliac tuberosity 23. There are no pelves formally associated with Homo habilis or Homo rudolfensis. However, the diminutive KNM-ER 5881 pelvis (Fig. 6, Supplementary Fig. 7) has been suggested to belong to a "non-erectus" species of Homo 25. Although incomplete, KNM-ER 5881 preserves an iliac pillar that originates quite posteriorly but is directed anteriorly 25, reflecting a different pelvic geometry from KNM-ER 77072. Thirty-two non-hominin, taxonomically identifiable fossils were collected from the upper Burgi Member of Area 13 (Fig. 7a, b), including bovids (n = 17), equids (n = 4), suids (n = 3), cercopithecoids (n = 3), a proboscidean (n = 1), a hippopotamid (n = 1), a rhinocerotid (n = 1), a giraffid (n = 1), and a snake (i.e., Serpentes; n = 1). The fossil identifications suggest that the fauna represents primarily C4 grazers, which was confirmed by isotopic analysis of 17 enamel samples. Carbon isotope data for the enamel samples range from −1.8 to +2.5‰ (median = +1.4‰; Table S2, Fig. 7c). Oxygen isotope values (Fig.
7d) range from −3.5 to +3.6‰ (median = +0.1‰). The single hippopotamid had the most depleted value, and an alcelaphine bovid had the most enriched value.

Fig. 7: Non-hominin fossils recovered in Area 13 from 2017 to 2019. Thirty-two taxonomically identifiable vertebrate fossils were collected (a, b). Miscellaneous bovids are those specimens for which a specific tribe could not be determined. Antilopini and Aepycerotini cannot consistently be differentiated based on isolated molars and are pooled together. c, d New and existing upper Burgi Member enamel isotopic data. New enamel isotopic data from Area 13 (red triangles) are not incorporated into the boxplots. Median values are represented by vertical lines within the box, the edges of the boxes represent quartile ranges, horizontal dashed lines represent the range, and outlier values are plotted as circles outside of the range. Raw δ13C and δ18O values, presented as circles superimposed on the box plots, are consistent with open habitats that had C4-dominated diets and locally arid conditions, and with the upper Burgi isotopic signatures more broadly. Silhouettes are used with attribution to A. Venter, Herbert H. T. Prins, David A. Balfour, and Rob Slotow (vectorized by T. Michael Keesey) for Reducini, Alcelaphini, Tragelaphini, and miscellaneous bovids. The source data underlying Fig. 7 are provided in the Source data file.

Discussion

Our fieldwork found a discrepancy between the historical records of collecting areas in East Turkana and the modern, formalized boundaries of these areas. The East Turkana collection areas are principally defined by landscape features (e.g., ephemeral streams) and have been well-mapped (e.g., Supplementary Fig. 1) since the mid-1980s 26. The KNM-ER 2598 coordinates, determined through aligning 1970s aerial photos with modern satellite imagery, are located in what is defined as Area 13.
This finding contradicts the original publication 6 , which listed Area 15 as the location where KNM-ER 2598 was collected. Historical records are lacking but photographs from the 1974–1975 field seasons confirmed that collections were taking place in the location that is currently designated as Area 13 (Supplementary Fig. 2 ). Area 15 is now more widely recognized as being principally composed of Lonyumun Member sediments (~4.0–4.3 Ma) 15 , 27 , and is therefore unlikely to preserve Early Pleistocene sediments and fossils. In contrast, Area 13 has recently produced a number of other early Homo specimens attributed to the upper Burgi and KBS Members 17 , 28 , 29 . One of these, a Homo habilis dentition (KNM-ER 64060), originated <1.5 km from this location and is dated to ~2.0 Ma 17 , documenting the close temporal and geographic proximity of early H. erectus and H. habilis in East Turkana. The discrepancy in East Turkana collections recorded as Area 15 versus Area 13 almost certainly extends to other vertebrate fossils recovered in the 1970s. The publicly-available Turkana Basin Database 30 records 44 distinct specimen numbers for Pleistocene fossils from collection Area 15. These fossils are likely to derive from other collection areas based on our current understanding of the collection area boundaries and geology contained within these regions. To our knowledge, the only documented vertebrate fossils which originate in Area 13 are those reported in this study. Further investigations that combine archival information and modern reconnaissance are needed to establish the provenience of the 1970s fauna reported from Area 15. The most straightforward interpretation of the geological data presented here is that the new hominin fossils, and presumably KNM-ER 2598, weathered out of the UB1 sandstone and lay exposed on the surface until they were discovered. We found no evidence of deflation at the KNM-ER 2598 cairn location and surrounding Cluster-1 area. 
The sandstones and sandstone fragments found in Cluster-1 have distinctive sedimentary features (i.e., trough and planar cross-bedding) aligning them with UB1 and, furthermore, are petrographically distinctive from the younger sandstones (KS) overlying the KBS Tuff. A mixture of surface sandstones was found about 200 m away from the KNM-ER 2598 cairn location, in locations where torrential runoff moves the KS sandstones into drainage areas overlying Burgi Member outcrops (e.g., “modern KBS debris” shown in Fig. 3 ). Although we cannot completely exclude the possibility that these hominin fossils are derived from younger sediments, the hypothesis that the fossils on the Cluster-1 surface could result from deflation of a younger sedimentary package that has subsequently eroded away is not supported by any of our observations. The KBS Tuff (1.876 ± 0.021 Ma 16 ) acts as an upper (=minimum) age constraint to KNM-ER 2598 and the fossils described here. The maximum age of these fossils is currently unresolved. Lacking lower (=maximum) age constraints in our study area, we correlated the UB1 sandstone and overlying fine-grained sediments of the upper Burgi Member with a nearby lithologically-similar sedimentary succession 17 , which tentatively aligns the UB1 unit (and associated fossils) to a level above the Borana Tuff. This discontinuous tuff occurs within a complex stratigraphic succession and past correlations resulted in conflicting, often contradictory, stratigraphic positions 31 , 32 and association with both normal and reverse paleomagnetic polarity intervals 17 , 33 , 34 , 35 . A conservative approach 31 correlated the Borana Tuff with “an unnamed tuff in the upper G Member” of the Shungura Formation. This assignment does not exclude a reverse polarity (C2r.1r, beginning of the Olduvai Subchron) for the tuff 35 that would expand the range of the UB1. 
If these East Turkana fossils were deposited in the C2r.1r chron, they would be contemporaneous with the oldest Homo erectus specimen from Drimolen (DNH 134). We consider this scenario unlikely given that DNH 134 is constrained within the reversed subchron (1.934–2.120 Ma) and these East Turkana fossils lay just a few meters below the KBS tuff (1.876 ± 0.021 Ma 16 ), likely toward the base of the Olduvai normal subchron (1.934 Ma 9 , 10 ). The maximum age constraint could eventually be resolved if the magnetic polarity of the UB sandstones and of the Borana Tuff are clarified. The newly recovered hominin metatarsal and ilium are consistent with genus Homo . The flat MT3 base and flat contact facet for articulation with the MT4 suggest limited mobility at both the tarsometatarsal joint and lateral intermetatarsal joint, indicating that a transverse arch is present in the foot of KNM-ER 77071. However, KNM-ER 77071 lacks an MT3 plantar facet for contact with the MT2, as is also the case for other Homo specimens (i.e., OH 8, D2021, KNM-ER 803). The lack of a plantar facet on the MT3 has been proposed to have allowed moderate cuneo-metatarsal mobility 21 . It may also reflect a slightly lower apex of the transverse arch than in Homo sapiens , whereby the MT2 and MT3 would only make contact along the dorsal portion of their bases. This may suggest a slightly more mobile transverse arch in earlier Homo than in Homo sapiens , but testing this idea is contingent on the discovery of more complete hominin feet. The KNM-ER 77072 partial ilium adds to the growing list of pelvic specimens that are likely attributed to Homo erectus (Fig. 6 ). KNM-ER 77072 is similar in size and morphology to KNM-ER 1808, KNM-WT 15000, and UA 173—specimens which most workers agree are Homo erectus . KNM-ER 77072 also appears generally consistent with the morphology described for the probable Homo erectus pelvis from Gona (BSN49/P27) 36 . 
These pelves share, among other features, dorsally thick iliac tuberosities, weak muscle markings on the gluteal surfaces, shallow and wide sciatic notches, and fairly small auricular surfaces. Wide and shallow greater sciatic notches are seen in earlier taxa like Australopithecus , so this cannot be excluded as a plesiomorphic character (as opposed to indicating female sex or increased obstetric demands; see also ref. 36 ). These fossils collectively point to an ilium that is only moderately robust in Homo erectus , posing a challenge for including the absolutely large and robust specimens from eastern Africa (i.e., OH 28, KNM-ER 3228) within the Homo erectus hypodigm. The link between these fossils and Homo erectus has always been tenuous; OH 28 is assumed to be Homo erectus based on association with a Homo erectus -like femoral diaphysis 37 , and KNM-ER 3228 has been aligned with Homo erectus 38 , 39 based on similarities with OH 28 (but has also been considered a candidate for Homo rudolfensis 40 ; see below). Additional pelves associated with Homo erectus craniodental material, ideally sampling multiple individuals from a single locality such as Dmanisi, would clarify the range of morphological variation we can accept in this taxon. KNM-ER 77072 also confirms that diversity in pelvic morphology is present in genus Homo at ~2 Ma, hinting at postcranial differences that may accompany the taxonomic diversity present in East Turkana. There are at least three species of Homo in East Turkana during the brief interval from 1.85 to 1.95 Ma: Homo habilis , Homo rudolfensis , and Homo erectus 41 . There are also now three morphologically distinct pelvic specimens from this same interval: KNM-ER 5881 25 , KNM-ER 3228 38 , 42 , and KNM-ER 77072 (Fig. 6 , Supplementary Fig. 7 ). The KNM-ER 5881 ilium (~1.9 Ma) differs morphologically from other fossils and is associated with a femur that is similar to Homo habilis (i.e., OH 62) in cross-section 25 . 
The femur cross-sectional shape, along with the pelvic morphology, has led to the conclusion that KNM-ER 5881 is likely attributed to either Homo habilis or Homo rudolfensis (for which postcranial morphology is entirely unknown) 25 . Notably, KNM-ER 5881 has a posteriorly originating iliac pillar, whereas there is no evidence of the iliac pillar in what is preserved of KNM-ER 77072. This could mean that a discernable iliac pillar was not present in KNM-ER 77072 or, more likely, that it was weakly developed and more anteriorly positioned in this specimen. Whereas KNM-ER 5881 is diminutive, KNM-ER 3228 (~1.95 Ma) is from a large and heavily-muscled individual with a marked gluteal surface concavity. It would require Gorilla -like or Pongo -like levels of body size dimorphism for KNM-ER 5881 and KNM-ER 3228 to represent male and female individuals of the same species 25 . Moreover, it appears that all three of these contemporaneous fossils sample individuals of different body size based on the lower ilium length and the iliac tuberosity size (Supplementary Fig. 7 ). Finally, although greater sciatic notch angle is sexually dimorphic in modern humans (but not necessarily pre-human hominins 43 ), KNM-ER 3228 apparently represents the first example of a narrow (male or ‘masculine’) sciatic notch in the fossil record, a condition not clearly detected again until late Homo . If KNM-ER 77072 is Homo erectus , the disparities in size and morphology among these contemporaneous specimens would lend support to the idea that KNM-ER 5881 and KNM-ER 3228 derive from species other than Homo erectus . The alternative possibility is that some (or all) of these specimens belong to a single, postcranially variable species of Homo . The pelvic diversity seen in the Turkana Basin during this narrow time interval suggests there was substantial selective pressure operating on pelvic and hip function fairly early in the evolution of Homo . 
It is possible that the hominin specimens described here come from a single individual given the spatial proximity in which they were all recovered. None of the newly recovered hominin cranial fragments directly refit with KNM-ER 2598, although they are morphologically consistent with KNM-ER 2598 and other Homo erectus specimens (Supplementary Figs. 5 – 6 ). The possibility that the newly recovered hominin fragments reported here are associated with KNM-ER 2598 is even more plausible when considering that our large surface survey of the upper Burgi Member deposits produced only a few dozen taxonomically identifiable fossils. Multiple lines of evidence suggest that this locality was near a well-watered and open, grassy environment. The non-hominin fossil taxa recovered in Area 13 are primarily hypsodont C 4 grazers with essentially no mixed feeders. Carbon isotope values fall within the range, or in some cases slightly outside the range, of those reported for the same taxa from the upper Burgi Member (Fig. 7c ; refs. 44 , 45 , 46 , 47 ). The oxygen isotope values are generally more positive than other upper Burgi Member samples, including a single hippopotamid, which potentially indicates more evaporated local source waters or more arid conditions than other upper Burgi Member locations (Fig. 7c ). The presence of sponges in the UB1 sandstone indicates a long-term, stable body of water at this locality 48 . The non-hominin fossil taxa and the limited suite of isotopic data represent a fauna associated with open habitats that had C 4 -dominated diets. Existing paleosol carbonate data indicate significant variation in woody cover during the upper Burgi Member (between ~10 and 65%) 45 , 49 , 50 , 51 . It is possible that the new enamel data reflect a more open subset of this heterogeneous environment. In summary, new investigations at the KNM-ER 2598 locality have produced several key findings. 
The newly discovered hominin postcranial elements include a partial ilium and a proximal metatarsal. Although neither element can be definitively assigned to the same individual as KNM-ER 2598, the ilium and metatarsal are morphologically consistent with Homo erectus. The KNM-ER 2598 locality is located in a different East Turkana collection area than initially reported in the 1970s, which may have resulted in incorrect interpretations of both the hominin and faunal material over the last few decades. The new fauna consists primarily of C4 grazers and suggests a fairly open paleoenvironment. This study confirms that the location where KNM-ER 2598 was discovered is associated with distinct sandstones that are exclusively associated with the upper Burgi Member. The KNM-ER 2598 site is stratigraphically positioned ~3 m below the KBS Tuff, requiring that the fossils from this location are >1.855 Ma. The KNM-ER 2598 occipital, as well as the new ilium and metatarsal reported here, are among the oldest fossil specimens likely attributable to Homo erectus.

Methods

Identification of the locality

East Turkana fossil locations in the 1970s were originally recorded on aerial photographs that continue to serve as records of where fossils were collected (Supplementary Fig. 1). The aerial photographs that documented the KNM-ER 2598 find were captured in 1970 by Hunting Surveys, Ltd. and were housed at the National Museums of Kenya at the time of study. Fossil finds were marked on the photographs with pinpricks, with the corresponding field numbers recorded on the back of each aerial photo. We used Google Earth imagery to approximate the geospatial location of KNM-ER 2598 in geographic coordinates (Supplementary Fig. 1). Photographs from the 1974–1975 field seasons when KNM-ER 2598 was recovered were provided by Tim White (University of California-Berkeley).
These photos support our assertion that this locality, which was relocated from aerial imagery, corresponds to the 1974–1975 field campaign locations (Supplementary Fig. 2).

Geological context

First, the locality was surveyed for exposed outcrops, and previously described sections were investigated 15. Two main fossil clusters were found, one at the KNM-ER 2598 locality (Cluster-1; Fig. 3) and a second at a nearby location (Cluster-2). Volcanic ash demarcates the boundary between the upper Burgi and KBS stratigraphic units and serves as a reference to identify lithological marker horizons in both units (Fig. 3). These lithological elements were then described both in outcrop and in thin section and subsequently assigned to particular facies. Geological specimen description and logging of stratigraphic sections were adapted from standard techniques 52, 53. Descriptions included grain size, color, bed shape, lateral variation in the bed, sedimentary structures, and bed-top and -bottom interactions, as well as any post-depositional features. Thicknesses were measured with a Jacob's staff, which was also used to note prominent stratigraphic boundaries and document lithology across the study area. Structural measurements (strike, dip, and dip direction) were also recorded on bed surfaces to supplement mapping efforts in the study area and note any stratigraphic distortions. Outcrops of tuff and sandstone marker beds (both thickness and horizontal extent) were recorded for geological mapping following ref. 26. The geographic coordinates of the outcrops were acquired using GPS systems. Outcrops and boundaries were then plotted on a topographic map extracted from a digital surface model acquired from Apollo Mapping WorldDEM. We calculated the average bedding by interpolating data from three separate locations with precise geographic coordinates (i.e., the classic three-point problem).
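The three-point interpolation mentioned above can be sketched numerically. The snippet below is our illustration, not the authors' code: the function name and the (easting, northing, elevation) coordinate convention are assumptions. It fits a plane through three points and reports its orientation in the dip-direction/dip convention used in the text (e.g., 221/08).

```python
import numpy as np

def bedding_from_three_points(p1, p2, p3):
    """Classic three-point problem: fit a plane through three
    (easting, northing, elevation) points and return its orientation
    as (dip direction, dip) in degrees."""
    p1, p2, p3 = map(np.asarray, (p1, p2, p3))
    n = np.cross(p2 - p1, p3 - p1)   # normal to the plane
    if n[2] < 0:                     # force the normal to point upward
        n = -n
    # Dip direction: azimuth of the steepest-descent (downhill) vector,
    # i.e., the horizontal projection of the upward normal
    dip_direction = np.degrees(np.arctan2(n[0], n[1])) % 360.0
    # Dip: angle between the bedding plane and the horizontal
    dip = np.degrees(np.arctan2(np.hypot(n[0], n[1]), n[2]))
    return dip_direction, dip

# Hypothetical points (meters) on a plane dipping ~8 degrees toward azimuth 221
print(bedding_from_three_points((0, 0, 0.0), (100, 0, 9.22), (0, 100, 10.61)))
```

With precise GPS elevations for three outcrop points, the same calculation yields the averaged bedding plane reported below.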
The Burgi-KBS boundary was extended beyond our observation points by intersecting the topography with the averaged bedding plane (221/08). After regional lithological descriptions were completed, microstratigraphic work was conducted to better contextualize the KNM-ER 2598 locality. Microstratigraphy was aimed at defining the provenance of surface sandstones and placing these units within the overall geology of the area. We also noted post-depositional features associated with the locality, which include colluvial wash and deflation. In situ sandstones from the upper Burgi and KBS members were collected as reference material during stratigraphic descriptions to compare with the KNM-ER 2598 locality sandstones. Export of geological samples was conducted through a material transfer agreement between the NMK, The George Washington University, and the University of the Witwatersrand, and authorized by the Kenyan Department of Mining. First, surface sandstones were described from petrographic thin section 54. Twelve KBS Member and 11 Burgi Member samples were prepared for thin section at the University of the Witwatersrand School of Geosciences Rock Cutting Laboratory. The surface sandstones were matched with sandstones of known stratigraphic location to determine their provenance. This sourcing assisted in establishing the relative proximity and movement of these surface sandstones, enabling the study of post-depositional conditions such as overwash and deflation. Second, the mineralogical composition of the sandstones was characterized using QFL framework minerals (i.e., quartz, feldspar, other lithological fragments). Medium- to coarse-grained channel sandstones from different parts of the area and at varying stratigraphic levels were sampled and point counted using the Gazzi–Dickinson method 55, 56. This involved selecting 350 random points in a single thin section and determining the mineralogy according to the QFL system.
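The conversion from raw point counts to a ternary plot is simple normalization. The sketch below is our illustration (function names are ours, not from the paper): it turns Gazzi–Dickinson point counts into QFL percentages and maps them to Cartesian coordinates on a unit-edge ternary diagram.

```python
import math

def qfl_percentages(quartz, feldspar, lithics):
    """Normalize raw point counts (e.g., from 350 counted points)
    into QFL percentages."""
    total = quartz + feldspar + lithics
    return tuple(round(100.0 * n / total, 1) for n in (quartz, feldspar, lithics))

def ternary_xy(q_pct, f_pct, l_pct):
    """Map QFL percentages onto a unit-edge ternary diagram with
    Q at the apex, F at lower left, and L at lower right."""
    q, f, l = (v / 100.0 for v in (q_pct, f_pct, l_pct))
    return (l + q / 2.0, q * math.sqrt(3) / 2.0)

# 350 counted points split among the three framework categories
print(qfl_percentages(245, 70, 35))   # → (70.0, 20.0, 10.0)
```

Each thin section contributes one such (x, y) point, so compositional differences between sandstone populations appear as separate clusters on the diagram.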
These mineralogical proportions were counted and converted into percentages before being plotted on a QFL ternary diagram to show mineralogical differences between Burgi and KBS sandstones (Fig. 4).

Fossils

Fossils were collected with the oversight and authority of the National Museums of Kenya (NMK) as mandated by Kenyan law. An Excavation and Exploration license (reference number NMK/GVT/2) was obtained from the Ministry of Sports, Culture, and Heritage through the NMK. The PI (Hammond) retained permits from the Kenyan National Commission for Science, Innovation, and Technology (NACOSTI; permits P/17/46866/17343 effective July 26, 2017; P/18/46866/25344 effective October 18, 2018) for field research. We performed standard surface surveys in collection Area 13 over the course of three field seasons in 2017–2019. Surveys occurred in the fossiliferous locations identified as Cluster-1 and Cluster-2 in Fig. 3. Additional intensive surface crawls were conducted within 50 m of the KNM-ER 2598 cairn. Vertebrate fossils were collected if they were identifiable as primates, mammalian cranial elements, horncores, mammalian teeth that were at least 50% complete, astragali, long bones that were at least 50% complete and/or preserved at least one articular surface, or snake vertebrae. All fossils were found either sitting on the surface or partially embedded in sediments. Sixteen of the fossil teeth were well-preserved enough to sample for stable carbon and oxygen isotopes. Seventeen samples in total were collected, with two samples taken from an alcelaphine specimen represented by both upper and lower molars. Enamel sampling was performed at the National Museums of Kenya. Two to four milligrams of enamel powder were collected from cracks and breaks in each tooth using a high-speed drill, following protocols published elsewhere 44, 57, 58. Export of the enamel powder for isotopic analysis was authorized by the NMK through a Material Transfer Agreement.
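Converting an enamel δ13C value into a percent-C4 diet, as done in the isotope analysis that follows, is a linear two-endmember mixing calculation. A minimal sketch, using the C3 and C4 endmembers stated in this study (−11.9‰ and +3.4‰) and the diet-category cutoffs given in the text; the function names are ours:

```python
def percent_c4(d13c, c3_end=-11.9, c4_end=3.4):
    """Percent C4 vegetation in the diet implied by an enamel delta-13C
    value (permil, VPDB), from a linear two-endmember mixing model."""
    frac = (d13c - c3_end) / (c4_end - c3_end)
    return 100.0 * min(max(frac, 0.0), 1.0)  # clamp to the 0-100% range

def diet_category(pct_c4):
    """Bin a percent-C4 estimate using the study's cutoffs:
    0-25% C3-dominated, >25% to <75% mixed, >75% C4-dominated."""
    if pct_c4 <= 25.0:
        return "C3-dominated"
    return "mixed" if pct_c4 < 75.0 else "C4-dominated"

# The median Area 13 enamel value (+1.4 permil) implies a C4-dominated diet
print(round(percent_c4(1.4), 1), diet_category(percent_c4(1.4)))
```

Applied to the reported enamel range (−1.8 to +2.5‰), every sampled tooth falls in the C4-dominated bin, consistent with the grazer-dominated fauna described above.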
Isotope data were analyzed as described in ref. 59 at the Stable Isotope Laboratory at Lamont-Doherty Earth Observatory. Data were quantitatively compared to published upper Burgi Member isotopic data 44, 45, 46, 47, 58, 60, 61, 62, 63, 64 using non-parametric Kruskal–Wallis comparisons in R (Supplementary Table 2). We use −11.9 and +3.4‰ as C3 and C4 endmembers to calculate the percent C4 diet from the tooth enamel data. These values are based on an atmospheric δ13C value of −6.5‰ and biosynthetic fractionation factors for C3 and C4 plants 65. C3-dominated diets are those that include 0–25% C4 vegetation (−12 to −8‰); mixed diets are >25% to <75% C4 vegetation (>−8‰ to −0.5‰); and C4-dominated diets are those with >75% C4 vegetation (>−0.5‰). The hominin metatarsal (KNM-ER 77071) and ilium (KNM-ER 77072) recovered in Cluster-1 were qualitatively compared with available hominin comparative material. Limited quantitative measures could be extracted due to the fragmentary nature of the fossils. All quantitative measures reported here were collected with Mitutoyo digital calipers. The cranial vault fragments were quantitatively compared to published data 66 in two analyses (Supplementary Note 2, Supplementary Fig. 5). First, the proportions of diploë relative to the inner and outer bone layers of hominin parietal and frontal bones were compared via ternary plot. Second, the absolute thicknesses of hominin frontal and parietal bones were compared among taxa by boxplot. The new fossils from Area 13 are housed at the NMK.

Reporting summary

Further information on research design is available in the Nature Research Reporting Summary linked to this article.

Data availability

All data supporting the findings of this study are available within the paper, its supplementary information files, or as an upload to a data-sharing repository. Figure 3 source data in Google Earth format (.KMZ) are provided in Supplementary Data 1.
Restrictions apply to the availability of hominin scan data figured herein, but these data are available from the corresponding author with the permission of the authorizing third party (museum or individual). Source data are provided with this paper.
A new study verifies the age and origin of one of the oldest specimens of Homo erectus—a very successful early human who roamed the world for nearly 2 million years. In doing so, the researchers also found two new specimens at the site—likely the earliest pieces of the Homo erectus skeleton yet discovered. Details are published today in the journal Nature Communications. "Homo erectus is the first hominin that we know about that has a body plan more like our own and seemed to be on its way to being more human-like," said Ashley Hammond, an assistant curator in the American Museum of Natural History's Division of Anthropology and the lead author of the new study. "It had longer lower limbs than upper limbs, a torso shaped more like ours, a larger cranial capacity than earlier hominins, and is associated with a tool industry—it's a faster, smarter hominin than Australopithecus and earliest Homo." In 1974, scientists at the East Turkana site in Kenya found one of the oldest pieces of evidence for H. erectus: a small skull fragment that dates to 1.9 million years. The East Turkana specimen is only surpassed in age by a 2-million-year-old skull specimen in South Africa. But there was pushback within the field, with some researchers arguing that the East Turkana specimen could have come from a younger fossil deposit and was possibly moved by water or wind to the spot where it was found. To pinpoint the locality, the researchers relied on archival materials and geological surveys.

University of Witwatersrand geologist Silindokuhle Mavuso (left) and study lead author Ashley Hammond at the East Turkana site in Kenya. Credit: A. Hammond/AMNH

"It was 100 percent detective work," said Dan Palcu, a geoscientist at the University of São Paulo and Utrecht University who coordinated the geological work. "Imagine the reinvestigation of a 'cold case' in a detective movie.
We had to go through hundreds of pages from old reports and published research, reassessing the initial evidence and searching for new clues. We also had to use satellite data and aerial imagery to find out where the fossils were discovered, recreate the 'scene,' and place it in a larger context to find the right clues for determining the age of the fossils." Although located in a different East Turkana collection area than initially reported, the skull specimen was found in a location with no evidence of a younger fossil deposit from which it could have washed. This supports the original age given to the fossil. Within 50 meters of this reconstructed location, the researchers found two new hominin specimens: a partial pelvis and a foot bone. Although the researchers say they could be from the same individual, there's no way to prove that after the fossils have been separated for so long. But they might be the earliest postcrania—"below the head"—specimens yet discovered for H. erectus.

Students from the Koobi Fora Field School surveying the East Turkana site in Kenya. Credit: A. Hammond/AMNH

The scientists also collected fossilized teeth from other kinds of vertebrates, mostly mammals, from the area. From the enamel, they collected and analyzed isotope data to paint a better picture of the environment in which the H. erectus individual lived. "Our new carbon isotope data from fossil enamel tell us that the mammals found in association with the Homo fossils in the area were all grazing on grasses," said Kevin Uno, a paleoecologist at Columbia University's Lamont-Doherty Earth Observatory. "The enamel oxygen isotope data suggest it was a relatively arid habitat based on comparisons to other enamel data from this area." The work suggests that this early H. erectus was found in a paleoenvironment that included primarily grazers that prefer open environments to forest areas and was near a stable body of water, as documented by freshwater sponges preserved in the rocks.
Key to the field work driving this study were the students and staff from the Koobi Fora Field School, which provides undergraduate and graduate students with on-the-ground experience in paleoanthropology. The school is run through a collaboration between The George Washington University and the National Museums of Kenya, with instructors from institutions around North America, Europe, and Africa. "This kind of renewed collaboration not only sheds new light on verifying the age and origin of Homo erectus but also promotes the National Museums of Kenya's heritage stewardship in research and training," said Emmanuel Ndiema, the head of archaeology at the National Museums of Kenya.
doi.org/10.1038/s41467-021-22208-x
Medicine
Study highlights the role of astrocytes in the formation of remote memories
Adi Kol et al. Astrocytes contribute to remote memory formation by modulating hippocampal–cortical communication during learning, Nature Neuroscience (2020). DOI: 10.1038/s41593-020-0679-6 Journal information: Nature Neuroscience
http://dx.doi.org/10.1038/s41593-020-0679-6
https://medicalxpress.com/news/2020-09-highlights-role-astrocytes-formation-remote.html
Abstract Remote memories depend on coordinated activity in the hippocampus and frontal cortices, but the timeline of these interactions is debated. Astrocytes sense and modify neuronal activity, but their role in remote memory is scarcely explored. We expressed the G i -coupled designer receptor hM4Di in CA1 astrocytes and discovered that astrocytic manipulation during learning specifically impaired remote, but not recent, memory recall and decreased activity in the anterior cingulate cortex (ACC) during retrieval. We revealed massive recruitment of ACC-projecting CA1 neurons during memory acquisition, which was accompanied by the activation of ACC neurons. Astrocytic G i activation disrupted CA3 to CA1 communication in vivo and reduced the downstream response in the ACC. In behaving mice, it induced a projection-specific inhibition of CA1-to-ACC neurons during learning, which consequently prevented ACC recruitment. Finally, direct inhibition of CA1-to-ACC-projecting neurons spared recent and impaired remote memory. Our findings suggest that remote memory acquisition involves projection-specific functions of astrocytes in regulating CA1-to-ACC neuronal communication. Main Remote memories, weeks to decades long, continuously guide our behavior and are critically important to any organism, as the longevity of a memory is tightly connected to its significance. The ongoing interaction between the hippocampus and frontal cortical regions has been repeatedly shown to transform during the transition from recent (days long) to remote memory 1 , 2 , 3 . However, the exact time at which each region is recruited, the duration for which it remains relevant to memory function and the interactions between these regions are still debated. 
Astrocytes are no longer considered to merely provide homeostatic support to neurons and to encapsulate synapses, as pioneering research has shown that they can sense and modify synaptic activity as an integral part of the ‘tripartite synapse’ 4 . Interestingly, astrocytes exhibit extraordinary specificity in their effects on neuronal circuits 5 at several levels. First, astrocytes differentially affect neurons based on their genetic identity. For example, astrocytes in the striatum selectively respond to, and modulate, the input onto two populations of medium spiny neurons expressing either D1 or D2 dopamine receptors 6 . Similarly, astrocytes differentially modulate the effects of specific inhibitory cell types but not others in the same brain region 7 , 8 , 9 , 10 and selectively affect different inputs to the hippocampus 11 . Second, astrocytes exert neurotransmitter-specific effects on neuronal circuits. For instance, astrocytic activation in the central amygdala specifically depresses excitatory inputs and enhances inhibitory inputs 12 . Finally, astrocytes exhibit task-specific effects in vivo; that is, astrocytic stimulation selectively increases neuronal activity when coupled with memory acquisition, but not in the absence of learning 13 . An intriguing open question is whether astrocytes can also differentially affect neurons based on their distant projection target. The integration of novel chemogenetic and optogenetic tools in astrocyte research allows real-time reversible manipulation of these cells at the population level combined with electrophysiological and behavioral measurements. Such tools were used in brain slices to activate intracellular pathways in astrocytes and showed their ability to selectively modulate the activity of neighboring neurons in the amygdala and striatum 12 , 14 and to induce de novo long-term potentiation in the hippocampus 13 , 15 . 
Importantly, the reversibility of chemogenetic and optogenetic tools enables careful dissection of the effect of astrocytes during different memory stages in behaving animals 16 , 17 . The recruitment of intracellular signaling pathways in astrocytes using such tools is starting to shed light on their complex involvement in memory processes. For example, G q activation in the CA1 during acquisition (but not during recall) results in enhanced recent memory 13 , 15 , whereas G s activation results in recent memory impairment 18 . These findings point to the importance of astrocytes in memory processes, specifically at the time of learning. To explore the role of astrocytes in memory acquisition, we used the designer receptor hM4Di to activate the G i pathway in these cells. We show that this astrocytic modulation in the CA1 during learning results in a specific impairment in remote (but not recent) memory recall, which is accompanied by decreased activity in the ACC at the time of retrieval. In vivo, G i activation in astrocytes disrupts synaptic transmission from the CA3 to the CA1 and reduces downstream recruitment of the ACC. Finally, we reveal a dramatic recruitment of CA1 neurons projecting to the ACC during memory acquisition and a projection-specific inhibition of this population by G i pathway activation in CA1 astrocytes. Indeed, when we directly inhibited only CA1-to-ACC projecting neurons, recent retrieval remains intact, whereas remote memory is impaired. Results G i pathway activation in CA1 astrocytes specifically impairs the acquisition of remote memory To specifically modulate the activity of the G i pathway in CA1 astrocytes, we employed an adeno-associated virus vector (AAV8) encoding the designer receptor exclusively activated by designer drugs (DREADD) hM4Di fused to mCherry under the control of the astrocytic GFAP promoter. 
Stereotactic delivery of this AAV8-GFAP::hM4Di–mCherry vector resulted in CA1-specific expression that was restricted to astrocytic outer membranes (Fig. 1a,b ), with high penetrance (>85% of GFAP cells expressed hM4Di), and the promoter provided almost complete specificity (>95% hM4Di-positive cells were also GFAP-positive) (Extended Data Fig. 1a,b ). Co-staining with the neuronal marker NeuN showed <1% overlap with hM4Di expression (Extended Data Fig. 1c,d ). Fig. 1: Astrocytic G i pathway activation in the CA1 during learning specifically impairs remote contextual memory. a , Bilateral double injection of AAV8-GFAP::hM4Di–mCherry resulted in the selective expression of hM4Di in the CA1. Scale bar, 200 µm. b , hM4Di (red) was expressed in the astrocytic membrane around the soma and in the distal processes. Scale bar, 50 µm. c , Representative images (left) and quantification of c-Fos expression. CNO administration in vivo to mice expressing hM4Di (red) in CA1 astrocytes resulted in a significant increase in c-Fos expression (green) in these astrocytes compared with saline-injected controls (* P < 0.00005, n = 2–4 mice, 6–15 slices per groups). Scale bar, 50 μm. d , Representative images of hM4Di–mCherry and GCaMP6f co-expression in CA1 astrocytes. e , Scheme of the experiment: astrocytes were imaged three times for 3 min each time before and after the application of ACSF (109 ROIs from 5 mice) or CNO (10 μM; 299 ROIs from 8 mice). f , g , CNO application triggered a decrease in baseline intracellular Ca 2+ levels, as reflected by the mode of fluorescence levels (* P < 0.01) ( f ), and reduced the total size of Ca 2+ events in these cells (* P < 0.005) ( g ), compared with astrocytes treated with ACSF. All ROIs are presented as dots in a scatter plot, and the average change (Δ) following treatment is plotted in the inset. h , i , Mice expressing hM4Di in their CA1 astrocytes were injected with either saline ( n = 7) or CNO ( n = 6) 30 min before FC acquisition. 
CNO application before training had no effect on baseline freezing before shock administration or on recent contextual freezing on the next day compared with saline-treated controls ( h ). In contrast, CNO application before training resulted in a >50% impairment (* P < 0.05) in contextual freezing in CNO-treated mice tested 20 days later compared with saline-treated controls ( i , left). An even bigger impairment of >68% (* P < 0.005) was observed 45 days later ( i , right). j , Mice expressing hM4Di in their CA1 neurons were injected with either saline ( n = 9) or CNO ( n = 10) 30 min before FC acquisition. CNO application before training had no effect on baseline freezing before shock administration, but resulted in decreased recent contextual freezing on the next day (* P < 0.005) and decreased remote recall 20 days after that (* P < 0.05) compared with saline-treated controls. k , In the NAPR test, astrocytic G i pathway activation by CNO application before a first visit to a new environment had no effect on recent memory, as reflected by a similar decrease (* P < 0.0001) in the exploration between saline-injected mice ( n = 6) and CNO-treated mice ( n = 8). Example exploration traces and the average change (Δ) in exploration following treatment are shown on the right. l , Astrocytic modulation impaired remote recognition of the environment on the second visit, as reflected by a decrease in exploration only in the saline-injected mice ( n = 7) (* P < 0.01) but not in the CNO-treated mice ( n = 6). Example exploration traces and average decrease (Δ) are shown on the right. Data are presented as the mean ± s.e.m. Source data Full size image Recent work has shown that hM4Di activation in astrocytes mimics the response of these cells to GABAergic stimuli 14 , 19 and induces elevated expression of the immediate-early gene cFos in vivo 14 , 19 , 20 . 
To verify this effect in our hands, mice were injected with clozapine- N -oxide (CNO; 10 mg per kg, intraperitoneally (i.p.)), and brains were collected 90 min later and stained for c-Fos. As expected, CNO dramatically increased c-Fos levels in astrocytes of hM4Di-expressing mice compared with saline-injected controls ( t (9) = 16.7, P = 2.2 × 10 −8 ) (Fig. 1c ). As c-Fos is similarly induced by the recruitment of the G q pathway 13 , 20 , it seems to be an unreliable indicator of the nature of astrocytic activity, as it only indicates the occurrence of a significant modulation. Thus, to better characterize the effect of G i pathway activation in astrocytes at a time frame more relevant to behavioral experiments (executed tens of minutes after CNO administration), we performed prolonged two-photon imaging in brain slices using Ca 2+ levels as a proxy for astrocytic activity. CA1 astrocytes expressing both hM4Di and GCaMP6f were imaged before and after application of artificial cerebrospinal fluid (ACSF) or CNO (10 μM) (Fig. 1d,e ; Extended Data Fig. 1e–h ). CNO application triggered a moderate decrease in baseline intracellular Ca 2+ levels in hM4Di-expressing astrocytes ( t (395) = 1.8, P = 0.033) (Fig. 1f ) and reduced the total size of Ca 2+ events in these cells ( t (400) = 3.5, P = 0.0005) (Fig. 1g ) compared with astrocytes treated with ACSF. We have shown in the past that CNO alone, without expression of designer receptors, has no effect on calcium activity in astrocytes in the same time frame 13 . Thus, we find that the reported initial increase in calcium activity in astrocytes following G i pathway activation 14 , 19 , which is sufficient to induce c-Fos expression in vivo (Fig. 1c ), is later accompanied by a decrease in calcium dynamics, in contrast to G q -mediated astrocytic activation, which results in both acute and minutes-long increases in calcium activity 13 . 
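Group comparisons of this kind (e.g., the Ca 2+ event sizes in CNO- versus ACSF-treated ROIs) are two-sample t tests. A self-contained sketch of the Welch form of that statistic, using invented ROI values rather than the study's data:

```python
# Minimal Welch's t-statistic sketch (stdlib only). The ROI values below are
# made up for illustration; they are not the paper's measurements.
from statistics import mean, variance
from math import sqrt

def welch_t(a, b):
    """Welch's t statistic for two independent samples with unequal variances."""
    va, vb = variance(a), variance(b)          # sample variances (n-1 denominator)
    return (mean(a) - mean(b)) / sqrt(va / len(a) + vb / len(b))

acsf_events = [1.0, 1.2, 0.9, 1.1, 1.3, 1.0]   # hypothetical total Ca2+ event sizes
cno_events  = [0.7, 0.8, 0.6, 0.9, 0.7, 0.8]   # hypothetical, smaller under CNO
t_stat = welch_t(acsf_events, cno_events)       # positive: ACSF > CNO
```

A P value would then come from the t distribution with Welch–Satterthwaite degrees of freedom; libraries such as SciPy handle that step.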
Previous elegant research demonstrated the necessity of normal astrocytic metabolic support to memory and showed that chronic genetic manipulations in astrocytes affect memory acquisition and maintenance 21 . The contribution of astrocytes to remote memory acquisition, however, was never investigated. To address this topic, we took advantage of the temporal flexibility offered by chemogenetic tools, which allows not only cell-type-specific but also memory-stage-specific (for example, during acquisition or recall) reversible modulation of astrocytes. Indeed, we have recently used such techniques to show that activation of the G q pathway in astrocytes enhances recent memory acquisition, but has no effect at the time of memory recall 13 . To test the effect of astrocytic G i pathway modulation on cognitive performance, we first injected mice bilaterally with AAV8-GFAP::hM4Di–mCherry into the dorsal CA1. Three weeks later, CNO (10 mg per kg, i.p.) was administered 30 min before fear conditioning (FC) training, which paired a foot shock with a novel context and an auditory cue. CNO application in GFAP::hM4Di mice had no effect on context exploration (Extended Data Fig. 2a ) or on baseline freezing (Fig. 1h , left) before shock administration. One day later, when CNO was no longer present, mice were placed back in the conditioning context and freezing was quantified. We found no difference in recent memory retrieval between GFAP::hM4Di mice treated with CNO or saline during FC acquisition (Fig. 1h , right). Remarkably, when the same mice were tested in the same context 20 days later, those treated with CNO during conditioning showed a dramatic impairment in memory retrieval ( t (10) = 2.2, P = 0.028) (Fig. 1i , left). This deficiency was still clearly observed 45 days after that, when mice were re-tested in the same context for a third time ( t (11) = 3.5, P = 0.0025) (Fig. 1i , right). 
The effect of CA1 astrocytic manipulation was unique to the hippocampal-dependent contextual memory task, as no effect was observed when the same mice were tested for auditory-cued memory in a novel context. That is, both groups demonstrated similar freezing in response to the tone 1 day after training ( F (1,11) = 94.2, time main effect P = 9.97 × 10 −7 ) and 20 days later ( F (1,11) = 13.4, time main effect P = 0.004) (Extended Data Fig. 2b,c ). To verify that the observed effects are not the result of minor off-target hM4Di expression in neurons, we then tested the effects of inhibition of CA1 neurons on recent and remote memory recall. We injected mice with an AAV5-CaMKII::hM4Di–mCherry vector to induce hM4Di expression in ~20% of CA1 glutamatergic neurons (Extended Data Fig. 2d ). To test the effect of direct neuronal inhibition on recent and remote memory acquisition, we injected CaMKII::hM4Di mice with CNO (10 mg per kg, i.p.) 30 min before FC acquisition. G i pathway activation in neurons had no effect on the exploration of the conditioning cage before tone and shock administration (Extended Data Fig. 2e ) or on baseline freezing levels (Fig. 1j , left). Mice were then fear-conditioned and tested on the next day. As expected, neuronal inhibition during training resulted in impaired contextual freezing 1 day later ( t (17) = 3, P = 0.004) (Fig. 1j , middle). When the same mice were tested in the same context 20 days later, the memory impairment was still apparent ( t (17) = 1.8, P = 0.046) (Fig. 1j , right). No significant effect on auditory-cued memory in a novel context was observed at either the recent or the remote time points, as both groups demonstrated similar freezing in response to the tone (time main effect F (1,17) = 155.4, P = 5.59 × 10 −10 and F (1,17) = 34.7, P = 1.77 × 10 −5 , respectively) (Extended Data Fig. 2f,g ). Thus, neuronal inhibition during acquisition impairs both recent and remote memory. 
Previous reports 22 , 23 , 24 have shown effects specific to remote, but not recent, memory in response to neuronal manipulations at the time of recall. Thus, we next tested the necessity of intact astrocytic function during the retrieval of recent and remote memory. CNO administration during recent and remote recall tests of contextual or auditory-cued memory had no effect on freezing levels compared with saline-injected controls (Extended Data Fig. 2h–j ). This finding is similar to our previously reported lack of effect of G q pathway manipulation during memory recall 13 . Thus, normal astrocytic activity is not required during either recent or remote recall, but only during memory acquisition. To further validate the unexpected effect of astrocytic G i pathway activation during acquisition on remote memory in a less stressful task, we employed the non-associative place recognition (NAPR) paradigm. In this task, mice first explore a novel open field and are then expected to display decreased exploration of this now familiar environment after re-exposure to the same arena. Indeed, GFAP::hM4Di mice injected with either saline or CNO during NAPR acquisition showed a marked decrease in exploration following a second exposure to the square environment to which they were exposed 1 day earlier ( F (1,12) = 45.7, no interaction, time main effect P = 2.01 × 10 −5 ) (Fig. 1k ). In a new cohort of GFAP::hM4Di mice, exploration after the second exposure to a round environment that they had originally explored 4 weeks earlier was markedly reduced in mice injected with saline during NAPR acquisition. However, exploration levels in CNO-treated GFAP::hM4Di mice did not decrease (Fig. 1l , left), which suggests that they did not recall their remote experience in this context. 
These findings were reflected in a significant treatment by time interaction ( F (1,11) = 15.98, P = 0.002), and a post-hoc analysis showed a significant difference between the first and second visit only for the saline-treated group ( P = 0.001). A significant effect was also found for the decrease in exploration between saline-treated mice and CNO-treated mice ( t (11) = −2.8, P = 0.0085) (Fig. 1l , right). To confirm that these mice are still capable of performing NAPR normally when astrocytic activity is intact and to verify the absence of nonspecific long-term effects, we repeated the experiment in a novel trapezoid environment with no CNO administration in the same cohort. The results demonstrated comparable performance between groups ( F (1,11) = 14.89, time main effect P = 0.003, no interaction) (Extended Data Fig. 2k ). To verify that our results did not stem from the CNO application itself, control mice injected with an AAV8-GFAP::eGFP vector were trained in the same behavioral paradigms. CNO administration (10 mg per kg, i.p.) in these mice had no effect on baseline freezing, on recent or remote contextual memory or on performance in the remote NAPR task ( F (1,11) = 58.7, time main effect P = 9.86 × 10 −6 , no interaction) (Extended Data Fig. 3a–d ). Our results show that G i activation in CA1 astrocytes during the acquisition of spatial memory selectively impairs its remote, but not recent, recall, whereas direct neuronal inhibition during acquisition impairs both recent and remote memory. These findings raise two novel hypotheses. First, that the foundation for remote memory is established during acquisition in a parallel process separate to recent memory, and can therefore be independently manipulated. Second, that astrocytes are able to specifically modulate the acquisition of remote memory, with precision not granted by general neuronal inhibition. Both hypotheses were tested below. 
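The NAPR readout above compares the drop in exploration (Δ) between the first and second visits across treatment groups: mice that remember the arena explore it less on re-exposure. A toy illustration of that computation, with invented exploration times:

```python
# Sketch of the NAPR memory readout: per-mouse decrease in exploration between
# visits, averaged per group. All values are hypothetical, for illustration only.
from statistics import mean

def delta_exploration(first_visit, second_visit):
    """Per-mouse decrease in exploration (first - second); positive = recognition."""
    return [f - s for f, s in zip(first_visit, second_visit)]

saline_first, saline_second = [60, 55, 65, 58], [35, 30, 40, 33]  # large drop: recalled
cno_first, cno_second       = [62, 57, 64, 59], [58, 55, 61, 60]  # small drop: impaired

saline_delta = mean(delta_exploration(saline_first, saline_second))
cno_delta = mean(delta_exploration(cno_first, cno_second))
```

The study's actual analysis used a repeated-measures design (treatment × time interaction with post-hoc tests), of which this group-mean Δ is only the simplest summary.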
Astrocytic G i pathway activation during memory acquisition reduces the recruitment of brain regions involved in remote memory during retrieval The transition from recent to remote memory is accompanied by brain-wide reorganization, including the recruitment of frontal cortical regions like the ACC 1 , 2 , 3 , 23 , 25 , 26 , 27 , as indicated by an increased expression of c-Fos 23 , 25 . To gain insight into changes in neuronal activity accompanying the recent and remote retrieval of memories acquired under astrocytic modulation, GFAP::hM4Di mice were injected with saline or CNO before FC acquisition. Brains were then collected 90 min after recent or remote recall, stained for c-Fos and quantified in neurons at the CA1 and the ACC (Fig. 2a ), two areas that are repeatedly implicated in remote memory 2 , 3 . As before, CNO administration to GFAP::hM4Di mice during acquisition had no effect on recent contextual memory (Fig. 2b ), and no changes in c-Fos expression following recent recall in either the CA1 or the ACC were observed (Fig. 2c–e ). Another cohort of GFAP::hM4Di mice was injected with CNO before acquisition, tested for recent memory 24 h later and then for remote recall 21 days after that. Importantly, we replicated our initial finding that astrocytic modulation during acquisition specifically impaired remote, but not recent, contextual memory ( t (9) = 2.6, P = 0.014) (Fig. 2f ). Impaired remote memory was accompanied by reduced c-Fos expression in both the CA1 ( t (7) = 2.6, P = 0.0175) and the ACC ( t (7) = 2.61, P = 0.0175) regions (Fig. 2g–i ). We also performed the same c-Fos quantification in brains collected after the last recall test from the first behavioral experiment (Fig. 1i ) of mice that were injected with CNO >60 days earlier. 
In this experiment, impaired remote recall in GFAP::hM4Di mice treated with CNO during conditioning was also accompanied by reduced c-Fos expression in the CA1 and the ACC compared with saline-treated mice ( t (12) = 2.01, P = 0.029 and t (7) =1.97, P = 0.04, respectively) (Extended Data Fig. 4b ). Fig. 2: Astrocytic G i activation during memory acquisition reduces CA1 and ACC activity at the time of remote recall, but does not affect neurogenesis. a , Schematic displaying the areas used for the quantification of active neurons expressing c-Fos in the CA1 and ACC regions. GFAP::hM4Di mice were injected with CNO ( n = 5) or saline ( n = 5) before FC, and then tested the next day. b , c , No changes were observed in recent memory ( b ) or in the number of neurons active during recall in the CA1 or ACC ( c ). d , e , Representative images of hM4Di (red) and c-Fos (green) in the CA1 ( d ) and ACC ( e ). Other GFAP::hM4Di mice were injected with CNO ( n = 5) or saline ( n = 6) before FC, and then tested the next day and again 21 days later. f , Schematic (top) and quantification (bottom) of the experimental protocol. Left: no changes were observed in recent memory. Right: CNO application before training resulted in >50% reduction (* P < 0.05) in contextual freezing 21 days later compared with saline-treated controls (see g for color key). g , Impaired remote recall was accompanied by a reduced number of c-Fos-expressing neurons in the CA1 and the ACC (* P < 0.05 and * P < 0.01, respectively). h , i , Representative images of the CA1 ( h ) and ACC ( i ). j , Schematic of the experiment: GFAP::hM4Di mice were injected with CNO ( n = 5) or saline ( n = 5) together with BrdU before FC, and then tested the next day. k , l , Images (top) and quantification (bottom) showed no changes in stem cell proliferation (BrdU in red) ( k ) or in the number of young, DCx-positive neurons (white) ( l ). 
m , Schematic of the experiment: GFAP::hM4Di mice were injected with CNO ( n = 5) or saline ( n = 6) and BrdU before FC, and then tested 21 days later. n , o , Images (top) and quantification (bottom) show no changes in stem cell proliferation and differentiation ( n ) or in the number of young, DCx-positive neurons ( o ). Scale bars, 100 μm ( n , inset) or 100 μm (all other panels). Data are presented as the mean ± s.e.m. Source data Full size image In the same mice, we also quantified retrieval-induced c-Fos expression in several additional brain regions known to be involved in memory, such as the dentate gyrus (DG) of the hippocampus, the retrosplenial cortex (RSC) and the basolateral amygdala (BLA). No changes in c-Fos expression in the DG or RSC were observed. BLA c-Fos expression was reduced in GFAP::hM4Di mice treated with CNO ( t (6) = 3, P = 0.011) (Extended Data Fig. 4a,c ). Finally, to exclude any nonspecific effects of CNO itself, we repeated the same experiments in control GFAP::eGFP mice. As before, CNO application alone did not induce a difference in either recent or remote fear memory, and we did not find alterations in c-Fos expression (Extended Data Fig. 4d–k ). Again, we showed that astrocytic G i pathway activation during fear memory acquisition selectively impairs remote recall, but spares recent retrieval. Moreover, 3 weeks after the manipulation, this memory deficiency is accompanied by reduced activity not only in the CA1, where astrocytes were modulated, but also in the ACC. This temporal association, however, does not necessarily indicate causality, and two possible explanations can be offered: (1) that astrocytic G i activation induces a long-term process whose consequences are only observed weeks later or (2) that it acutely impairs the acquisition of remote (but not recent) memory. We tested both options below. 
Modulation of CA1 astrocytes has no effect on hippocampal neurogenesis Our findings of intact recent memory followed by impaired remote memory and reduced hippocampal activity could suggest that astrocytic modulation during acquisition initiates a long-term process that takes weeks to convey its effect. One example of such a process could be hippocampal neurogenesis occurring between recent and remote recall, which has been repeatedly shown to reduce remote memory 28 . Thus, we sought to examine whether astrocytic manipulation induces changes in neurogenesis. To tag newborn cells, we administered 5-bromodeoxyuridine (BrdU; 100 mg per kg, i.p.) together with the CNO or saline injection to GFAP::hM4Di mice 30 min before acquisition, and another dose 2 h after training. Brains from mice tested for recent retrieval were stained for BrdU, thereby tagging the cells added to the DG since the previous day 29 . No changes in proliferation or in the number of cells expressing doublecortin (DCx), a marker of young neurons 3 days to 3 weeks old, were observed (Fig. 2j–l ). Similarly, in brains collected after remote recall, no changes in the survival of cells formed on the day of acquisition 3 weeks previously or in their differentiation fate (as determined by co-staining with the neuronal marker NeuN) were observed. Additionally, no change in the number of young neurons born during these 3 weeks, marked by DCx, was observed (Fig. 2m–o ). CNO application in GFAP::eGFP control mice had no effect on neurogenesis 24 h or 21 days later (Extended Data Fig. 4l–q ). To conclude, astrocytic manipulation in the CA1 had no effect on hippocampal neurogenesis; therefore, an alternative mechanism for the selective impairment of remote memory was subsequently investigated. 
G i pathway activation in CA1 astrocytes prevents the recruitment of the ACC during memory acquisition Our findings showed that remote memory performance and c-Fos levels in the CA1 and the ACC are temporally associated (that is, when remote recall is low, so are c-Fos levels at the time of recall), but it is challenging to conclude which phenomenon underlies the other. Furthermore, the temporal distance between the appearance of these phenotypes and the astrocytic manipulation 3 weeks earlier makes it hard to determine exactly when they were induced. We therefore tested the immediate effects of CA1 astrocytic modulation on neuronal activity during memory acquisition. GFAP::hM4Di mice were injected with saline or CNO before FC acquisition, and brains were collected 90 min later (Fig. 3a ). CNO administration had no effect on foot-shock-induced immediate freezing (Extended Data Fig. 5a ). To control for the effect of astrocytic manipulation on neuronal activity, independent of learning, we manipulated astrocytes in home-caged mice. c-Fos expression was quantified in the CA1, the ACC, the BLA, the DG and the RSC. FC acquisition induced an overall increase in c-Fos expression in the CA1, the ACC and the BLA ( F (1,21) = 8.1, P = 0.01; F (1,17) = 5.07, P = 0.038; F (1,16) = 9.07, P = 0.008, respectively), but not in the DG or the RSC (Fig. 3b–d ; Extended Data Fig. 5b–h ). Astrocytic manipulation in the CA1 did not substantially affect local neuronal c-Fos expression in this region in either home-caged or fear-conditioned mice (Fig. 3b,c ; Extended Data Fig. 5c ). To verify that the increase in c-Fos in the ACC following acquisition does not represent astrocytic activation in this region, we co-stained for c-Fos and GFAP and found only a negligible number of c-Fos-expressing astrocytes (Extended Data Fig. 5i ). Fig. 3: Astrocytic G i activation in the CA1 prevents the recruitment of the ACC during memory acquisition and inhibits CA1 to ACC communication. 
a , Schematic of the experiment: GFAP::hM4Di mice were injected with CNO ( n = 9) or saline ( n = 9) 30 min before FC, and brains were removed 90 min later for c-Fos quantification. b , Fear-conditioned GFAP::hM4Di mice showed increased c-Fos levels in the CA1 compared with home-caged (HC) mice (* P < 0.01), but CNO administration had no effect on either group. cFos levels in the ACC were increased in GFAP::hM4Di that underwent conditioning after being injected with saline (* P < 0.05), but not in CNO-injected mice. Data are presented as the mean ± s.e.m. c , d , Representative images of hM4Di (red) and c-Fos (green) in the CA1 ( c ) and ACC ( d ) of fear-conditioned mice. c-Fos-expressing astrocytes are observed below and above the CA1 pyramidal layer in CNO-treated mice. e , Schematic of the experiment: AAV5-CaMKII::ChR2–eYFP was injected into the CA3 and AAV8-GFAP::hM4Di-mCherry was injected into the CA1. f , Representative image of ChR2–eYFP expressed in the soma of CA3 pyramidal cells. g , Representative image of ChR2-expressing axons (green) in the CA1 stratum radiatum and hM4Di-expressing astrocytes (red) in the CA1. h , Schematic of the experimental set up: light was applied to the CA1 in anesthetized mice. The response to Schaffer collateral optogenetic stimulation was simultaneously recorded in the CA1 and the ACC after saline administration, followed by CNO administration. i , j , The response in the CA1 to Schaffer collateral optogenetic stimulation had a smaller amplitude under G i pathway activation by CNO in CA1 astrocytes ( n = 4 mice; * P < 0.05) ( j ). The average responses ( i ) from one mouse under saline and then under CNO are presented (average in a bold line, s.e.m. in shadow, blue light illumination in semitransparent blue). k , l , A downstream response of CA1 activation by Schaffer collateral optogenetic stimulation was detected in the ACC. 
The mean absolute value of the complex ACC response was found to have a significantly smaller amplitude under G i pathway activation by CNO in CA1 astrocytes ( n = 5 mice; * P < 0.01) ( l ). The average responses ( k ) from one mouse under saline and then under CNO are presented (average in a bold line, s.e.m. in shadow). Scale bars, 50 μm (applicable to all images). Source data Full size image Surprisingly, G i activation in CA1 astrocytes substantially reduced the learning-induced elevation in c-Fos expression in the ACC, where no direct manipulation took place (Fig. 3b,d ; Extended Data Fig. 5d ). This result was reflected by a significant treatment by behavior interaction ( F (1,17) = 5.04, P < 0.05; FC–saline versus FC–CNO post-hoc, P < 0.05). The effect was specific to the ACC and not observed in other non-manipulated regions such as the BLA, the DG or the RSC (Extended Data Fig. 5e–h ). The finding that astrocytic G i pathway activation in the CA1 prevented the recruitment of the ACC during learning suggests a functional CA1→ACC connection, which can be modulated by hippocampal astrocytes. The existence of a monosynaptic CA1→ACC projection had been demonstrated 30 , and a functional connection was reported using electrical stimulation 31 . To generate synaptic input to the CA1, we expressed channelrhodopsin-2 (ChR2) in the CA3 (Fig. 3e,f ), a major CA1 input source. ChR2-expressing axons from the CA3 were observed in the CA1 stratum radiatum, and hM4Di was concomitantly expressed in CA1 astrocytes (Fig. 3e,g ). Importantly, no fluorescence was detected in the ACC, as there is no direct CA3→ACC projection (Extended Data Fig. 5j,k ). Light was applied to the CA1 in anesthetized mice via a fiber coupled to an electrode recording the neuronal response in the CA1 (Fig. 3h ). A second electrode was placed in the ACC to record the downstream response to CA1 activation (Fig. 3h ; Extended Data Fig. 5j,k ). Recordings were performed after saline administration i.p. 
and then after CNO administration i.p. Optogenetic stimulation of the Schaffer collaterals induced a local response in the CA1, which was moderately but significantly reduced by CNO injection (paired t (3) = 2.6, P = 0.04) (Fig. 3i,j ). Astrocytic manipulation in the CA1 had a dramatic effect on the downstream response in the ACC to stimulation of the Schaffer collaterals, as reflected by significantly attenuated field excitatory postsynaptic potentials (fEPSPs) following CNO administration (paired t (4) = 3.8, P = 0.01) (Fig. 3k,l ). These results suggest that astrocytic manipulation in the CA1 can indeed modulate functional connectivity from the CA1 to the ACC. We showed that G i pathway activation in CA1 astrocytes during fear memory acquisition prevents the recruitment of the ACC, without having a significant effect on local neuronal activity in the CA1, and that CA1 astrocytes can indeed modulate the functional CA1→ACC connectivity. These findings suggest that astrocytic manipulation selectively blocks the activity of CA1 neurons projecting to the ACC, resulting in a significant effect on ACC activity, but only a moderate influence on total CA1 activity. G i activation in CA1 astrocytes during memory acquisition specifically prevents the recruitment of CA1 neurons projecting to the ACC From our findings that G i activation in CA1 astrocytes during learning prevents the recruitment of the ACC, and that CA1 astrocytes are able to modulate CA1→ACC functional connectivity, we hypothesized that astrocytic G i activation can selectively prevent the recruitment of CA1 neurons projecting to the ACC, without similarly affecting other CA1 neurons. To directly test this hypothesis, we tagged these projection neurons, measured their recruitment during memory acquisition, and assessed how it is affected by astrocytic G i activation.
Mice were bilaterally injected with a retro-AAV inducing the expression of the Cre recombinase in excitatory neurons (AAV-retro-CaMKII::iCre) into the ACC and with a Cre-dependent virus inducing the expression of green fluorescent protein (GFP) (AAV5-ef1α::DIO–GFP) into the CA1. AAV8-GFAP::hM4Di–mCherry was simultaneously injected into the CA1 to allow astrocytic manipulation (Fig. 4a ). Together, these three vectors induced the expression of GFP only in CA1 neurons projecting to the ACC and of hM4Di in hippocampal astrocytes (Fig. 4b,c ). These mice were injected with saline or CNO 30 min before FC acquisition or in their home cage, and brains were collected 90 min later. As in the previous experiment, CNO administration had no effect on immediate freezing following shock administration, and FC acquisition induced an overall increase in c-Fos expression in the CA1 ( F (1,21) = 12.9, P = 0.002). Moreover, astrocytic modulation was not sufficient to significantly reduce c-Fos expression in the CA1 (Fig. 4d,e ). Furthermore, as before, modulation of CA1 astrocytes significantly reduced the learning-induced elevation in c-Fos expression in the ACC ( t (13) = 1.78, P = 0.049) (Fig. 4e ). Fig. 4: G i pathway activation in CA1 astrocytes during memory acquisition specifically prevents the recruitment of CA1 neurons projecting to the ACC. a , Schematic of the experiment: AAV-retro-CaMKII::iCre was injected into the ACC, and AAV5-ef1α::DIO–GFP together with AAV8-GFAP::hM4Di–mCherry were injected into the CA1. b , Together, these three vectors induced the expression of GFP (green) in CA1 neurons projecting to the ACC and hM4Di (red) in CA1 astrocytes. c , GFP-positive axons of CA1 projection neurons are clearly visible in the ACC. d , Mice expressing GFP in ACC-projecting CA1 neurons and hM4Di in their CA1 astrocytes that were injected with CNO ( n = 8) or saline ( n = 7) 30 min before FC showed similar immediate freezing following shock administration.
e , Fear-conditioned mice showed increased c-Fos levels in the CA1 compared with home-caged mice (* P < 0.05), with no effect of CNO administration. c-Fos levels in the ACC were increased in mice that underwent conditioning after being injected with saline (* P < 0.05), but not in CNO-injected mice. f , Fear-conditioned mice injected with saline showed a >130% increase in the percent of CA1 cells projecting into the ACC that express c-Fos compared with home-caged mice (* P < 0.05). CNO administration completely abolished the recruitment of these cells during learning (see e for the color key). g , h , Representative images of hM4Di in astrocytes (red), GFP in ACC-projecting CA1 neurons (green) and c-Fos (pink) in the CA1 of saline-injected mice ( g ) or CNO-injected mice ( h ). i , Schematic of the experiment: AAV-retro-CaMKII::iCre was injected into the NAc, and AAV5-ef1α::DIO–GFP together with AAV8-GFAP::hM4Di–mCherry were injected into the CA1. j , Together, these three vectors induced the expression of GFP (green) in CA1 neurons projecting to the NAc, and hM4Di (red) in CA1 astrocytes. k , GFP-positive axons of CA1 projection neurons are clearly visible in the NAc. l , Mice expressing GFP in NAc-projecting CA1 neurons and hM4Di in their CA1 astrocytes that were injected with CNO ( n = 10) or saline ( n = 8) 30 min before FC showed similar immediate freezing following shock administration. m , Fear-conditioned mice showed increased c-Fos levels in the NAc compared with home-caged mice (* P < 0.05), with no effect of CNO administration. n , Fear-conditioned mice injected with either saline or CNO showed a >60% increase in the percent of CA1 cells projecting into the NAc that express c-Fos compared with home-caged mice (* P < 0.05). CNO administration had no effect on the recruitment of these cells during learning (see m for the color key). Scale bars, 50 μm (applicable to all images). Data are presented as the mean ± s.e.m.
When specifically observing the subpopulation of CA1 neurons projecting to the ACC, these cells were dramatically recruited during memory acquisition, and astrocytic modulation significantly reduced the learning-induced c-Fos elevation in this population (Fig. 4f ). Specifically, in saline-treated mice, >15% of CA1→ACC cells expressed c-Fos following learning, whereas in CNO-treated GFAP::hM4Di mice, <5% of CA1→ACC cells were active after learning, a level as low as that of home-caged mice (Fig. 4f–h ; Extended Data Fig. 6a,b ). This effect resulted in a significant treatment by behavior interaction ( F (1,21) = 6.67, P = 0.017; FC–saline versus FC–CNO post-hoc P = 0.001). Finally, to test the specificity of our findings, we similarly tested an additional monosynaptic projection from the CA1 terminating at the nucleus accumbens (NAc). Mice were bilaterally injected with AAV-retro-CaMKII::iCre into the NAc, together with AAV5-ef1α::DIO–GFP and AAV8-GFAP::hM4Di–mCherry into the CA1, to tag CA1 neurons projecting to the NAc and to enable G i pathway activation in CA1 astrocytes (Fig. 4i–k ). As in the previous experiment, CNO administration before FC acquisition had no effect on immediate freezing (Fig. 4l ). Activity in the NAc increased following FC ( F (1,22) = 4.37, P = 0.048), but, importantly, modulation of CA1 astrocytes had no effect on c-Fos expression after learning in this region (Fig. 4m ; Extended Data Fig. 6d,f ). When we specifically tested c-Fos expression in the subpopulation of NAc-projecting CA1 neurons, we found that these neurons are only moderately recruited by learning ( F (1,23) = 4.41, P < 0.047), and that astrocytic modulation had no effect on their activity (Fig. 4n ; Extended Data Fig. 6c,e ). To conclude, we found that G i pathway activation in CA1 astrocytes specifically prevents the exceptional recruitment of CA1→ACC-projecting neurons during memory acquisition.
The fact that the inhibition of this projection is induced by the same manipulation that specifically impairs remote memory acquisition suggests that the activity of CA1→ACC neurons during memory acquisition is necessary for remote recall. Specific inhibition of CA1 neurons projecting to the ACC impairs the acquisition of remote, but not recent, memory To specifically manipulate CA1→ACC neurons, mice were bilaterally injected with AAV-retro-CaMKII::iCre into the ACC and with a Cre-dependent hM4Di virus (AAV5-ef1α::DIO–hM4Di–mCherry) into the CA1 (Fig. 5a ). Together, these vectors induced the expression of hM4Di–mCherry only in CA1 neurons projecting to the ACC (Fig. 5b,c ). Three weeks later, mice were injected with saline or CNO 30 min before FC acquisition. CNO application in CA1→ACC–hM4Di mice had no effect on the exploration of the conditioning cage before shock administration (Extended Data Fig. 7a ), on baseline freezing before shock delivery or on recent memory (Fig. 5d , left and middle). However, when the same mice were tested in the same context 20 days later, those treated with CNO during conditioning demonstrated impaired remote retrieval ( t (16) = 1.8, P = 0.048) (Fig. 5d , right). The effect of specific CA1→ACC neuron inhibition was unique to the hippocampal-dependent contextual memory task, as no effect was observed when the same mice were tested for auditory-cued memory in a novel context. That is, both groups demonstrated similar freezing in response to the tone 1 day after training and 20 days later ( F (1,16) = 147.8, P = 1.7 × 10 −9 ; F (1,16) = 37.8, P = 1.4 × 10 −5 , time main effect, respectively) (Extended Data Fig. 7b,c ). Fig. 5: Specific inhibition of CA1-to-ACC projection during learning impairs the acquisition of remote, but not recent, memory. a , Schematic of the experiment: AAV-retro-CaMKII::iCre was injected into the ACC, and AAV5-ef1α::DIO–hM4Di–mCherry was injected into the CA1.
b , Together, these vectors induced the expression of hM4Di–mCherry (red) in CA1 neurons projecting to the ACC. c , hM4Di–mCherry-positive axons of CA1 projection neurons are clearly visible in the ACC. d , Mice expressing hM4Di in their ACC-projecting CA1 neurons were injected with either saline ( n = 9) or CNO ( n = 9) 30 min before FC acquisition. CNO application before training had no effect on baseline freezing (left) before shock administration or on recent contextual freezing (middle) the next day, but induced a significant decrease (* P < 0.05) 20 days later compared with saline-treated controls (right). e , Active neurons expressing c-Fos were quantified in the CA1 and ACC regions. Impaired remote recall was accompanied by a reduced number of c-Fos-expressing neurons in the CA1 and the ACC (* P < 0.05 for both). f , CNO administration reduced the recruitment of CA1→ACC cells during remote recall (* P < 0.03; see e for the color key). g , h , Representative images of hM4Di (red) and c-Fos (green) in the CA1 ( g ) and ACC ( h ). Scale bars, 100 μm (applicable to all images). Data are presented as the mean ± s.e.m. Finally, to gain insight into changes in neuronal activity accompanying this impaired remote retrieval of memories acquired under CA1→ACC-projection inhibition, brains were collected 90 min after remote recall and stained for c-Fos. We found that the impaired remote memory was accompanied by reduced c-Fos expression in both the CA1 ( t (15) = −2.2, P = 0.022) and the ACC ( t (14) = −2.4, P = 0.015) regions (Fig. 5e,g,h ). When specifically observing the CA1→ACC neurons manipulated 3 weeks earlier, we found significantly reduced c-Fos expression ( t (14) = −2, P = 0.033) (Fig. 5f,g ). In this experiment, we directly demonstrated the involvement of CA1→ACC neurons in establishing the foundation for remote memory during acquisition, as suggested by the effect of astrocytes on this process.
Discussion Recent years have seen a surge of discoveries of previously unknown, elaborate roles for astrocytes in the modulation of neuronal activity and plasticity 21 . In this work, we showed that these cells can confer specific effects on neurons in their vicinity based on the distant projection targets of those neurons. Specifically, astrocytic G i activation during memory acquisition impairs remote, but not recent, memory retrieval. Another novel finding we presented is the massive recruitment of ACC-projecting CA1 neurons during memory acquisition, a process that is specifically inhibited by astrocytic manipulation, thus preventing successful recruitment of the ACC during learning. Finally, we directly inhibited this projection to prove its necessity for the formation of remote memory. Chemogenetic and optogenetic tools, which were originally developed for use in neurons and allow real-time, reversible, cell-specific manipulation, are now integrated into astrocyte research. Chemogenetic tools recruit intracellular pathways in astrocytes to induce clear behavioral effects, which vary greatly depending on the modulated cellular mechanism 12 , 13 , 14 , 18 , 32 . For example, in our hands, G q pathway activation in astrocytes (via hM3Dq) leads to recent memory enhancement 13 , whereas in this work, we report that G i pathway activation has no effect on recent memory and specifically impairs remote memory. In contrast to these clear differences in the downstream physiological and behavioral effects of astrocytic manipulation, the intracellular calcium dynamics recorded in astrocytes in reaction to very different stimuli are very much alike. For example, despite the fact that the G q and G i pathways are endogenously recruited by the administration of different neurotransmitters (for example, G q by acetylcholine and G i by GABA), c-Fos expression in astrocytes is similarly induced by activating either of these pathways 13 , 14 , 19 , 20 (see also Fig.
1c ), making it a good indicator of the occurrence of a modulation but not of its precise nature. Similarly, chemogenetic activation of either the G q or the G i pathway induced an increase in intracellular calcium in astrocytes 13 , 14 , 19 , 20 . However, whereas G q pathway activation results in a long-lasting increase of Ca 2+ activity 13 , we found that the effect of G i pathway activation wanes over time, and on a behaviorally relevant time scale even decreases slightly. The discrepancy between the clear downstream functional differences of astrocytic modulation by G q and G i DREADDs, and the similarity in calcium responses to these stimuli, may be resolved in the future by advanced imaging and analysis methods that provide insight into the intricacies of calcium signals in these cells 33 . Previous evidence suggests that astrocytes can have projection-specific effects based on either the input source or the output target of their neighboring neurons, but with some caveats. For example, in the central amygdala, astrocytic activation depressed inputs from the BLA and enhanced inputs from the central-lateral amygdala 12 . However, since the former projection is excitatory and the latter inhibitory, this finding could reflect specificity to the secreted neurotransmitter rather than to the projection source. Additionally, astrocytes in the striatum specifically modulate either the direct or indirect pathways 6 . Nonetheless, since the populations of striatal neurons from which these two projections originate differ genetically (expressing either the D1 or D2 dopamine receptors), it is impossible to determine whether the specificity that astrocytes demonstrate stems from surface protein expression in these neurons or their projection target.
Similarly, astrocytes in the DG may differentially affect input from the medial perforant path, but the terminals of this pathway differ from the lateral perforant path in their exclusive expression of the GluN3a NMDA subunit 11 . Here, we showed that the differential effects of astrocytic modulation on CA1 pyramidal cells are based exclusively on their projection target. These cells may differ from other CA1 cells in the configuration of input they receive, in their activity pattern and possibly even in hitherto unidentified genetic properties. The leading hypothesis in the memory field is that the hippocampus has a time-limited role in memory in that it is required for acquisition and recent recall, but becomes redundant for remote recall, being replaced by frontal cortices 2 . However, this temporal separation between the hippocampus and frontal cortex is not so rigid. For example, we and others have shown that the hippocampus is still critically involved in the consolidation and retrieval of remote memory (for examples, see refs. 23 , 25 , 34 , 35 , 36 ). Current research now attempts to define the temporal dynamics in different brain regions underlying remote memory 25 , 36 . The evidence regarding the role of frontal cortices during acquisition is mixed. For example, inhibition of medial entorhinal cortex input into the prefrontal cortex (PFC) during acquisition specifically impaired remote memory 37 . Conversely, inhibition of the PFC during acquisition had no effect on remote recall, nor did activation during remote recall of PFC neurons that were active during acquisition 38 . The role of the ACC in remote memory retrieval was repeatedly demonstrated by the finding that ACC inhibition during recall impairs remote but not recent memory in multiple tasks 22 , 23 , 24 , 39 , 40 . However, the time point at which the ACC is recruited to support remote memories was never defined. 
Here, we showed that the ACC is recruited at the time of initial acquisition, but the significance of this early activity is only revealed at the remote-recall time point. We further demonstrated that there is massive recruitment of ACC-projecting CA1 cells during learning, and that specific inhibition of this projection at this time point by astrocytes prevents the engagement of the ACC during acquisition, which results in impaired remote (but not recent) memory. When nonspecific CA1 inhibition is induced by direct neuronal G i pathway activation, both recent and remote memory are impaired. In this work, we revealed another novel capacity of astrocytes: they affect their neighboring neurons based on their projection target. This finding further expands the repertoire of sophisticated ways by which astrocytes shape neuronal networks and consequently higher cognitive function. Methods Mice Male C57BL/6 mice, 6–7 weeks old (Harlan), were group-housed on a 12-h light/dark cycle with ad libitum access to food and water. Experimental protocols were approved by the Hebrew University Animal Care and Use Committee and met the guidelines of the National Institutes of Health Guide for the Care and Use of Laboratory Animals. Mice were randomly assigned to experimental groups. Virus production The pAAV-CaMKII-eGFP plasmid was made by first replacing the CMV promoter in a pAAV-CMV-eGFP vector with the CaMKII promoter. The pAAV-CaMKII-iCre plasmid was made by replacing the eGFP gene in the above plasmid with the coding region of iCre (Addgene, 51904). Both pAAV-CaMKII-eGFP and pAAV-CaMKII-iCre plasmids were then packaged into an AAV2-retro serotype viral vector. Similarly, the pAAV-EF1-DIO–eGFP (Addgene, 37084) plasmid was used to make the AAV5-EF1-DIO–eGFP viral vector. The above viral vectors were prepared at the ELSC Vector Core Facility (EVCF) at the Hebrew University of Jerusalem.
Viral vectors The following dilutions and volumes of vectors were used: AAV8-GFAP:: hM4D(G i )–mCherry (UNC vector core, titer 7 × 10 12 , diluted 1:10 in PBS when injected alone and 1:10 in other vectors when injected with AAV5-EF1α::DIO–GFP, 700 nl per site); AAV8-GFAP::eGFP (EVCF, titer 4.1 × 10 12 , diluted 1:10 in PBS, 700 nl per site); AAV5-CaMKIIa::hChR2 (H134R)–eYFP (UNC vector core, titer 1.2 × 10 12 , 250 nl per site); AAV5-EF1α::DIO–GFP (EVCF, titer 1.1 × 10 13 , 500 nl per site); AAV2-retro-CaMKII-iCre (EVCF, titer 7 × 10 12 , 400 nl per site); AAV5-CaMKII::hM4Di–mCherry (EVCF, titer 1.1 × 10 13 , 500 nl per site); AAV5-EF1α::DIO-hM4D(Gi)–mCherry (EVCF, titer 4.3 × 10 12 , 500 nl per site); and AAV5-GfABC1D::cytoGCaMP6F (Penn Vector Core, titer 6.13 × 10 13 , 400 nl per site). Stereotactic virus injection Mice were anesthetized with isoflurane and their head placed in a stereotactic apparatus (Kopf Instruments). The skull was exposed, and a small craniotomy was performed. To cover the entire dorsal CA1, mice were bilaterally microinjected using the following coordinates (two sites per hemisphere): site 1: anterior–posterior (AP), −1.5 mm from bregma, medial–lateral (ML), ±1 mm, dorsal–ventral (DV), −1.55 mm; site 2: AP, −2.5 mm, ML, ±2 mm, DV, −1.55 mm. For the ACC the following coordinates were used: AP, 0.25 mm, ML, ±0.4 mm, DV, −1.8 mm. For optogenetic activation of the Schaffer collaterals, mice were bilaterally microinjected into the CA3 using the following coordinates: AP, −1.85, ML, ±2.35, DV, −2.25. All microinjections were carried out using a 10-µl syringe and a 34-gauge metal needle (WPI). The injection volume and flow rate (0.1 μl min –1 ) were controlled by an injection pump (WPI). Following each injection, the needle was left in place for an additional 10 min to allow for diffusion of the viral vector away from the needle track, and was then slowly withdrawn. The incision was closed using Vetbond tissue adhesive. 
For postoperative care, mice were subcutaneously injected with Rimadyl (5 mg per kg). A list of all vectors is provided above. Verification of hM4Di–mCherry expression spread The expression area of hM4Di–mCherry was measured in all GFAP–hM4Di-expressing mice. Mice with no expression were excluded from analysis. The average spread area (×1,000 pixels) was found for the following figures: Fig. 1d,e : 191 ± 46; Fig. 2b–e : 178 ± 18; Fig. 3a–d : 173 ± 30; and Fig. 4a–h : 229 ± 24. No significant differences were detected between the various experiments (one-way analysis of variance (ANOVA), F (3,72) = 2.63, P > 0.05). Ca 2+ imaging in hippocampal slices Coronal hippocampal slices (300 μm) were made from 11–12-week-old mice. Animals were anesthetized with isoflurane and the brain was swiftly removed, mounted frontal-side up and sliced in ice-cold oxygenated low-calcium ACSF (126 mM NaCl, 2.6 mM KCl, 26 mM NaHCO 3 , 1.25 mM NaH 2 PO 4 , 10 mM glucose, 1 mM MgCl 2 , 0.625 mM CaCl 2 . The pH of the ACSF was set to 7.3 and the osmolarity to 305–320 mOsm. ACSF was oxygenated and pH buffered by constant bubbling with a gas mixture of 95% O 2 /5% CO 2 ) using a vibratome (Campden Instruments). Slices were then incubated for 1 h in a holding chamber with oxygenated normal calcium ACSF (126 mM NaCl, 2.6 mM KCl, 26 mM NaHCO 3 , 1.25 mM NaH 2 PO 4 , 10 mM glucose, 1 mM MgCl 2 , 2 mM CaCl 2 . The pH of the ACSF was set to 7.3 and the osmolarity to 305–320 mOsm. ACSF was oxygenated and pH buffered by constant bubbling with a gas mixture of 95% O 2 /5% CO 2 ) at 35 °C and then stored at 32 °C. Individual slices were transferred to a submerged recording chamber (32 °C), and astrocytes expressing both hM4D(G i )–mCherry and GCaMP6f were selected for imaging. Imaging was performed with a low-power temporal oversampling two-photon microscope (LotosScan2015, Suzhou Institute of Biomedical Engineering and Technology).
mCherry and GCaMP6f were excited at 920 nm with a Ti:sapphire laser (Vision II, Coherent) and imaged through a ×25, 1.05 numerical aperture (NA) water-immersion objective (Olympus). Red and green fluorescence signals were collected via two different photomultiplier tubes. Full-frame images (600 × 600 pixels) were acquired at 20 frames per second. Image acquisition was performed using LabView-based software (LotosScan). Astrocytes were imaged three times for 3 min separated by a 1-min interval to determine baseline Ca 2+ levels and activity. CNO or ACSF was then added to the chamber, and imaging (three times for 3 min separated by a 1-min interval) was resumed after a 10-min break. Signal processing and analysis were conducted using ImageJ (NIH) and Matlab. Temporal series were imported into ImageJ, and then astrocytic somas and their main branches were identified by their GCaMP6f and mCherry co-expression, as well as their activity (measured by the standard deviation), and manually segmented as regions of interest (ROIs). To determine the baseline intracellular calcium levels, we calculated the mode for each 3-min imaging epoch per ROI, then averaged the three epochs before and three epochs after the addition of CNO or ACSF. To quantify Ca 2+ events, we computed the integral of the Z score for each 3-min imaging epoch from the fluorescence ( F ) signal in each ROI. The Z score was calculated as ( F − µ )/ σ , where µ and σ are the mean and standard deviation, respectively, defined from the baseline histogram of F (<90th percentile). Negative Z scores were zeroed. We then averaged the Z scores of the three epochs before and three epochs after the addition of CNO or ACSF. Epochs for which the F signal throughout the 3 min had standard deviation values lower than 1 were assigned a Z score of 0 for the entire epoch.
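The per-epoch event quantification described above lends itself to a short code sketch. The original analysis was done in ImageJ and Matlab; below is an illustrative, stdlib-only Python version (the function name and structure are our own, not the authors' code): baseline µ and σ are estimated from fluorescence values below the 90th percentile, negative Z scores are zeroed, epochs whose raw signal has a standard deviation below 1 score 0, and the Z trace is integrated at the stated 20 frames per second.

```python
from statistics import mean, pstdev

def epoch_event_score(f, frame_rate=20.0):
    """Integral of the non-negative Z score over one 3-min imaging epoch.

    Baseline mean/s.d. are taken from the fluorescence values below the
    epoch's 90th percentile; negative Z scores are zeroed; epochs whose
    raw signal has s.d. < 1 are assigned a score of 0, as in the text.
    """
    if pstdev(f) < 1.0:                           # low-variance epoch -> 0
        return 0.0
    cutoff = sorted(f)[int(0.9 * len(f))]         # 90th-percentile cutoff
    baseline = [x for x in f if x < cutoff]       # baseline histogram of F
    mu, sigma = mean(baseline), pstdev(baseline)
    if sigma == 0:                                # guard: flat baseline
        return 0.0
    z = [max((x - mu) / sigma, 0.0) for x in f]   # zero negative Z scores
    return sum(z) / frame_rate                    # discrete-time integral
```

A flat, low-variance epoch therefore scores 0, while an epoch containing transients yields a positive score proportional to the area of its suprabaseline Z trace.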
ROIs for which no active epochs were detected before and after manipulation were excluded from the analysis. Immunohistochemistry Three weeks post-injection, mice were transcardially perfused with cold PBS followed by 4% paraformaldehyde (PFA) in PBS. The brains were extracted, postfixed overnight in 4% PFA at 4 °C and cryoprotected in 30% sucrose in PBS. Brains were sectioned to a thickness of 40 μm using a sliding freezing microtome (Leica SM 2010R) and preserved in a cryoprotectant solution (25% glycerol and 30% ethylene glycol in PBS). Free-floating sections were washed in PBS, incubated for 1 h in blocking solution (1% BSA and 0.3% Triton X-100 in PBS) and incubated overnight at 4 °C with primary antibodies (see below for a full list of antibodies) in blocking solution. For the c-Fos staining, slices were incubated with the primary antibody for five nights at 4 °C. Sections were then washed with PBS and incubated for 2 h at room temperature with secondary antibodies (see below for a full list of antibodies) in 1% BSA in PBS. Finally, sections were washed in PBS, incubated with 4′,6-diamidino-2-phenylindole (DAPI; 1 µg ml –1 ) and mounted on slides with mounting medium (Fluoromount-G, eBioscience). For neurogenesis staining, BrdU (Sigma, 100 mg per kg) was injected i.p. together with the CNO injection and 2 h after the FC training. At 90 min after recent or remote recall, brains were removed and slices prepared as described above. Sections were fixed in 50% formamide and 50% SSC for 2 h at 65 °C, then incubated in 2 N HCl for 30 min at 37 °C and neutralized in boric acid for 10 min. After PBS washes, sections were blocked in 1% BSA with 0.1% Triton-X for 1 h at room temperature. Sections were incubated with anti-BrdU for 48 h at 4 °C. Sections were then washed with PBS and incubated with a secondary antibody for 2 h at room temperature. Antibodies Primary antibodies The following primary antibodies were used: chicken anti-GFAP (Millipore, catalog no.
AB5541; diluted 1:500) 13 ; rabbit anti-NeuN (Cell Signaling Technology, catalog no. 12943; diluted 1:400) 13 ; rat anti-BrdU (Bio-Rad, catalog no. OBT0030G; diluted 1:200) 29 ; guinea pig anti-DCx (Millipore, catalog no. AB2253; diluted 1:1,000) 29 ; and rabbit anti-c-Fos (Synaptic Systems, catalog no. 226 003; diluted 1:10,000) 13 . Secondary antibodies The following secondary antibodies were used, all from Jackson Laboratories: donkey anti-chicken (conjugated to Alexa Fluor 488, catalog no. 703-545-155; diluted 1:500); donkey anti-rabbit (conjugated to Alexa Fluor 488, catalog no. 711-545-152; diluted 1:500); donkey anti-goat (conjugated to Alexa Fluor 594, catalog no. 705-585-147; diluted 1:400); donkey anti-guinea pig (conjugated to Cy5, catalog no. 706-605-148; diluted 1:400); and donkey anti-rat (conjugated to Cy5, catalog no. 712-175-153; diluted 1:400). Confocal microscopy Confocal fluorescence images were acquired on an Olympus laser-scanning microscope (Fluoview FV1000) using ×4 and ×10 air objectives or ×20 and ×40 oil-immersion objectives. Image analysis was performed using either ImageJ (NIH) or Fluoview Viewer v.4.2 (Olympus). Cells were counted in a blinded manner. Behavioral testing The FC apparatus consisted of a conditioning box (18 × 18 × 30 cm), with a grid floor wired to a shock generator surrounded by an acoustic chamber (Ugo Basile) and controlled by EthoVision software (Noldus). Three weeks after injections, mice were placed in the conditioning box for 2 min, and then a pure tone (2.9 kHz) was sounded for 20 s followed by a 2-s foot shock (0.4 mA). This procedure was then repeated, and 30 s after the delivery of the second shock, mice were returned to their home cages. FC was assessed through the continuous measurement of freezing (complete immobility), which is the dominant behavioral fear response. Freezing was automatically measured throughout the testing trial using EthoVision tracking software.
To test contextual FC, mice were placed in the original conditioning box, and freezing was measured for 5 min. To test auditory-cued FC, mice were placed in a different context (a cylinder-shaped cage with stripes on the walls and a smooth floor), freezing was measured for 2.5 min and then a 2.9-kHz tone was sounded for 2.5 min, during which conditioned freezing was measured. Mice were tested for recent memory 24 h after acquisition and for remote memory 21 or 28 days later. In one experiment, an additional remote memory test was performed 66 days after acquisition. The NAPR test was conducted in a round plastic arena (54 cm in diameter) or in a square or trapezoid arena of identical area (2,290 cm 2 ). Mice were placed in the center of the arena and allowed to freely explore for 5 min. Habituation to the familiar environment (reduced exploration between first and second exposures) was measured using EthoVision tracking software. CNO (Tocris) was dissolved in dimethylsulfoxide (DMSO) and then diluted in 0.9% saline solution to yield a final DMSO concentration of 0.5%. Saline solution for control injections also consisted of 0.5% DMSO. CNO (10 mg per kg) was injected i.p. 30 min before the behavioral assays. In the relevant experiments, BrdU (Sigma, B5002; 100 mg per kg) was injected i.p. together with the CNO or saline and 2 h after the behavioral experiment. In vivo electrophysiology and optogenetics Simultaneous optical stimulation of the Schaffer collaterals and electrical recordings in the CA1 and the ACC were performed as follows. Mice were anesthetized with isoflurane, and an optrode (an extracellular tungsten electrode (1 MΩ, ~125 µm) glued to an optical fiber (200-µm core diameter, 0.39 NA) with the tip of the electrode protruding ~400 µm beyond the fiber end) was used to record local field potentials in the stratum radiatum and to illuminate the Schaffer collaterals.
fEPSP recordings were conducted with the optrode initially placed above the dorsal CA1 (AP: −1.6 mm; ML: 1.1 mm; DV: −1.1 mm) and gradually lowered in 0.1-mm increments into the stratum radiatum (−1.55 mm). The optical fiber was coupled to a 473-nm solid-state laser diode (Laserglow Technologies) with ~10 mW of output from the fiber. fEPSP recordings from the ACC were similarly performed using an extracellular tungsten electrode (1 MΩ, ~125 µm) placed over the ACC (AP: 0.25 mm; ML: 0.4 mm; DV: −1.3 mm) and gradually lowered in 0.1-mm increments to −1.8 mm DV. This electrode was dipped in DiI (1 mg per 1.5 ml in 99% ethanol; Invitrogen) to validate the position of the recording site. To optogenetically activate the Schaffer collaterals, blue light (473 nm) was unilaterally delivered through the optrode. The photostimulation duration was 10 ms, delivered 72 times for each treatment (saline or CNO) every 5 s. Saline or CNO was injected i.p., and recording was started 30 min after each injection. Recordings were carried out using a Multiclamp 700B patch-clamp amplifier (Molecular Devices). Signals were low-pass filtered at 5 kHz, digitized and sampled through an AD converter (Molecular Devices) at 10 kHz and stored for offline analysis using Matlab (Mathworks). CA1 responses to Schaffer collateral stimulation were quantified by calculating the amplitude of the fEPSPs relative to the mean baseline levels, defined as a 200-ms time window before photostimulation. CA1 activation by Schaffer collateral stimulation resulted in complex downstream activity in the ACC that lasted ~400 ms. Because this signal had both positive and negative peaks, to estimate the overall magnitude of the response, we calculated its mean absolute value over the entire 400-ms period, from the beginning of photostimulation in the CA1.
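The two quantification rules just described can be condensed into one function. The actual analysis was done in Matlab; the Python sketch below (names and structure are illustrative, not the authors' code) computes the CA1 fEPSP amplitude relative to the mean of a 200-ms pre-stimulus baseline, and the mean absolute value of the baseline-subtracted ACC response over the 400 ms following photostimulation onset, at the stated 10-kHz sampling rate.

```python
def quantify_responses(trace, stim_idx, fs=10_000):
    """Quantify one sweep sampled at fs Hz.

    CA1 metric: peak deflection relative to the mean of a 200-ms
    pre-stimulus baseline (fEPSP amplitude). ACC metric: mean absolute
    value of the baseline-subtracted signal over 400 ms from stimulation
    onset. `trace` is a list of voltages; `stim_idx` is the sample index
    of light onset.
    """
    base = trace[stim_idx - 200 * fs // 1000 : stim_idx]          # 200-ms baseline
    baseline = sum(base) / len(base)
    post = trace[stim_idx : stim_idx + 400 * fs // 1000]          # 400-ms response
    amplitude = max(abs(v - baseline) for v in post)              # CA1 metric
    mean_abs = sum(abs(v - baseline) for v in post) / len(post)   # ACC metric
    return amplitude, mean_abs
```

Taking the mean absolute value, rather than a single peak, is what makes the ACC metric robust to the mixed positive and negative deflections noted in the text.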
Statistical analysis The results of automatic or blind measurements were analyzed using two-way ANOVA followed by least significant difference post-hoc tests or using Student's t-test, both one-sided, as applicable. Data distribution was assumed to be normal, but this was not formally tested. No statistical methods were used to predetermine sample sizes, but our sample sizes were similar to those reported in previous publications 13 . Reporting Summary Further information on research design is available in the Nature Research Reporting Summary linked to this article. Data availability The data used to support the conclusions of this study are publicly available at , and as indicated in the Nature Research Reporting Summary . Source data are provided with this paper. Code availability Analysis codes will be made available to any interested reader.
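The one-sided Student's t-test mentioned above can be illustrated in pure Python (the ANOVA and post-hoc comparisons would normally be run in a statistics package; the freezing scores and the df = 8 critical value below are purely illustrative, not data from the paper):

```python
import math
from statistics import mean, variance

def students_t(a, b):
    """Pooled-variance Student's t statistic for two independent samples."""
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / math.sqrt(sp2 * (1 / na + 1 / nb))

# Hypothetical % time freezing for two groups (values invented)
control = [45.0, 52.0, 48.0, 50.0, 47.0]
cno     = [38.0, 35.0, 40.0, 33.0, 36.0]

t = students_t(control, cno)
# One-sided test at alpha = 0.05 with df = 8: critical t is about 1.860
significant = t > 1.860
```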
Memories from a distant past, also known as remote memories, can guide the present and future behavior of humans and other living organisms on Earth. In psychology and neuroscience, the term "remote memories" refers to all memories related to events that took place from a few weeks to decades in the past. Many past studies have explored the neural underpinnings of remote memories or tried to identify brain regions that could be involved in how they are formed and maintained over time. So far, most findings have suggested that the interaction between the hippocampus and frontal cortical brain regions plays a key role in the consolidation of these memories. Past observations suggest that the interaction between these brain regions changes as time goes by and as memories go from being recent (i.e., a few years old) to remote. The exact time when these brain regions become involved in the formation of a memory and for how long they remain important to its endurance, however, is still poorly understood. Astrocytes are star-shaped cells that are known to have several functions, including regulating metabolism, aiding detoxification and tissue repair, and providing nutrients to neurons. Recent studies have found that these cells can also change synaptic activity in the brain, thus impacting neuronal circuits at multiple levels. Researchers at the Hebrew University of Jerusalem have recently carried out a study aimed at exploring the role of astrocytes in memory acquisition. In their paper, published in Nature Neuroscience, they present a number of new observations that shed light on the unique contribution of these cells to enabling the formation of remote memories in mice, and potentially also humans. To investigate the role of astrocytes in memory formation, the researchers used a series of chemogenetic and optogenetic techniques.
These techniques allow neuroscientists to manipulate astrocytes and other types of brain cells in reversible ways and to observe an animal's behavior when specific cell populations are active or inactive. In their study, the researchers used them to activate specific designer receptor pathways in astrocytes within the mouse brain. They then observed the effects of this activation on the animals' behavior and on their ability to form memories. "We expressed the Gi-coupled designer receptor hM4Di in CA1 astrocytes and discovered that astrocytic manipulation during learning specifically impaired remote, but not recent, memory recall and decreased activity in the anterior cingulate cortex (ACC) during retrieval," the researchers wrote in their paper. The new findings represent a significant step forward in the understanding of astrocytes and their unique functions. Overall, they provide further evidence that astrocytes can shape neuronal networks in intricate ways and affect many cognitive functions, including the acquisition of remote memories. More specifically, the researchers observed that when a mouse was acquiring a new memory, ACC-projecting CA1 neurons were recruited in large numbers and neurons in the ACC were simultaneously activated. When they activated Gi pathways in astrocytes using chemogenetic techniques, however, the communication between CA3 and CA1 neurons was disturbed, which prevented activation in the ACC. As a result of this astrocytic intervention, the recruitment of the CA1-to-ACC projection typically observed while an animal is learning was suppressed. This, in turn, appeared to impair the mice's ability to acquire remote memories. The findings suggest that astrocytes play an important role in the formation of remote memories in mice and potentially humans, via their ability to regulate the projection of neurons onto other areas of the brain.
In this particular instance, they may control the communication between CA1 neurons and the ACC, which seems to be essential for remote memory formation. "We revealed another capacity of astrocytes: They affect their neighboring neurons based on their projection target," the researchers concluded in their paper. "This finding further expands the repertoire of sophisticated ways by which astrocytes shape neuronal networks and consequently high cognitive function."
10.1038/s41593-020-0679-6
Medicine
Researchers program cancer-fighting cells to resist exhaustion, attack solid tumors in mice
Rachel C. Lynn et al, c-Jun overexpression in CAR T cells induces exhaustion resistance, Nature (2019). DOI: 10.1038/s41586-019-1805-z Journal information: Nature
http://dx.doi.org/10.1038/s41586-019-1805-z
https://medicalxpress.com/news/2019-12-cancer-fighting-cells-resist-exhaustion-solid.html
Abstract Chimeric antigen receptor (CAR) T cells mediate anti-tumour effects in a small subset of patients with cancer 1 , 2 , 3 , but dysfunction due to T cell exhaustion is an important barrier to progress 4 , 5 , 6 . To investigate the biology of exhaustion in human T cells expressing CAR receptors, we used a model system with a tonically signaling CAR, which induces hallmark features of exhaustion 6 . Exhaustion was associated with a profound defect in the production of IL-2, along with increased chromatin accessibility of AP-1 transcription factor motifs and overexpression of the bZIP and IRF transcription factors that have been implicated in mediating dysfunction in exhausted T cells 7 , 8 , 9 , 10 . Here we show that CAR T cells engineered to overexpress the canonical AP-1 factor c-Jun have enhanced expansion potential, increased functional capacity, diminished terminal differentiation and improved anti-tumour potency in five different mouse tumour models in vivo. We conclude that a functional deficiency in c-Jun mediates dysfunction in exhausted human T cells, and that engineering CAR T cells to overexpress c-Jun renders them resistant to exhaustion, thereby addressing a major barrier to progress for this emerging class of therapeutic agents. Main CAR-expressing T cells demonstrate impressive response rates in B cell malignancies, but fewer than 50% of patients experience long-term disease control 11 , 12 and CAR T cells have not mediated sustained responses in solid tumours 3 . Several factors limit the efficacy of CAR T cells, including a requirement for high antigen density for optimal CAR function enabling rapid selection of antigen loss or antigen low variants 12 , 13 , 14 , the suppressive tumour microenvironment 15 and intrinsic T cell dysfunction due to T cell exhaustion 6 , 11 , 16 . 
T cell exhaustion has been increasingly incriminated as a cause of CAR T cell dysfunction 6 , 11 , 16 , 17 , raising the prospect that engineering exhaustion-resistant CAR T cells could improve clinical outcomes. T cell exhaustion is characterized by high expression of inhibitory receptors and widespread transcriptional and epigenetic alterations 4 , 5 , 7 , 18 , 19 , but the mechanisms responsible for impaired function in exhausted T cells are unknown. Blockade of PD-1 can reinvigorate some exhausted T cells 20 but does not restore function fully, and trials using PD-1 blockade in combination with CAR T cells have not demonstrated efficacy 21 . Using a model in which healthy T cells are driven to exhaustion by the expression of a tonically signalling CAR, exhausted human T cells demonstrated widespread epigenomic dysregulation of AP-1 transcription factor-binding motifs and increased expression of the bZIP and IRF transcription factors that have been implicated in the regulation of exhaustion-related genes. Therefore, we tested the hypothesis that dysfunction in this setting resulted from an imbalance between activating and immunoregulatory AP-1–IRF complexes by inducing overexpression of c-Jun—an AP-1 family transcription factor associated with productive T cell activation. Consistent with this hypothesis, overexpression of c-Jun rendered CAR T cells resistant to exhaustion, as demonstrated by enhanced expansion potential in vitro and in vivo, increased functional capacity, diminished terminal differentiation and improved anti-tumour potency in multiple in vivo models. HA-28z CAR rapidly induces T cell exhaustion Exhaustion in human T cells was recently demonstrated after expression of a CAR incorporating the disialoganglioside (GD2)-specific 14g2a scFv, CD3ζ and CD28 signalling domains (GD2-28z), as a result of tonic signalling mediated via antigen-independent aggregation 6 . 
Here we show that CARs incorporating the 14g2a-E101K scFv, which demonstrate higher affinity for GD2 22 (HA-28z), display a more severe exhaustion phenotype (Extended Data Fig. 1a–c ). In contrast to CD19-28z CAR T cells (without tonic signalling), HA-28z CAR T cells develop profound features of exhaustion, including reduced expansion in culture, increased expression of inhibitory receptors, exaggerated effector differentiation, and diminished IFNγ and markedly decreased IL-2 production after stimulation (Fig. 1a–d , Extended Data Fig. 1d, e ). The functional defects are due to exhaustion-associated dysfunction rather than suboptimal interaction of the HA-28z CAR with its target GD2, because they are also observed in CD19-28z CAR T cells when HA-28z CAR is co-expressed using a bi-cistronic vector (Extended Data Fig. 1f ). Principal component analysis (PCA) of RNA-sequencing (RNA-seq) data demonstrated that the strongest driver of transcriptional variance was the presence of the exhausting HA-28z versus control CD19-28z CAR (Fig. 1e ), although some cell-type-specific differences were observed (Extended Data Fig. 1g ). Fig. 1: HA-28z CAR T cells manifest phenotypic, functional, transcriptional and epigenetic hallmarks of T cell exhaustion. a , Primary T cell expansion. Data are mean ± s.e.m. from n = 10 independent experiments. b , Surface expression of exhaustion-associated markers. c , Surface expression of CD45RA and CD62L to distinguish T memory stem cells (CD45RA + CD62L + ), central memory cells (CD45RA − CD62L + ), and effector memory cells (CD45RA − CD62L − ). d , IL-2 (left) and IFNγ (right) release after 24-h co-culture with CD19 + GD2 + Nalm6-GD2 leukaemia cells. Data are mean ± s.d. from triplicate wells. In b – d , one representative donor (of n = 10 experiments) is shown for each assay. P values determined by unpaired two-tailed t -tests. 
e , Principal component analysis (PCA) of global transcriptional profiles of naive- and central-memory-derived CD19-28z (CD19) or HA-28z (HA) CAR T cells at days 7, 10 and 14 in culture. PC1 (39.3% variance) separates CD19-28z from HA-28z CAR T cells. f , Gene expression of the top 200 genes driving PC1. Genes of interest in each cluster are listed. g , Differentially accessible chromatin regions (peaks) in CD8 + CD19-28z and HA-28z CAR T cells. Both naive and central memory cell subsets are incorporated for each CAR. h , PCA of ATAC-seq chromatin accessibility in CD19-28z or HA-28z CAR T cells. PC1 (76.9% variance) separates CD19-28z from HA-28z CAR samples. i , Global chromatin accessibility profile of CD4 + and CD8 + CD19-28z and HA-28z CAR T cells derived from naive (N) and central memory (CM) subsets. Top 5,000 peaks. j , Differentially accessible enhancer regions in CD19- and HA-28z CAR T cells in the CTLA4 (top) or IL7R (bottom) loci. Unless noted otherwise, all analyses were done on day 10 of culture. GITR is also known as TNFRSF18 . Source data Full size image The top 200 most differentially expressed genes (Fig. 1f , Supplementary Table 1 ) included activation-associated genes ( IFNG , GZMB and IL2RA ), inhibitory receptors ( LAG3 and CTLA4 ) and inflammatory chemokines or cytokines ( CXCL8 , IL13 and IL1A ), and genes associated with naive and memory T cells ( IL7R , TCF7 , LEF1 and KLF2 ), which overlapped with gene sets described in chronic lymphocytic choriomeningitis virus (LCMV) mouse models 4 (Extended Data Fig. 1h ). Single-cell RNA-seq analysis of GD2-28z versus CD19-28z CAR T cells revealed similar differential gene expression as HA-28z CAR T cells (Extended Data Fig. 2 ). T cell exhaustion is associated with widespread epigenetic changes 18 , 20 . Using ATAC-seq (assay for transposase-accessible chromatin using sequencing) analysis 23 (Fig. 1g , Extended Data Figs. 
3 , 4a ), we observed that CD8 + HA-28z CAR T cells displayed more than 20,000 unique differentially accessible chromatin regions (peaks) compared with less than 3,000 unique peaks in CD8 + CD19-28z CAR T cells (false discovery rate (FDR) < 0.1 and log 2 -transformed fold change > 1). Principal component analysis (PCA) revealed HA-28z versus CD19-28z CAR as the strongest driver of differential chromatin states (PC1 variance 79.6%, Fig. 1h ), with weaker but observable differences observed between naive and central memory cells (PC2 variance 7.4%), and CD4 versus CD8 subsets (PC3 variance 6.5%; Extended Data Fig. 4b ). Clustering the top 5,000 differentially accessible regions revealed a similar epigenetic state in HA-28z CAR T cells regardless of the starting subset (Fig. 1i ). HA-28z CAR T cells demonstrated increased chromatin accessibility near exhaustion-associated genes such as CTLA4 , and decreased accessibility near memory-associated genes such as IL7R (Fig. 1j ). Together, these data suggest that tonically signalling CAR T cells are a valid model for the study of human T cell exhaustion. Epigenetic and transcriptional dysregulation of AP-1 Using ChromVAR 24 and transcription factor motif enrichment analysis to identify transcriptional programs associated with the epigenetic changes observed, we discovered that the AP-1–bZIP and bZIP–IRF binding motifs were the most significantly enriched in exhausted CAR T cells (Fig. 2a, b , Extended Data Fig. 4c, d ), with strong enrichment of NF-κB, NFAT and RUNX transcription factor motifs in some clusters, reproducing epigenetic signatures of exhaustion observed in other models 18 , 20 , 25 . Paired RNA-seq analysis across three donors revealed increased bZIP and IRF mRNA in HA-28z versus CD19-28z CAR T cells, most significantly for JUNB , FOSL1 , BATF , BATF3 , ATF3 , ATF4 and IRF4 (Fig. 2c , Extended Data Fig. 4e ). 
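The differential-accessibility call described above (peaks with FDR < 0.1 and log2-transformed fold change > 1) is, at its core, a threshold filter over a peak table. A minimal sketch with invented peak names and values, using |log2 FC| so that peaks gained in either CAR are counted:

```python
# Hypothetical peak table: (peak_id, log2_fc_HA_vs_CD19, fdr). Values invented.
peaks = [
    ("CTLA4_enhancer", 2.4, 0.01),   # more accessible in HA-28z CAR T cells
    ("IL7R_enhancer", -1.8, 0.03),   # less accessible in HA-28z CAR T cells
    ("GAPDH_promoter", 0.2, 0.90),   # not differentially accessible
]

def differentially_accessible(peaks, fdr_cut=0.1, lfc_cut=1.0):
    """Peaks passing the thresholds used above: FDR < 0.1 and |log2 FC| > 1."""
    return [pid for pid, lfc, fdr in peaks if fdr < fdr_cut and abs(lfc) > lfc_cut]

hits = differentially_accessible(peaks)
```

In practice this filter would be applied to the output of a peak-calling and differential-testing pipeline (e.g. count-based testing with FDR correction), not to a hand-written list.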
We confirmed increased protein expression of JunB, IRF4 and BATF3, with higher relative levels of the BATF and IRF4 transcription factors than the canonical AP-1 factor c-Jun (Fig. 2d , Extended Data Fig. 4f ). Transcription factors of the AP-1 family form homo- and heterodimers through interactions in the common bZIP domain that compete for binding at DNA elements containing core TGA-G/C-TCA consensus motifs and can complex with IRF transcription factors 7 , 9 . The classic AP-1 heterodimer c-Fos–c-Jun drives transcription of IL2 , whereas complexes containing other AP-1 and IRF family members can antagonize c-Jun activity and/or drive immunoregulatory gene expression in T cells 7 , 8 , 9 , 10 , 26 , 27 , 28 . Co-immunoprecipitation analysis demonstrated increased levels of complexed JunB, BATF, BATF3 and IRF4 in HA-28z CAR T cells (Fig. 2e ) and single-cell RNA-seq analysis of CD19-28z and GD2-28z CAR T cells confirmed that the bZIP family members JUN , JUNB , JUND and ATF4 were among the most differentially expressed and broadly connected in exhausted GD2-28z CAR T cell networks (Fig. 2f , Extended Data Figs. 2 , 4g ). We observed a similar pattern of AP-1 and BATF/IRF4 imbalance in a single-cell gene expression dataset from patients with metastatic melanoma undergoing treatment with immune checkpoint blockade 29 (Extended Data Fig. 4h ). Fig. 2: AP-1 family signature in exhausted CAR T cells. a , Top 25 transcription factor motif deviation scores in day 10 HA-28z (HA) versus CD19-28z (CD19) CAR T cells by chromVAR analysis. b , Top transcription factor (TF) motifs enriched in naive CD8 + HA-28z CAR T cells. c , Bulk RNA-seq expression (fold change HA/CD19) of indicated AP-1–bZIP and IRF family members in CD19-28z (black) and HA-28z (blue) CAR T cells. Data are mean ± s.e.m. from n = 6 samples across three donors. * P < 0.05, ** P < 0.01, *** P < 0.001, two-tailed ratio t -tests (see Supplementary Information for exact P values). 
d , Increased protein expression of c-Jun, JunB, BATF3 and IRF4 in HA-28z versus CD19-28z CAR T cells at days 7, 10 and 14 of culture determined by immunoblotting. e , Immunoprecipitation of c-Jun and JunB complexes demonstrates that HA-28z CAR T cells contain more c-Jun heterodimers, as well as JunB–IRF4, JunB–BATF and JunB–BATF3 heterodimers, than CD19-28z CAR T cells. f , Correlation network of exhaustion-related transcription factors in naive-derived CD4 + GD2-28z CAR T cells using single-cell RNA-seq analysis. For gel source data, see Supplementary Fig. 1 . Source data Full size image c-Jun overexpression prevents CAR T cell exhaustion We hypothesized that T cell dysfunction in exhausted cells might be due to a relative deficiency in c-Jun–c-Fos AP-1 heterodimers. Indeed, HA-28z CAR T cells overexpressing AP-1 demonstrated increased production of IL-2, which required c-Jun but not c-Fos (Extended Data Fig. 4i-m ), whereas no benefit was observed in CD19-28z CAR T cells. To further investigate c-Jun overexpression in exhausted T cells, we created JUN-P2A-CAR bi-cistronic vectors (Fig. 3a, b ) and demonstrated enhanced c-Jun N-terminal phosphorylation (JNP) only in JUN-HA-28z (Fig. 3c ), consistent with JNK kinase activation via the HA-28z-associated tonic signal 30 . Antigen-stimulated JUN-HA-28z CAR T cells demonstrated remarkably increased production of IL-2 and IFNγ, although no significant differences were observed in JUN-CD19-28z CAR T cells (Fig. 3d, e , Extended Data Fig. 5b, c ). We also observed enhanced functional activity of JUN-HA-28z CAR T cells at the single-cell level in both CD4 + and CD8 + CAR T cells, and c-Jun overexpression increased the frequency of stem-cell-memory and central-memory versus effector and effector-memory subsets in CD4 + and CD8 + populations of exhausted CAR T cells, but not in healthy CAR T cells (Fig. 3f , Extended Data Fig. 5d, e ). 
Together, the data are consistent with a model in which c-Jun overexpression is functionally more significant in exhausted T cells, which express higher levels of immunomodulatory bZIP and IRF transcription factors. Fig. 3: c-Jun overexpression enhances the function of exhausted CAR T cells. a , JUN-P2A-HA-CAR expression vector. HTM, hinge/transmembrane; ICD, intracellular domain. b , Intracellular c-Jun expression in control (Ctrl) and JUN CAR T cells at day 10 by flow cytometry. Grey denotes isotype control. c , Immunoblot for total c-Jun and phosphorylated c-Jun (p-c-Jun(S73)) in control (Ctrl) and JUN CAR T cells at day 10. d , e , IL-2 ( d ) and IFNγ ( e ) production after 24-h co-culture of control or JUN CD19-28z and HA-28z CAR T cells in response to antigen-positive tumour cells. Data are mean ± s.d. of triplicate wells. P values determined by unpaired two-tailed t -tests. One representative donor. Fold change across n = 8 donors in Extended Data Fig. 5 . f , Left, flow cytometry showing representative expression of CD45RA and CD62L in control and JUN CAR T cells at day 10. Right, relative frequency of effector (E; CD45RA + CD62L − ), stem-cell memory (SCM; CD45RA + CD62L + ), central memory (CM; CD45RA − CD62L + ), and effector memory (EM; CD45RA − CD62L − ) in CD8 + control or JUN-HA-28z CAR T cells ( n = 6 donors from independent experiments). Lines indicate paired samples from the same donor. P values determined by paired two-tailed t -tests. g , On day 39, 1 × 10 6 viable T cells from Extended Data Fig. 5f were re-plated and cultured for 7 days with or without IL-2. h , Control or JUN CD19-28z or CD19-BBz CAR T cells from g were cryopreserved on day 10 and later thawed, rested overnight in IL-2 and 5 × 10 6 cells were then injected intravenously into healthy NSG mice. On day 25 after infusion, peripheral blood T cells were quantified by flow cytometry. Data are mean ± s.e.m. of n = 5 mice per group. P values determined by unpaired two-tailed t -tests. 
Overexpression of c-Jun also enhanced long-term proliferative capacity, which is associated with anti-tumour effects in solid tumours 31 , in CAR T cells without tonic signalling (CD19-28z, CD19-BBz) (Extended Data Fig. 5f ). Enhanced proliferation remained IL-2-dependent, as expansion immediately ceased after IL-2 withdrawal (Fig. 3g , Extended Data Fig. 5g ). Expanding CD8 + JUN-CD19-28z CAR T cells displayed diminished exhaustion markers and an increased frequency of cells bearing the stem-cell memory phenotype compared with control CD19-28z CAR T cells (Extended Data Fig. 5h–j ). c-Jun overexpression also increased homeostatic expansion of both CD19-28z and CD19-BBz CAR T cells in tumour-free NSG (NOD–SCID Il2rg -null) mice (Fig. 3h ), which led to accelerated GVHD in the JUN-CD19-BBz CAR-T-cell-treated mice. Together, the data demonstrate that c-Jun overexpression mitigates T cell exhaustion in numerous CARs tested, including those incorporating CD28 or 4-1BB costimulatory domains, and regardless of whether exhaustion is driven by long-term expansion or tonic signalling. Molecular mechanisms of c-Jun in exhaustion To explore the mechanism by which c-Jun overexpression prevents T cell dysfunction, we compared ATAC-seq and RNA-seq results of HA-28z and JUN-HA-28z CAR T cells. Overexpression of c-Jun did not change the epigenetic profile but substantially modulated the transcriptome, with 319 genes differentially expressed in JUN and control HA-28z CAR T cells (Extended Data Fig. 6a–c ), including reduced expression of exhaustion-associated genes and increased expression of memory genes. Using DAVID, we confirmed that genes changed by c-Jun are highly enriched for AP-1 family binding sites (Extended Data Fig. 6d ), which suggests that gene expression changes were mediated by AP-1 family transcription factors.
We postulated that c-Jun overexpression could rescue exhausted T cells by direct transcriptional activation of AP-1 target genes and/or by indirectly disrupting immunoregulatory AP-1–IRF transcriptional complexes 7 , 9 that drive exhaustion-associated gene expression (AP-1i) (Extended Data Fig. 6e ). To test these non-mutually-exclusive hypotheses, we first evaluated a panel of c-Jun mutants predicted to be deficient in transcriptional activation (JUN-AA, JUN-Δδ, JUN-ΔTAD), DNA binding (JUN-Δbasic) or dimerization (JUN-ΔLeu, JUN-ΔbZIP) 32 , 33 , 34 (Fig. 4a , Extended Data Fig. 6f ). JUN-AA and JUN-Δδ both equivalently increased IL-2 and IFNγ production compared to wild-type c-Jun in HA-28z CAR T cells (Fig. 4b ), whereas JUN-ΔTAD demonstrated partial rescue in IL-2 production. Conversely, C-terminal mutants (JUN-Δbasic, JUN-ΔLeu and JUN-ΔbZIP), which were unable to bind chromatin (Extended Data Fig. 6g ), did not rescue cytokine production in exhausted HA-28z CAR T cells (Fig. 4b ). Furthermore, c-Jun overexpression substantially decreased levels of AP-1i, as evidenced by diminished mRNA levels (Extended Data Fig. 6h ), reduced total and chromatin-bound JunB, BATF and BATF3 proteins, and reduced JunB–BATF complexes (Extended Data Fig. 7a–c ). Importantly, c-Jun-mediated displacement of JunB, BATF and BATF3 from chromatin and reduced JunB–BATF complexes were dependent on the ability of c-Jun to partner with AP-1 family members (Fig. 4c, d ). Consistent with this, c-Jun and IRF4 chromatin immunoprecipitation followed by high-throughput sequencing (ChIP–seq) analysis identified no novel c-Jun-binding sites after c-Jun overexpression. 
Instead, the vast majority of sites bound by c-Jun are also bound by IRF4 (and probably BATF), consistent with c-Jun overexpression increasing binding almost exclusively at AP-1–IRF composite elements, including near exhaustion-associated genes regulated by IRF4, and genes associated with increased T cell proliferation and functional activation (Extended Data Fig. 7d–h ). Finally, JunB-knockout, BATF-knockout and especially IRF4-knockout significantly increased IL-2 and IFNγ production in HA-28z CAR T cells (Fig. 4e , Extended Data Fig. 7i, j ). Time-course experiments using a drug regulatable expression model of c-Jun revealed that full rescue required c-Jun overexpression during both T cell expansion and antigen stimulation (Extended Data Fig. 8 ), consistent with a model in which c-Jun overexpression both modulates molecular reprogramming during the development of exhaustion and augments responses during acute stimulation downstream of antigen encounter. Together, the data are consistent with a model in which an overabundance of AP-1–IRF complexes drives the exhaustion transcriptional program and c-Jun overexpression prevents exhaustion by decreasing and/or displacing AP-1i complexes from chromatin. Fig. 4: c-Jun functional rescue of exhaustion requires bZIP dimerization but is independent of transactivation. a , Schematic of c-Jun protein showing N-terminal transactivation domain (TAD) and C-terminal bZIP domain deletion mutants. Red asterisks denote JNP sites at Ser63 and Ser73 mutated to alanine in JUN-AA. b , IL-2 (left) and IFNγ (right) production by control or JUN-HA-28z CAR T cells expressing the indicated c-Jun variant after 24-h stimulation with Nalm6-GD2 (N6-GD2) or 143B target cells. Data are mean ± s.d. of triplicate wells; representative of three independent experiments. c , Immunoblot of indicated AP-1 and IRF proteins in control, JUN-WT or JUN-ΔbZIP HA-28z CAR T cells in soluble or chromatin-bound lysis fractions. 
d , Immunoblot of indicated AP-1 and IRF proteins in control, JUN-WT or JUN-ΔbZIP HA-28z CAR T cells in total lysate (right) or after JunB immunoprecipitation (IP, left). e , Fold change (FC) in IL-2 (top) and IFNγ (bottom) production in AP-1 or IRF4 CRISPR-knockout (KO) HA-28z CAR T cells after 24-h stimulation with Nalm6-GD2 or 143B target cells. Fold change in cytokine production is normalized to control HA-28z CAR T cells. Data are mean ± s.e.m. of n = 6 independent experiments. P values determined using nonparametric Mann–Whitney U tests. Source data Full size image JUN CAR T cells enhance anti-tumour activity in vivo Using a Nalm6-GD2 + leukaemia model, we confirmed functional superiority of JUN-HA-28z CAR T cells in vivo (Fig. 5a–c ), which required c-Jun dimerization but not transactivation (Extended Data Fig. 7k, l ). In an in vitro model of limiting antigen dilution, JUN-HA-28z CAR T cells produced greater maximal IL-2 and IFNγ and manifested a lower threshold for antigen-induced IL-2 secretion (Fig. 5d, e ). Limiting target antigen expression is increasingly recognized to limit CAR functionality as observed after treatment of CD22-BBz-CAR T cells in patients with relapsed or refractory leukaemia 12 , 13 , 35 . We therefore assessed whether c-Jun overexpression could enhance the capacity to target antigen-low tumour cells. In response to CD22 low leukaemia, JUN-CD22-BBz CAR T cells exhibited increased cytokine production in vitro (Fig. 5f–h ) and markedly increased anti-tumour activity in vivo (Fig. 5i–l ) compared with control CD22-BBz CAR T cells. Similar results were observed in a CD19 low Nalm6 leukaemia model (Extended Data Fig. 9a–f ). Fig. 5: JUN-modified CAR T cells increase in vivo activity against leukaemia and enhance T cell function under suboptimal stimulation. a – c , NSG mice were injected intravenously with 1 × 10 6 Nalm6-GD2 leukaemia cells, and then 3 × 10 6 mock, HA-28z or JUN-HA-28z CAR + T cells were given intravenously on day 3. 
a , c , Tumour progression was monitored using bioluminescent imaging. Scales are normalized for all time points. D, day. b , JUN-HA-28z CAR T cells induced long-term tumour-free survival. Data are mean ± s.e.m. of n = 5 mice per group. Reproducible in three independent experiments; however, in some experiments, long-term survival was diminished owing to outgrowth of GD2(−) Nalm6 clones. d , e , IL-2 ( d ) and IFNγ( e ) production after 24 h stimulation of control or JUN HA-28z CAR T cells with immobilized 1A7 anti-CAR idiotype antibody. Each curve was fit with nonlinear dose response kinetics to determine half-maximal effective concentration (EC 50 ) values. Smaller graphs (right) highlight antibody concentrations less than 1 μg ml −1 . Data are mean ± s.d. of triplicate wells; representative of two independent experiments. f , JUN-CD22-BBz retroviral vector. g , CD22 surface expression on Nalm6 wild-type (N6-22 WT ), Nalm6-CD22-knockout (N6-22 KO ) and Nalm6-22 KO plus CD22 low (N6-22 low ) cells. h , IL-2 (left) and IFNγ (right) release after co-culture of Nalm6 and Nalm6-22 low cells with control or JUN-CD22-BBz CAR T cells. Data are mean ± s.d. of triplicate wells; representative of three independent experiments. i – l , NSG mice were inoculated with 1 × 10 6 Nalm6-22 low leukaemia cells intravenously. On day 4, 3 × 10 6 mock, control or JUN-CD22-BBz CAR + T cells were transferred intravenously. i , l , Tumour growth was monitored by bioluminescent imaging. j , Mice receiving JUN-CD22-BBz CAR T cells display increased peripheral blood T cells on day 23. k , Long-term survival of CAR-treated mice. Data in i and j are mean ± s.e.m. of n = 5 mice per group; representative of two independent experiments. Unless otherwise noted, P values determined by unpaired two-tailed t -tests. Survival curves were compared using the log-rank Mantel–Cox test. 
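The EC50 values in Fig. 5d, e come from fitting nonlinear dose-response kinetics. The paper does not specify the model or software, so this sketch assumes a four-parameter logistic (Hill) curve with invented dose and cytokine values, and reads EC50 off by log-linear interpolation rather than a full least-squares fit:

```python
import math

def four_pl(x, bottom, top, ec50, hill):
    """Four-parameter logistic (Hill) dose-response curve."""
    return bottom + (top - bottom) / (1 + (ec50 / x) ** hill)

# Synthetic antibody dose (µg/ml) vs IL-2 (pg/ml); all values invented
doses = [0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0]
responses = [four_pl(d, bottom=50, top=2050, ec50=0.3, hill=1.5) for d in doses]

def estimate_ec50(doses, responses):
    """EC50 read off as the dose where the response crosses halfway between
    the observed minimum and maximum, by log-linear interpolation (a crude
    stand-in for a full nonlinear least-squares fit)."""
    half = (min(responses) + max(responses)) / 2
    for i in range(len(doses) - 1):
        r0, r1 = responses[i], responses[i + 1]
        if r0 <= half <= r1:
            f = (half - r0) / (r1 - r0)
            return math.exp(math.log(doses[i])
                            + f * (math.log(doses[i + 1]) - math.log(doses[i])))
    return None
```

With real triplicate-well data, one would instead fit all four parameters by nonlinear regression and report the fitted EC50 with a confidence interval.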
c-Jun decreases hypofunction within solid tumours c-Jun overexpression also enhanced the functionality of CARs targeting solid tumours. JUN-Her2-BBz CAR T cells prevented 143B osteosarcoma tumour growth in vivo, markedly improved long-term survival, and greatly increased T cell expansion (Extended Data Fig. 9g–i ). Similar results were observed when comparing GD2-BBz and JUN-GD2-BBz CAR T cells against 143B (Extended Data Fig. 9j–n ). c-Jun overexpression increased the frequency of total and CAR + Her2-BBz T cells within tumours (Fig. 6a, b ), reduced expression of exhaustion markers PD-1 and CD39 (Fig. 6c ), and substantially increased cytokine production after ex vivo re-stimulation (Fig. 6d, e , Extended Data Fig. 10a–c ). Single-cell RNA-seq of purified tumour infiltrating JUN-Her2-BBz CAR T cells demonstrated increased frequency of cells within the G2/M and S phases of the cell cycle (Fig. 6f ), a more activated transcriptional program (as measured by IL2RA and CD38 ), and downregulation of numerous exhaustion-associated genes ( PDCD1 , BTLA , TIGIT , CD200 , ENTPD1 and NR4A2 ) (Fig. 6g and Extended Data Fig. 10d–g ). Finally, a small cluster of T cells characterized by high IL7R expression ( IL7R , KLF2 , CD27 , TCF7 and SELL ) was preserved in tumours treated with JUN CAR T cells but not those receiving control Her2-BBz CAR T cells (Extended Data Fig. 10g ), consistent with c-Jun-induced maintenance of a memory-like population capable of self-renewal. Fig. 6: c-Jun overexpression enhances CAR T cell efficacy and decreases hypofunction within solid tumours. NSG mice were inoculated with 1 × 10 6 143B osteosarcoma cells via intramuscular injection, and then 1 × 10 7 mock, Her2-BBz or JUN-Her2-BBz CAR T cells were given intravenously on day 14. a , Tumour growth (monitored by caliper measurements). b – g , On day 28, mice were euthanized and tumour tissue was collected and mechanically dissociated.
Single-cell suspensions were labelled for analysis by flow cytometry ( b , c ), re-stimulated with Nalm6-Her2 + target cells and analysed for intracellular cytokine production ( d ), or sorted by FACS to isolate live, human CD45 + tumour-infiltrating lymphocytes (TILs) ( e – g ). b , Left, CD8 + cells as a proportion of total live tumour cells. Right, CAR + cells as a proportion of total live CD8 + cells. c , PD-1 + CD39 + cells as a frequency of total live CD8 + cells (left) with representative contour plots (right). d , Frequency of indicated cytokine- or CD107a-producing cells after 5-h re-stimulation with Nalm6-Her2 + target cells. Gated on total, live CD8 + T cells (left) with representative contour plots (right). e , IL-2 secretion after 24-h re-stimulation of sorted CD45 + TILs with Nalm6-Her2 + target cells. Data in a – e are mean ± s.e.m. of n = 6–8 mice per group. Unless otherwise noted, P values determined by unpaired two-tailed t -tests. f , Relative frequency of sorted CD45 + TILs in each phase of the cell cycle as determined by single-cell RNA-seq. g , The log 2 -transformed fold change in JUN compared with control Her2-BBz CAR T cells for the indicated transcripts.

Discussion

Several lines of evidence implicate exhaustion in limiting the potency of CAR T cells 6 , 11 , 16 , 17 . Using a tonically signalling CAR that can induce the hallmark features of exhaustion in a controlled in vitro culture system, we identified AP-1-related bZIP–IRF families as major factors that drive exhaustion-associated gene expression. We tested the hypothesis that exhaustion-associated dysfunction results from increased levels of AP-1–IRF complexes leading to a functional deficiency in activating AP-1 Fos/Jun heterodimers.
Consistent with this model, c-Jun overexpression prevented phenotypic and functional hallmarks of exhaustion and improved anti-tumour control in five tumour models, including the clinically relevant GD2-BBz and Her2-BBz CAR T cells and CD19 CAR T cells subjected to prolonged ex vivo expansion. JUN CAR T cells also demonstrated increased potency when encountering tumour cells with low antigen density. Mechanistically, c-Jun overexpression could work by directly enhancing c-Jun-mediated transcriptional activation of genes such as IL2 , and/or indirectly by disrupting or displacing AP-1i. Substantial orthogonal data are consistent with the indirect displacement model. First, the inability of Fos overexpression to enhance function is consistent with the displacement model, as Fos has not been described to heterodimerize with BATF proteins. Second, c-Jun mutant experiments demonstrated a crucial role for dimerization but not transactivation in the biology observed. Third, we observed a reduction in total and chromatin-bound JunB, BATF and BATF3 after c-Jun overexpression, and could reproduce functional enhancement of exhausted T cells after knockout of IRF4 and JUNB . An indirect model in which c-Jun blocks access of AP-1i complexes to enhancer regions is also consistent with the previous finding that BACH2 protects from terminal effector differentiation by blocking AP-1 sites 36 as terminal effector differentiation is a hallmark of exhaustion in our model and is prevented by c-Jun overexpression. Another related hypothesis suggests that exhaustion results from partner-less NFAT in the absence of AP-1 37 and several recent publications implicated the NFAT-driven transcription factors NR4A 38 , 39 and TOX 40 , 41 , 42 in T cell exhaustion. Overexpression of NR4A1 was shown to displace chromatin-bound c-Jun 38 , suggesting that competition with NR4A family members might also contribute to the effects described here. 
Future studies are warranted to understand the functional overlap of NFAT, TOX and NR4A in c-Jun-overexpressing CAR T cells. The impressive effects of c-Jun overexpression in several preclinical tumour models raise the prospect of clinical testing of JUN CAR T cells. c-Jun is the cellular homologue of the viral oncogene v-Jun 43 , and c-Jun expression has been described in cancer 44 , 45 . However, c-Jun has not been implicated as an oncogene in mature T cells, which appear to be generally resistant to transformation, and we see no evidence for transformation in these studies. Ras -mediated transformation in rodent models requires JNP 46 ; therefore, the JUN-AA mutant, which equally rescues CAR T cell function, could be implemented to mitigate theoretical oncogenic risk. Future work is necessary to determine whether c-Jun overexpression might enhance the risk of other toxicities, including on-target and off-target effects. In summary, our findings highlight the power of a deconstructed model of human T cell exhaustion to interrogate the biology of this complex phenomenon. Using this approach, we discovered a fundamental role for the AP-1–bZIP family in human T cell exhaustion and demonstrate that overexpression of c-Jun renders CAR T cells resistant to exhaustion, enhances their ability to control tumour growth in vivo, and improves the recognition of antigen-low targets, thus addressing major barriers to progress with this class of therapeutic agents.

Methods

Viral vector construction

MSGV retroviral vectors encoding the following CARs were previously described: CD19-28z, CD19-BBz, GD2-28z, GD2-BBz, Her2-BBz and CD22-BBz. To create the HA-28z CAR, a point mutation was introduced into the 14G2a scFv of the GD2-28z CAR plasmid to create the E101K mutation. The ‘4/2NQ’ mutations 47 were introduced into the CH2CH3 domains of the IgG1 spacer region to diminish Fc receptor recognition for in vivo use of HA-28z CAR T cells.
Codon-optimized cDNAs encoding c-Jun ( JUN ), c-Fos ( FOS ) and truncated NGFR (tNGFR; NGFR ) were synthesized by IDT and cloned into lentiviral expression vectors to create JUN-P2A-FOS, as well as JUN and FOS single expression vectors co-expressing tNGFR under a separate PGK promoter. JUN-P2A was then subcloned into the XhoI site of MSGV CAR vectors using the In-Fusion HD cloning kit (Takara) upstream of the CAR leader sequence to create JUN-P2A-CAR retroviral vectors. For JUN-AA, point mutations were introduced to convert Ser63 and Ser73 to Ala. The other JUN mutants were cloned to remove portions of the protein as described in Fig. 4 . The Escherichia coli DHFR destabilization domain (DD) sequence was inserted upstream of JUN to create JUN-DD fusion constructs. In some cases, GFP cDNA was subcloned upstream of the CAR to create GFP-P2A-CAR vector controls. For bi-cistronic CAR retroviral vectors, the HA-28z or Her2-28z CAR was cloned downstream of a codon-optimized CD19-28z CAR to create CD19-28z-P2A-HA-28z and CD19-28z-P2A-Her2-28z dual CAR expression vectors.

Viral vector production

Retroviral supernatant was produced in the 293GP packaging cell line as previously described 6 . In brief, 70% confluent 293GP 20-cm plates were co-transfected with 20 μg MSGV vector plasmid and 10 μg RD114 envelope plasmid DNA using Lipofectamine 2000. Medium was replaced at 24 and 48 h after transfection. The 48-h and 72-h viral supernatants were collected, centrifuged to remove cell debris, and frozen at −80 °C for future use. Third-generation, self-inactivating lentiviral supernatant was produced in the 293T packaging cell line. In brief, 70% confluent 293T 20-cm plates were co-transfected with 18 μg pELNS vector plasmid, and 18 μg pRSV-Rev, 18 μg pMDLg/pRRE (Gag/Pol) and 7 μg pMD2.G (VSVG envelope) packaging plasmid DNA using Lipofectamine 2000. Medium was replaced at 24 h after transfection.
The 24-h and 48-h viral supernatants were collected, combined and concentrated by ultracentrifugation at 28,000 rpm for 2.5 h. Concentrated lentiviral stocks were frozen at −80 °C for future use.

T cell isolation

Healthy donor buffy coats were collected by and purchased from the Stanford Blood Center under an IRB-exempt protocol. Primary human T cells were isolated using the RosetteSep Human T cell Enrichment kit (Stem Cell Technologies) according to the manufacturer’s protocol using Lymphoprep density gradient medium and SepMate-50 tubes. Isolated T cells were cryopreserved at 2 × 10 7 T cells per vial in CryoStor CS10 cryopreservation medium (Stem Cell Technologies).

CAR T cell production

Cryopreserved T cells were thawed and activated the same day with Human T-Expander CD3/CD28 Dynabeads (Gibco) at a 3:1 bead:cell ratio in T cell medium (AIMV supplemented with 5% fetal bovine serum (FBS), 10 mM HEPES, 2 mM GlutaMAX, 100 U ml −1 penicillin and 100 μg ml −1 streptomycin (Gibco)). Recombinant human IL-2 (Peprotech) was provided at 100 U ml −1 . T cells were transduced with retroviral vector on days 2 and 3 after activation and maintained at 0.5 × 10 6 –1 × 10 6 cells per ml in T cell medium with IL-2. Unless otherwise indicated, CAR T cells were used for in vitro assays or transferred into mice on days 10–11 after activation.

Retroviral transduction

Non-tissue culture treated 12-well plates were coated overnight at 4 °C with 1 ml Retronectin (Takara) at 25 μg ml −1 in PBS. Plates were washed with PBS and blocked with 2% BSA for 15 min. Thawed retroviral supernatant was added at approximately 1 ml per well and centrifuged for 2 h at 32 °C at 3,200 rpm before the addition of cells.

CRISPR knockout

CRISPR–Cas9 gene knockout was performed by transient Cas9/gRNA ribonucleoprotein (RNP) complex electroporation using the P3 Primary Cell 4D-Nucleofector X Kit S (Lonza).
On day 4 of culture, HA-28z CAR T cells were counted, pelleted and resuspended in P3 buffer at 1.5 × 10 6 –2 × 10 6 cells per 20 μl reaction. Per reaction, 3.3 μg Alt-R S.p. Cas9 protein (IDT) and 40 pmol chemically modified synthetic sgRNA (Synthego) (2:1 molar ratio of gRNA:Cas9) were pre-complexed for 10 min at room temperature to create ribonucleoprotein (RNP) complexes. A 20-μl cell suspension was mixed with RNP and electroporated using the EO-115 protocol in 16-well cuvette strips. Cells were recovered at 37 °C for 30 min in 200 μl T cell medium then expanded as described above. Knockout efficiency was determined using TIDE and/or immunoblot. Control HA-28z CAR T cells were electroporated with a gRNA targeting the safe-harbour locus AAVS1. The following gRNA target sequences were used: AAVS1: GGGGCCACTAGGGACAGGAT; JUNB: ACTCCTGAAACCGAGCCTGG; BATF: TCACTGCTGTCGGAGCTGTG; BATF3: CGTCCTGCAGAGGAGCGTCG; IRF4: CGGAGAGTTCGGCATGAGCG.

Cell lines

The Kelly neuroblastoma, EW8 Ewing’s sarcoma, and 143B and TC32 osteosarcoma cell lines were originally obtained from ATCC. In some cases, cell lines were stably transduced with GFP and firefly luciferase (GL). The CD19 + CD22 + Nalm6-GL B-ALL cell line was provided by D. Barrett. Nalm6-GD2 was created by co-transducing Nalm6-GL with cDNAs for GD2 synthase and GD3 synthase. The Nalm6-Her2 cell line was created using lentiviral overexpression of Her2 cDNA. Single-cell clones were then chosen for high antigen expression. Nalm6-22-knockout (Nalm6-22 KO ) and Nalm6-22 KO plus CD22 low (N6-22 low ) cells have been previously described and were provided by T. Fry 13 . The Nalm6-CD19 low cell lines were created by R. Majzner (manuscript in preparation). All cell lines were cultured in complete medium (CM; RPMI supplemented with 10% FBS, 10 mM HEPES, 2 mM GlutaMAX, 100 U ml −1 penicillin, and 100 μg ml −1 streptomycin (Gibco)). STR DNA profiling of all cell lines is conducted by Genetica Cell Line Testing once per year.
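As a quick consistency check on the RNP assembly in the CRISPR knockout step above (3.3 μg Cas9 with 40 pmol sgRNA at a stated 2:1 gRNA:Cas9 molar ratio), the protein mass can be converted to moles. The Cas9 molecular weight used below (~160 kDa) is an assumed approximate value, not a figure from the paper.

```python
# Convert 3.3 ug of Cas9 protein to picomoles and compare against 40 pmol sgRNA
CAS9_MW = 160_000.0                    # g/mol; assumed approximate MW of S. pyogenes Cas9
cas9_pmol = 3.3e-6 / CAS9_MW * 1e12    # ug -> g -> mol -> pmol; ~20.6 pmol
ratio = 40.0 / cas9_pmol               # ~1.9, consistent with the stated 2:1 ratio
```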
None of the cell lines used in this study is included in the commonly misidentified cell lines registry. Before use in in vivo experiments, cell lines were tested with the MycoAlert detection kit (Lonza). All cell lines tested negative.

Flow cytometry

The anti-CD19 CAR idiotype antibody was provided by B. Jena and L. Cooper 48 . The 1A7 anti-14G2a idiotype antibody was obtained from NCI-Frederick. CD22 and Her2 CARs were detected using human CD22-Fc and Her2-Fc recombinant proteins (R&D). The idiotype antibodies and Fc-fusion proteins were conjugated in house with Dylight488 and/or 650 antibody labelling kits (Thermo Fisher). T cell surface phenotype was assessed using the following antibodies. From BioLegend: CD4-APC-Cy7 (clone OKT4), CD8-PerCp-Cy5.5 (clone SK1), TIM-3-BV510 (clone F38-2E2), CD39-FITC or APC-Cy7 (clone A1), CD95-PE (clone DX2), CD3-PacBlue (clone HIT3a). From eBioscience: PD-1-PE-Cy7 (clone eBio J105), LAG-3-PE (clone 3DS223H), CD45RO-PE-Cy7 (clone UCHL1), CD45-PerCp-Cy5.5 (clone HI30). From BD: CD45RA-FITC or BV711 (clone HI100), CCR7-BV421 (clone 150503), CD122-BV510 (clone Mik-β3), CD62L-BV605 (clone DREG-56), CD4-BUV395 (clone SK3), CD8-BUV805 (clone SK1).

Cytokine production

Approximately 1 × 10 5 CAR + T cells and 1 × 10 5 tumour cells were cultured in 200 μl CM in 96-well flat-bottom plates for 24 h. For idiotype stimulation, serial dilutions of 1A7 were crosslinked in 1× Coating Buffer (BioLegend) overnight at 4 °C on Nunc Maxisorp 96-well ELISA plates (Thermo Scientific). Wells were washed once with PBS and 1 × 10 5 CAR + T cells were plated in 200 μl CM and cultured for 24 h. Triplicate wells were plated for each condition. Culture supernatants were collected and analysed for IFNγ and IL-2 by ELISA (BioLegend).
Intracellular cytokine staining

For intracellular cytokine staining analysis, CAR + T cells and target cells were plated at a 1:1 effector:target ratio in CM containing 1× monensin (eBioscience) and 5 μl per test CD107a antibody (BV605, clone H4A3, BioLegend) for 5–6 h. After incubation, intracellular cytokine staining was performed using the FoxP3 TF Staining Buffer Set (eBioscience) according to the manufacturer’s instructions using the following antibodies from BioLegend: IL2-PE-Cy7 (clone MQ1-17H12), IFNγ-APC/Cy7 (clone 4S.B3) and TNFα-BV711 (clone Mab11).

Incucyte lysis assay

Approximately 5 × 10 4 GFP + leukaemia cells were co-cultured with CAR T cells in 200 μl CM in 96-well flat-bottom plates for up to 120 h. Triplicate wells were plated for each condition. Plates were imaged every 2–3 h using the IncuCyte ZOOM Live-Cell analysis system (Essen Bioscience). Four images per well at 10× zoom were collected at each time point. Total integrated GFP intensity per well was assessed as a quantitative measure of live, GFP + tumour cells. Values were normalized to the starting measurement and plotted over time. Effector:target ratios are indicated in the figure legends.

Immunoblotting and immunoprecipitations

Whole-cell protein lysates were obtained in non-denaturing buffer (150 mM NaCl, 50 mM Tris pH 8, 1% NP-40, 0.25% sodium deoxycholate). Protein concentrations were estimated by Bio-Rad colorimetric assay. Immunoblotting was performed by loading 20 μg of protein onto 11% PAGE gels followed by transfer to PVDF membranes. Signals were detected by enhanced chemiluminescence (Pierce) or with the Odyssey imaging system. Representative blots are shown. The following primary antibodies were purchased from Cell Signaling: c-Jun (60A8), P-c-Jun Ser73 (D47G9), JunB (C37F9), BATF (D7C5), IRF4 (4964) and Histone-3 (1B1B2). The BATF3 (AF7437) antibody was from R&D.
Immunoprecipitations were performed in 100 μg of whole-cell protein lysates in 150 μl of non-denaturing buffer with 7.5 μg of agarose-conjugated antibodies against c-Jun (G4) or JunB (C11) (Santa Cruz Biotechnology). After overnight incubation at 4 °C, beads were washed three times with non-denaturing buffer, and proteins were eluted in Laemmli sample buffer, boiled and loaded onto PAGE gels. Detection of immunoprecipitated proteins was performed with the above-mentioned reagents and antibodies.

Preparation of chromatin fractions

Separation of chromatin-bound from soluble proteins was performed as previously described 49 using cytoskeletal (CSK) buffer: 10 mM PIPES-KOH (pH 6.8), 100 mM NaCl, 300 mM sucrose, 3 mM MgCl 2 , 0.5 mM PMSF, 0.1 mM glycerophosphate, 50 mM NaF, 1 mM Na 3 VO 4 , containing 0.1% Nonidet P-40 and the protease inhibitors 2 mM PMSF, 10 μg ml −1 leupeptin, 4 μg ml −1 aprotinin, and 4 μg ml −1 pepstatin. In brief, cell pellets were lysed for 10 min on ice followed by 5,000 rpm centrifugation at 4 °C for 5 min. The soluble fraction was collected and cleared by high-speed centrifugation at 13,000 rpm for 5 min. Protein concentration was determined by Bradford assays. Pellets containing chromatin-bound proteins were washed with CSK buffer and centrifuged at 5,000 rpm at 4 °C for 5 min. Chromatin-bound proteins were solubilized in 1× Laemmli sample buffer and boiled for 5 min. Equal volumes of the chromatin and soluble fractions were loaded for each sample and analysed by immunoblotting.

ChIP and library preparation

Twenty million CAR T cells were fixed with 1% formaldehyde for 10 min at room temperature. Cross-linking was quenched using 0.125 M glycine for 10 min before cells were washed twice with PBS. Cross-linked pellets were frozen in a dry-ice ethanol bath and stored at −80 °C. Two biological replicates were collected for each cell culture.
Chromatin immunoprecipitations were performed with exogenous spike-ins (ChIP-Rx) to allow for proper normalization, as previously described 50 . In brief, pellets were thawed on ice before cell membrane lysis in 5 ml LB1 by rotating for 10 min at 4 °C. Nuclei were pelleted at 1,350 g for 5 min at 4 °C and lysed in 5 ml LB2 by rotating for 10 min at room temperature. Chromatin was pelleted at 1,350 g for 5 min at 4 °C and resuspended in 1.5 ml LB3. Sonication was performed in a Bioruptor Plus until chromatin was 200–700 bp. Debris was pelleted, supernatants were collected, and Triton X-100 was added to a 1% final concentration. Ten per cent of the sample was collected as an input control. Anti-IRF4 (Abcam Ab101168) or anti-c-Jun (Active Motif 39309) antibodies were added at 5 µg per immunoprecipitate to the sonicated lysate and rotated at 4 °C for 16–20 h. Protein G Dynabeads (100 μl per immunoprecipitate) were washed three times with Block Solution (0.5% BSA in PBS). Antibody-bound chromatin was added to beads and rotated for 2–4 h at 4 °C. Bead-bound chromatin was washed five times with 1 ml RIPA wash buffer then once with 1 ml TE buffer with 500 mM NaCl. Beads were resuspended in 210 μl elution buffer and chromatin was eluted at 65 °C for 15 min. Beads were magnetized and the supernatant was removed to a fresh tube. Immunoprecipitated and input control chromatin was reverse cross-linked at 65 °C for 12–16 h. Samples were diluted with 1 volume TE buffer. RNA was digested using 0.2 mg ml −1 RNase A (Qiagen 19101) for 2 h at 37 °C. CaCl 2 was added to 5.25 mM and samples were treated with 0.2 mg ml −1 proteinase K (Life Technologies EO0491) for 30 min at 55 °C. One volume phenol-chloroform-isoamyl alcohol was added and centrifuged at 16,500 g for 5 min to extract DNA, followed by a second extraction using one volume pure chloroform. The aqueous phase was removed and DNA was precipitated using two volumes ethanol and 0.3 M sodium acetate.
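The exogenous spike-in (ChIP-Rx) normalization cited above (ref. 50) is conventionally computed by rescaling each sample's coverage by its spike-in read count. The sketch below assumes the standard ChIP-Rx definition; it is not code from the study.

```python
def chip_rx_scale(spikein_reads):
    """ChIP-Rx scale factor: 1 / (millions of reads mapped to the spike-in
    genome). Because each sample receives the same amount of spike-in
    chromatin, this rescaling makes coverage comparable across samples
    even when global target occupancy differs between conditions."""
    return 1.0 / (spikein_reads / 1e6)

# Illustrative read counts for two hypothetical samples
scale_a = chip_rx_scale(2_000_000)  # 0.5
scale_b = chip_rx_scale(500_000)    # 2.0
```

A sample that recovered fewer spike-in reads is scaled up proportionally, so apparent differences in target signal are not artefacts of immunoprecipitation efficiency.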
DNA pellets were resuspended in EB elution buffer (Qiagen). To prepare libraries for sequencing, DNA was end-repaired using T4 polymerase (New England Biolabs M0203L), Klenow fragment (NEB M0210L), and T4 polynucleotide kinase (NEB M0201L) for 30 min at 20 °C. 3′ A-tailing was performed using Exo- Klenow fragment (NEB M0212L) for 30 min at 37 °C. Illumina TruSeq Pre-Indexed Adaptors (1 µM) or NEBNext Illumina Multiplex Oligo Adaptors (NEB E7335S) were ligated for 1 h at room temperature. Unligated adapters were separated by gel electrophoresis (2.5% agarose, 0.5× TBE) and ligated DNA was purified using a NucleoSpin Gel Clean-up Kit (Macherey-Nagel 740609.250). Ligated DNA was PCR-amplified using TruSeq Primers 1.0 and 2.0 or NEBNext Multiplex Primers and purified using AMPure XP beads (Beckman Coulter A63881). Purified libraries were quantified using an Agilent 2100 Bioanalyzer HS DNA assay and multiplexed in equimolar concentrations. Sequencing was performed using an Illumina NextSeq or HiSeq at 2 × 75 bp by the Stanford Functional Genomics Facility.

Mice

Immunocompromised NOD-SCID- Il2rg −/− (NSG) mice were purchased from JAX and bred in-house. All mice were bred, housed and treated in ethical compliance with Stanford University IACUC (APLAC) approved protocols. Six-to-eight-week-old male or female mice were inoculated with either 1 × 10 6 Nalm6-GL leukaemia cells via intravenous injection or 0.5 × 10 6 –1 × 10 6 143B osteosarcoma cells via intramuscular injection. All CAR T cells were injected intravenously. Timing and treatment dose are indicated in the figure legends. Leukaemia progression was measured by bioluminescent imaging using the IVIS imaging system. Values were analysed using Living Image software. Solid tumour progression was followed using caliper measurements of the injected leg area.
Mice were humanely euthanized when tumour size reached an IACUC-approved end-point of 1.75 cm in either direction (for solid tumours) or when mice demonstrated signs of morbidity and/or hind-limb paralysis (leukaemia). Five to ten mice per group were treated in each experiment based on previous experience in these models, and each experiment was repeated two or three times as indicated. Mice were randomized to ensure equal pre-treatment tumour burden before CAR T cell treatment. In some experiments, researchers were blinded to treatment during tumour measurement.

Blood and tissue analysis

Peripheral blood sampling was conducted via retro-orbital blood collection under isoflurane anaesthesia at the indicated time points. Fifty microlitres of blood was labelled with CD45, CD3, CD4 and CD8 antibodies, lysed using BD FACS Lysing Solution and quantified using CountBright Absolute Counting beads (Thermo Fisher) on a BD Fortessa flow cytometer. For ex vivo analysis of CAR TILs, 14 days after T cell treatment (day 28 after tumour engraftment), six mice per group were euthanized, solid tumour tissue was collected and mechanically dissociated using the gentleMACS dissociator (Miltenyi), and single-cell suspensions were either analysed by flow cytometry, re-plated with 3 × 10 5 Nalm6-Her2 + target cells for ICS analysis, or labelled for sorting. Live, CD45 + TILs were sorted from each tumour and re-stimulated at a 1:1 effector:target ratio with Nalm6-Her2 + target cells. Twenty-four-hour supernatants were analysed for IL-2 production by ELISA.

ATAC-seq

ATAC-seq library preparation was carried out as previously described 51 . In brief, 100,000 cells from each sample were sorted by FACS into CM, centrifuged at 500 g at 4 °C, then resuspended in ATAC-seq resuspension buffer (RSB) (10 mM Tris-HCl, 10 mM NaCl, 3 mM MgCl 2 ) supplemented with 0.1% NP-40, 0.1% Tween-20 and 0.01% digitonin. Samples were split into two replicates each before all subsequent steps.
Samples were incubated on ice for 3 min, then washed with 1 ml RSB supplemented with 0.1% Tween-20. Nuclei were pelleted at 500 g for 10 min at 4 °C. The nuclei pellet was resuspended in 50 μl transposition mix (25 μl 2× TD buffer, 2.5 μl transposase (Illumina), 16.5 μl PBS, 0.5 μl 1% digitonin, 0.5 μl 10% Tween-20, 5 μl H 2 O) and incubated at 37 °C for 30 min in a thermomixer with 1,000 rpm shaking. The reaction was cleaned up using the Qiagen MinElute PCR Purification Kit. Libraries were PCR-amplified using the NEBNext Hi-Fidelity PCR Master Mix and custom primers (IDT) as previously described 23 . Libraries were sufficiently amplified following 5 cycles of PCR, as indicated by qPCR fluorescence curves 23 . Libraries were purified with the Qiagen MinElute PCR Purification Kit and quantified with the KAPA Library Quantification Kit. Libraries were sequenced on the Illumina NextSeq at the Stanford Functional Genomics Facility with paired-end 75-bp reads. Adaptor sequences were trimmed using SeqPurge and reads were aligned to the hg19 genome using bowtie2. Reads were then filtered for mitochondrial reads, low mapping quality (retaining Q ≥ 20), and PCR duplicates using Picard tools. The bam files were then converted to bed format to obtain the Tn5-corrected insertion sites ('+' stranded reads shifted +4 bp, '−' stranded reads shifted −5 bp). To identify peaks, we called peaks for each sample using MACS2 (--shift -75 --extsize 150 --nomodel --call-summits --nolambda --keep-dup all -p 0.00001) on the insertion beds. To get a union peak set, we (1) extended all summits to 500 bp; (2) merged all summit bed files; and (3) used bedtools cluster and selected the summit with the highest MACS2 score. This set was then filtered by the ENCODE hg19 blacklist, and peaks extending beyond the ends of chromosomes were removed. We then annotated these peaks using HOMER and computed the occurrence of transcription factor motifs using motifmatchr in R with the chromVARMotifs HOMER set.
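Several of the coordinate-level steps above (the Tn5 +4/−5 correction, extending summits to 500 bp for the union peak set, and the hypergeometric test used downstream for motif enrichment) reduce to simple arithmetic. A minimal Python sketch with illustrative numbers (the pipeline itself used bedtools, MACS2 and R):

```python
from scipy.stats import hypergeom

def tn5_insertion_site(start, end, strand):
    """Tn5-corrected insertion site for an aligned read: '+' strand reads
    shift +4 bp from the 5' end; '-' strand reads shift -5 bp from the 3' end."""
    return start + 4 if strand == '+' else end - 5

def extend_summit(summit, width=500):
    """Extend a MACS2 summit to a fixed-width peak for the union peak set."""
    half = width // 2
    return max(0, summit - half), summit + half

def motif_enrichment_p(total_peaks, motif_peaks, subset_size, motif_in_subset):
    """Upper-tail hypergeometric P value for motif over-representation in a
    subset of peaks versus all peaks (the study ran this test in R)."""
    return hypergeom.sf(motif_in_subset - 1, total_peaks, motif_peaks, subset_size)

site_plus = tn5_insertion_site(100, 150, '+')   # 104
site_minus = tn5_insertion_site(100, 150, '-')  # 145
peak = extend_summit(1000)                      # (750, 1250)
# Illustrative counts: motif in 400/2,000 differential peaks vs 5,000/50,000 overall
p = motif_enrichment_p(50_000, 5_000, 2_000, 400)
```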
To create sequencing tracks, we read the Tn5-corrected insertion sites into R and created a coverage pileup binned every 100 bp using rtracklayer. We then counted all insertions that fell within each peak to obtain a counts matrix (peaks × samples). To determine differential peaks, we first used peaks annotated as ‘TSS’ as control genes or ‘housekeeping peaks’ for DESeq2 and then computed differential peaks with this normalization. All clustering was performed using the regularized log transform values from DESeq2. Transcription factor motif deviation analysis was carried out using chromVAR as previously described 24 . Transcription factor motif enrichments were calculated using a hypergeometric test in R testing the representation of a motif (from motifmatchr above) in a subset of peaks versus all peaks.

Subset RNA-seq

For T cell subset-specific RNA-seq, T cells were isolated from healthy donor buffy coats as described above. Before activation, naive and central memory CD4 + or CD8 + subsets were isolated using a BD FACSAria cell sorter (Stem Cell FACS Core, Stanford University School of Medicine) using the following markers: naive (CD45RA + CD45RO − , CD62L + , CCR7 + , CD95 − and CD122 − ), central memory (CD45RA − CD45RO + , CD62L + , CCR7 + ). Sorted starting populations were activated, transduced and cultured as described above. On days 7, 10 and 14 of culture, CAR + CD4 + and CD8 + cells were sorted, and RNA was isolated using the Qiagen RNeasy kit. Samples were library-prepped and sequenced on the Illumina NextSeq paired-end platform by the Stanford Functional Genomics Core.

Bulk RNA-seq

For bulk RNA isolation, healthy donor T cells were prepared as described. On day 10 or 11 of culture, total mRNA was isolated from 2 × 10 6 bulk CAR T cells using the Qiagen RNeasy Plus mini isolation kit. Bulk RNA-seq was performed by BGI America (Cambridge, MA) using the BGISEQ-500 platform, single-end 50-bp read length, at 30 × 10 6 reads per sample.
Principal component analysis was performed using the stats package, and plots were generated with the ggplot2 package, in R (version 3.5) 52 . GSEA was performed using the GSEA software (Broad Institute) as described 53 , 54 . DAVID analysis was performed for transcription factor enrichment as described 55 , 56 .

Single-cell RNA-seq

To compare gene expression in single CD19-28z and GD2-28z CAR T cells, we sorted the naive T cell subset on day 0 for subsequent single-cell analysis on day 10 using the Chromium platform (10X Genomics) and the Chromium Single Cell 3′ v2 Reagent Kit according to the manufacturer’s instructions. cDNA libraries were prepared separately for CD19-CAR and GD2-CAR cells, and the CD4 + and CD8 + cells were combined in each run to be separated bioinformatically downstream. Sequencing was performed on the Illumina NextSeq system (paired-end, 26 bp into read 1 and 98 bp into read 2) to a depth of more than 100,000 reads per cell. Single-cell RNA-seq reads were aligned to the Genome Reference Consortium Human Build 38 (GRCh38), normalized for batch effects, and filtered for cell events using the Cell Ranger software (10X Genomics). A total of 804 CD19-CAR and 726 GD2-CAR T cells were sequenced to an average of 350,587 post-normalization reads per cell. The cell–gene matrix was further processed using the Cell Ranger R Kit software (10X Genomics) as previously described 57 . In brief, we first selected genes with at least one unique molecular identifier (UMI) count in any given cell. UMI counts were then normalized to UMI sums for each cell and multiplied by the median UMI count across cells. Next, the data were transformed by taking the natural logarithm of the resulting data matrix. For the correlation network of exhaustion-related transcription factors, transcription factor genes identified as differentially expressed ( P < 0.05) by DESeq2 form the nodes of the network. Colours represent log 2 -transformed fold change (GD2 vs CD19 CAR).
Edge thickness represents the magnitude of correlation in expression between the relevant pair of genes across cells. A correlation score greater than 0.1 was used to construct networks. To compare gene expression in single JUN-overexpressing and control Her2-BBz CAR T cells in vivo, live human CD45 + tumour-infiltrating cells were sorted and pooled from six NSG mice bearing 143B osteosarcoma tumours 14 days after CAR T cell infusion. Sorted cells were analysed using the 10X Genomics platform as described above and sequenced on the Illumina HiSeq 4000 system to a depth of more than 50,000 reads per cell. A total of 6,946 Her2-BBz and 10,985 JUN-Her2-BBz cells were sequenced to an average of 49,542 post-normalization reads per cell. The cell–gene matrix was further processed using the Seurat v.3.0 software 58 , 59 . In brief, we selected genes expressed in ≥50 cells. Single live cells were selected as droplets expressing ≥500 genes with ≤20,000 UMI counts and ≤10% mitochondrial reads. The UMI count data matrix was transformed and scaled, including variable feature selection, with the SCTransform pipeline. T cells were selected as CD3 + events (99.3% of cells expressing the CD3G , CD3D , CD3E and/or CD247 genes). Where indicated, CD4 + and CD8 + T cell subsets were selected (8.3% CD4 + CD8 − , 70.3% CD4 − CD8 + ). The resulting data matrix was then examined using differential expression analysis, cell cycle analysis, clustering, and UMAP embedding.

Statistical analysis

Unless otherwise noted, statistical analyses for significant differences between groups were conducted using unpaired two-tailed t -tests without correction for multiple comparisons and without assuming consistent s.d. using GraphPad Prism 7. Survival curves were compared using the log-rank Mantel–Cox test. See Supplementary Table 2 for full statistical analyses, including exact P values, t -ratios, and degrees of freedom.
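Two preprocessing steps described in the single-cell sections above are simple matrix operations: the Cell Ranger R Kit-style per-cell UMI normalization (scale each cell to its UMI total, multiply by the median total, log-transform) and the Seurat-style droplet QC thresholds. The sketch below is a minimal reimplementation with numpy; log1p is used to tolerate zero counts (an assumption about the exact transform), and the QC cutoffs are exposed as parameters so the toy example can use relaxed values.

```python
import numpy as np

def normalize_umi(counts):
    """counts: (cells x genes) UMI matrix. Scale each cell to its total UMI
    count, rescale by the median per-cell total, then log-transform."""
    totals = counts.sum(axis=1, keepdims=True)
    return np.log1p(counts / totals * np.median(totals))

def qc_filter(counts, gene_names, min_cells=50, min_genes=500,
              max_umi=20_000, max_mito=0.10, mito_prefix='MT-'):
    """Seurat-style QC: keep genes seen in >= min_cells cells, then keep
    cells with >= min_genes genes, <= max_umi UMIs and <= max_mito
    mitochondrial fraction."""
    gene_names = np.asarray(gene_names)
    keep_genes = (counts > 0).sum(axis=0) >= min_cells
    counts, gene_names = counts[:, keep_genes], gene_names[keep_genes]
    n_genes = (counts > 0).sum(axis=1)
    n_umi = counts.sum(axis=1)
    is_mito = np.char.startswith(gene_names.astype(str), mito_prefix)
    pct_mito = counts[:, is_mito].sum(axis=1) / np.maximum(n_umi, 1)
    keep_cells = (n_genes >= min_genes) & (n_umi <= max_umi) & (pct_mito <= max_mito)
    return counts[keep_cells], gene_names

# Toy data: 3 droplets x 3 genes, with deliberately relaxed thresholds
umi = np.array([[10.0, 1.0, 1.0],    # mostly mitochondrial -> dropped
                [0.0, 5.0, 5.0],     # passes QC
                [0.0, 200.0, 0.0]])  # too many UMIs -> dropped
filtered, kept_genes = qc_filter(umi, ['MT-ND1', 'CD8A', 'IL7R'],
                                 min_cells=1, min_genes=1,
                                 max_umi=100, max_mito=0.5)
norm = normalize_umi(umi)  # undoing the log, each cell sums to the median total
```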
Reporting summary

Further information on research design is available in the Nature Research Reporting Summary linked to this paper.

Data availability

The sequencing datasets generated in this publication have been deposited in the NCBI Gene Expression Omnibus (GEO) 60 , 61 and are accessible through GEO series accession numbers: bulk RNA-seq: GSE136891 ; scRNA-seq CD19/GD2-28z: GSE136874 ; scRNA-seq control/JUN-Her2-BBz TILs: GSE136805 ; ATAC-seq: GSE136796 ; ChIP–seq: GSE136853 .
A new approach to programming cancer-fighting immune cells called CAR-T cells can prolong their activity and increase their effectiveness against human cancer cells grown in the laboratory and in mice, according to a study by researchers at the Stanford University School of Medicine. The ability to circumvent the exhaustion that the genetically engineered cells often experience after their initial burst of activity could lead to the development of a new generation of CAR-T cells that may be effective even against solid cancers—a goal that has until now eluded researchers. The studies were conducted in mice harboring human leukemia and bone cancer cells. The researchers hope to begin clinical trials in people with leukemia within the next 18 months and to eventually extend the trials to include solid cancers. "We know that T cells are powerful enough to eradicate cancer," said Crystal Mackall, MD, professor of pediatrics and of medicine at Stanford and the associate director of the Stanford Cancer Institute. "But these same T cells have evolved to have natural brakes that tamp down the potency of their response after a period of prolonged activity. We've developed a way to mitigate this exhaustion response and improve the activity of CAR-T cells against blood and solid cancers." Mackall, who is also the director of the Stanford Center for Cancer Cell Therapy and of the Stanford research center of the Parker Institute for Cancer Immunotherapy, treats children with blood cancers at the Bass Center for Childhood Cancer and Blood Diseases at Stanford Children's Health. Mackall is the senior author of the study, which will be published Dec. 4 in Nature. Former postdoctoral scholar Rachel Lynn, Ph.D., is the lead author.

Genetically modified cells of patient

"CAR-T" is an abbreviation for chimeric antigen receptor T cells.
Genetically modified from a patient's own T cells, CAR-T cells are designed to track down and kill cancer cells by recognizing specific proteins on the cells' surface. CAR-T cell therapy made headlines around the world in 2017 when the Food and Drug Administration fast-tracked their approval for the treatment of children with relapsed or unresponsive acute lymphoblastic leukemia. Later that year, a version of CAR-T treatment was also approved for adults with some types of lymphoma. But although blood cancers often respond impressively to CAR-T treatment, fewer than half of treated patients experience long-term control of their disease, often because the CAR-T cells become exhausted, losing their ability to proliferate robustly and to actively attack cancer cells. Overcoming this exhaustion has been a key goal of cancer researchers for several years. Lynn and Mackall turned to a technique co-developed in the laboratory of Howard Chang, MD, Ph.D., the Virginia and D.K. Ludwig Professor of Cancer Genomics and professor of genetics at Stanford, to understand more about what happens when T cells become exhausted and whether it might be possible to inhibit this exhaustion. The technique, called ATAC-Seq, pinpoints areas of the genome where regulatory circuits overexpress or underexpress genes. "When we used this technique to compare the genomes of healthy and exhausted T cells," Mackall said, "we identified some significant differences in gene expression patterns." In particular, the researchers discovered that exhausted T cells demonstrate an imbalance in the activity of a major class of genes that regulate protein levels in the cells, leading to an increase in proteins that inhibit their activity. 
When the researchers modified CAR-T cells to restore the balance by overexpressing c-Jun, a gene that increases the expression of proteins associated with T cell activation, they saw that the cells remained active and proliferated in the laboratory even under conditions that would normally result in their exhaustion. Mice injected with human leukemia cells lived longer when treated with the modified CAR-T cells than with the regular CAR-T cells. In addition, the c-Jun expressing CAR-T cells were also able to reduce the tumor burden and extend the lifespan of laboratory mice with a human bone cancer called osteosarcoma. "Those of us in the CAR-T cell field have wondered for some time if these cells could also be used to combat solid tumors," Mackall said. "Now we've developed an approach that renders the cells exhaustion resistant and improves their activity against solid tumors in mice. Although more work needs to be done to test this in humans, we're hopeful that our findings will lead to the next generation of CAR-T cells and make a significant difference for people with many types of cancers."
10.1038/s41586-019-1805-z
Computer
Electromechanical resonators operating at sub-terahertz frequencies
Jiacheng Xie et al, Sub-terahertz electromechanics, Nature Electronics (2023). DOI: 10.1038/s41928-023-00942-y. Journal information: Nature Electronics
https://dx.doi.org/10.1038/s41928-023-00942-y
https://techxplore.com/news/2023-04-electromechanical-resonators-sub-terahertz-frequencies.html
Abstract Electromechanical resonators operating in the sub-terahertz regime could be of use in the development of future communication systems because they support extremely fast data rates. Such resonators are also of interest in studying quantum phenomena of mechanical entities, as they can maintain the quantum ground state at kelvin temperatures rather than the millikelvin temperatures demanded by gigahertz resonators. Here we report microelectromechanical resonators operating beyond 100 GHz. By incorporating a millimetre-wave dual-rail resonator into a thickness-shear-mode micromechanical system, we achieve efficient electromechanical transduction through enhanced on-chip impedance matching, which is key to revealing the infinitesimal displacements of these sub-terahertz mechanical modes. Our devices are based on commercially available z -cut lithium niobate thin films and patterned using standard semiconductor fabrication processes. Main Modern communication systems rely on high-quality electromechanical resonators as millimetre-wave front ends. The operating frequency directly determines the communication speed, and thus, it is appealing to use resonators of higher frequencies—such as those operating in the sub-terahertz (THz) regime—in communication devices. Micromechanical resonators in the sub-THz regime can also be used to study the quantum motion of micromechanical structures 1 , 2 , quantum entanglement of massive objects 3 , 4 and long-distance hybrid quantum networks 5 . Today, mechanical resonators in the microwave gigahertz (GHz) frequency regime are routinely refrigerated to their mechanical ground state in millikelvin environments 1 , 6 , 7 , 8 , 9 , 10 . According to the Bose–Einstein distribution, a higher-frequency millimetre-wave resonator in the sub-THz regime can maintain the quantum ground state with a temperature on the order of kelvins, making them appealing elements for accessing quantum mechanical motion. 
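The ground-state argument above follows directly from the Bose–Einstein distribution, which gives the mean thermal phonon number as n̄ = 1/(exp(hf/k_BT) − 1). A short back-of-the-envelope check (the temperatures below are illustrative values, not figures from the paper) shows why a 100 GHz resonator approaches its quantum ground state around 1 K while a microwave-GHz resonator needs millikelvin refrigeration:

```python
import math

H = 6.62607015e-34   # Planck constant (J s)
KB = 1.380649e-23    # Boltzmann constant (J/K)

def thermal_occupancy(freq_hz, temp_k):
    """Mean thermal phonon number from the Bose-Einstein distribution:
    n = 1 / (exp(h*f / (kB*T)) - 1)."""
    return 1.0 / math.expm1(H * freq_hz / (KB * temp_k))

# Compare a microwave GHz resonator with a sub-THz one.
for f in (5e9, 100e9):
    for t in (0.01, 1.0, 4.0):
        print(f"f = {f/1e9:5.0f} GHz, T = {t:5.2f} K: n = {thermal_occupancy(f, t):.2e}")
```

At 1 K the 100 GHz mode holds fewer than 0.01 thermal phonons on average, whereas a 5 GHz mode still holds several; pushing the 5 GHz mode below 0.01 phonons requires tens of millikelvin, consistent with the millikelvin environments cited above.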
Advancing a mechanical resonator into the sub-THz regime is, however, a non-trivial task. One approach to address high-frequency mechanical motion is to use ultrafast laser pulses through an optical pump–probe scheme 11 , 12 , 13 . This, however, limits the scaling of systems for device applications. Electromechanical actuation and readout, on the other hand, can provide integrated device architectures. However, typical acoustic velocities within solids are on the order of thousands of metres per second, and thus, the acoustic wavelength for sub-THz phonons is only tens of nanometres, which creates challenges in effective transduction and nanofabrication. The recently developed thin-film lithium niobate (LN) platform is a favourable candidate to address the actuation challenge owing to its excellent piezoelectric properties and low phonon loss 14 . To circumvent the stringent requirements on the fabrication resolution in the fundamental-tone actuation scheme, efforts have been made to develop electromechanical overtones up to 60 GHz (ref. 15 ). Yet, the inverse-square decrease of the electromechanical coupling with respect to the mode order limits further advances of electromechanical overtones into the sub-THz regime. In this Article, we show that a millimetre-wave dual-rail resonator (DRR) directly on a suspended LN resonator can be used for the efficient actuation and detection of sub-THz thickness-shear-mode (T-mode) overtones. Serving as a tank circuit 16 , the DRR aids electromechanical transduction by providing on-chip impedance matching to the mechanical modes. The quality of electromechanical transduction can be characterized as the change in millimetre-wave reflection when the mechanical mode is on and off resonance since an optimum electromechanical transducer design would have a small signal reflection when in operation. 
Together with a well-calibrated reflection measurement setup (Supplementary Section II ) that mitigates perturbation to fragile sub-THz signals, we report electromechanical oscillations beyond 100 GHz. Based on the DRR enhancement shown in the experimental results and the achieved high signal fidelity, we project that such a DRR-enhanced electromechanical transduction scheme could be used to scale electromechanical frequencies further beyond the microwave W band. Dual-rail-coupled thickness-mode resonator The false-colour scanning electron microscopy image of the complete resonator suite—namely, the dual-rail-coupled thickness-mode resonator (DRCTR)—is shown in Fig. 1a . To mitigate anchor loss, we suspend the mechanical resonator by chemically removing the silicon dioxide (SiO 2 ) beneath the LN film. The DRR is formed on top of the thickness-mode resonator and comprises two coupled transmission lines GS 1 G and GS 2 G, short-circuited to the ground at the end of transmission line GS 2 G and probed at the start of transmission line GS 1 G. The DRR features a more uniformly distributed electric-field distribution between the lines than a typical quarter-wavelength resonator, making it advantageous to provide efficient piezoelectric coupling to the distributed thickness-mode resonator (Supplementary Section IIIC ). Such a distributive characteristic of the thickness-mode resonator originates from our device dimension being comparable with the sub-THz signal wavelength. Therefore, different from the lumped element model 17 where the mechanical resonator is modelled as lumped RLC components, our DRCTR model (Fig. 
1b ) treats the mechanical resonator as distributed conductance between the coupled lines, defined as \(G_{\mathrm{m}} = 1/\left( {R_{\mathrm{m}} + \frac{1}{{\mathrm{i}}\omega C_{\mathrm{m}}} + {\mathrm{i}}\omega L_{\mathrm{m}}} \right)\) , where R m , C m and L m represent the distributed motional resistance, capacitance and inductance, respectively, and ω represents the signal angular frequency. Mechanical resonant frequency f and quality factor Q can be expressed as \(f = 1/\left( {2\uppi \sqrt {L_{\mathrm{m}}C_{\mathrm{m}}} } \right)\) and \(Q = 1/\left( {2\uppi fR_{\mathrm{m}}C_{\mathrm{m}}} \right)\) , respectively. We detect the embedded thickness-mode resonator by measuring the reflection ( Γ ) spectrum of the DRCTR with a frequency multiplier/directional coupler/downconversion approach that reduces the input/output interference and improves the sub-THz signal fidelity (Methods). The input impedance Z in can be derived from the measured reflection, following Γ = ( Z in − Z 0 )/( Z in + Z 0 ), where Z 0 = 50 Ω is the impedance of a standard transmission line. By viewing the dual rails as a four-port network, we illustrate (Fig. 1b ) the boundary conditions of the DRCTR, where port 2 and port 4 are open-circuited and port 3 is short-circuited. In the theoretical derivation of the input impedance Z in seen at port 1 (Supplementary Section IIIB ), we treat the propagating signals on the dual rails as a superposition of the differential mode (DR +− ) and common mode (DR ++ ), each with the corresponding transmission-line parameters. In Fig. 1c(i)(ii) , we illustrate their differences in terms of distributed capacitance and conductance. The DR +− mode has opposite voltages across the two signal lines S 1 and S 2 . The thickness-mode resonator can be efficiently read out by contributing distributed motional conductance G m in parallel to mutual capacitance C M . The electromechanical coupling coefficient is defined as the ratio of the motional capacitance to the mutual capacitance as K 2 = C m / C M .
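The relations above can be sketched numerically. The snippet below is a lumped-element simplification, not the paper's distributed DRCTR model, and the component values are illustrative numbers chosen only so the series resonance lands near 100 GHz:

```python
import math

Z0 = 50.0  # standard transmission-line impedance (ohm)

def motional_admittance(omega, r_m, l_m, c_m):
    """Series-RLC motional branch: G_m = 1/(R_m + 1/(i w C_m) + i w L_m)."""
    return 1.0 / (r_m + 1.0 / (1j * omega * c_m) + 1j * omega * l_m)

def resonant_frequency(l_m, c_m):
    """f = 1 / (2 pi sqrt(L_m C_m))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(l_m * c_m))

def quality_factor(f, r_m, c_m):
    """Q = 1 / (2 pi f R_m C_m)."""
    return 1.0 / (2.0 * math.pi * f * r_m * c_m)

def reflection(z_in):
    """Gamma = (Z_in - Z0) / (Z_in + Z0)."""
    return (z_in - Z0) / (z_in + Z0)

# Illustrative motional parameters (ohm, henry, farad).
r_m, l_m, c_m = 20.0, 1.0e-9, 2.533e-15
f0 = resonant_frequency(l_m, c_m)
z_on = 1.0 / motional_admittance(2.0 * math.pi * f0, r_m, l_m, c_m)  # reactances cancel
print(f"f0 = {f0/1e9:.1f} GHz, Q = {quality_factor(f0, r_m, c_m):.0f}, "
      f"|Gamma| on resonance = {abs(reflection(z_on)):.2f}")
```

On resonance the motional reactances cancel and the input impedance collapses to R_m, producing the reflection dip the measurement looks for; off resonance the impedance diverges and |Γ| tends to 1. In the actual device, the DRR's job is to transform the mechanical impedance toward Z_0 so that this dip remains visible at sub-THz frequencies.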
For the DR ++ mode, the voltages on both signal lines S 1 and S 2 are identical; therefore, the thickness-mode resonator does not contribute distributed conductance between the signal lines. Generally speaking, the DRR resonant condition Im( Z in ) = 0 requires contributions from both DR +− and DR ++ modes. Note that in our design, the DR ++ mode has a much higher characteristic impedance ( Z c = 327 Ω for the device shown in Fig. 2 ) than that of the DR +− mode ( Z d = 42 Ω) (Supplementary Section IIIC ). As additional advantages of this configuration, the DR +− mode supported by the dual rails provides a more uniform and larger differential voltage distribution between the signal lines than singly ended resonator design such as quarter-wavelength resonators, making it ideal to couple to the distributed thickness-mode resonator. Fig. 1: DRCTR. a , False-colour scanning electron microscopy image of the device. The mechanical resonator is embedded between two coupled transmission lines GS 1 G and GS 2 G to form a DRCTR. b , Distributed DRCTR model. The thickness-mode resonator is modelled as distributed conductance G m in parallel with the distributed mutual capacitance C M . c , Differential mode (DR +− ) has opposite voltages on line S 1 and line S 2 (i). Thus, an electric wall is formed between the lines represented by the blue dashed line. Common mode (DR ++ ) has identical voltages on line S 1 and line S 2 (ii). A magnetic wall is formed between the lines represented by the green dashed line. d , Zoomed-in view of the suspended structure (not to scale). The red and black curves on the electrodes represent the voltages V s1 and V s2 on lines S 1 and S 2 , respectively, when the DRR is on resonance. The red curve on the cross-section illustrates the displacement of the mechanical modes, with the cyan arrows marking the displacement direction. e , Cross-section of the mechanical resonator (not to scale). 
f , Simulation of the horizontal electric field with a voltage applied between the electrodes (film thickness not to scale). g , Simulation of the mechanical displacement fields for T1 (i), T5 (ii) and T21 (iii) modes, with the coloured surfaces representing the displacement amplitude and the cyan arrows marking the displacement direction. Figure 1d highlights the suspended device structure as a zoomed-in view of the image in Fig. 1a . The fact that the gap between the electrodes is much larger than the film thickness enables a relatively homogeneous in-plane electric-field distribution (Fig. 1f ). Through the large e 51 value of the LN piezoelectric coupling tensor, originating from the LN being a member of the trigonal 3 m point group, such a horizontal electric field can efficiently excite the T modes of the suspended structure. To further elucidate the mechanical overtones utilized in our work, we plotted the finite-element-method-simulated displacement fields of the mechanical modes for mode order n = 1, 5 and 21 (Fig. 1g(i)–(iii) ), with the colour surface plots representing the displacement amplitude and the cyan arrows marking the displacement direction. For the n th-order mechanical overtone with an acoustic wavelength of λ n in the thickness direction, its resonant condition is met when the film thickness h satisfies nλ n = 2 h , or in terms of frequency f , nv = 2 hf , where v is the acoustic velocity. Due to the reduced or even vanishing effective overlap between the applied electric field and mechanical piezoelectric field of high-order overtones, only odd-order overtones can be excited and their electromechanical coupling coefficient decreases quadratically with respect to the mode order ( K 2 ∝ 1/ n 2 ).
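The resonant condition nv = 2hf and the K² ∝ 1/n² scaling can be tabulated directly. The sketch below assumes a shear acoustic velocity of 3,650 m/s for thin-film LN (an assumed round number for illustration, not a value quoted in the paper):

```python
def overtone_frequency(n, thickness_m, velocity=3650.0):
    """n-th thickness overtone from the resonant condition n*v = 2*h*f.
    Only odd orders couple piezoelectrically."""
    if n % 2 == 0:
        raise ValueError("even-order overtones are not excited")
    return n * velocity / (2.0 * thickness_m)

def coupling_coefficient(n, k2_fundamental):
    """Electromechanical coupling falls off as K^2 ~ 1/n^2 with mode order."""
    return k2_fundamental / n**2

# First odd overtone to cross 100 GHz for two film thicknesses.
for h in (365e-9, 270e-9):
    n = 1
    while overtone_frequency(n, h) < 100e9:
        n += 2
    print(f"h = {h*1e9:.0f} nm: T{n} at {overtone_frequency(n, h)/1e9:.1f} GHz")
```

With these assumptions the 365 nm film first crosses 100 GHz at the 21st overtone and the 270 nm film at the 15th, matching the mode orders reported for the two devices in this work; the thinner film buys its lower mode order (and hence a higher K²) at the cost of a more fragile suspended structure.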
Resonator performance characterization We emphasize large electromechanical coupling and significant DRR enhancement in our DRCTR design, as both are important to overcome the challenge of diminishing transduction efficiency at sub-THz frequencies. The flexibility of tuning the DRR frequency via device length aids us in augmenting the mechanical features in the desired bands. For W-band thickness-mode readout (75–110 GHz), we choose a device length of 155 μm with its reflection spectrum shown in Fig. 2a (Fig. 2b shows the optical image). In the full W-band span, we resolve three prominent thickness modes, namely, T17, T19 and T21, at 84, 94 and 104 GHz, respectively. To compare with devices without DRR enhancement in the W band, we additionally plot (Fig. 2c ) the Smith chart representations of the reflection spectra for device lengths of 80 and 245 μm. The minor and even invisible resonant circles of these devices showcase the significance of DRR enhancement in the sub-THz mechanical-mode actuation and detection. Fig. 2: Sub-THz electromechanics. a , Amplitude of the reflection spectrum for a device with a length of 155 μm. b , Phase plot of the reflection spectrum. The inset shows the optical image of the measured device. Device parameters, s = 2 μm; g = 2 μm; h e = 200 nm (Methods provides the definitions). c , Smith chart representations of the reflection spectra for devices with lengths of 80, 155 and 245 μm. A systematic demonstration of the DRR behaviour is shown in Fig. 3a , where we combine the reflection spectra obtained from three separate measurement setups covering the microwave X/Ka, V and W bands. The broadband reflection spectrum spans from 10 to 110 GHz. Our calibration procedure (Supplementary Section II ) ensures the accurate stitching of the measured spectrum from each band, except for a spectral gap between 43 and 50 GHz.
As the DRR length increases from 110 to 245 μm, the dual-rail resonance shifts to lower frequencies, as expected. As a tank circuit, the DRR effectively mediates impedance mismatch to the thickness-mode resonators and facilitates electromechanical transduction, as seen by the higher extinction of the mechanical modes in the reflection spectra. With this broadband measurement, the aforementioned linear relationship between the mechanical resonant frequency f and mode order n is experimentally demonstrated (Fig. 3b ). To assess mechanical damping, we resort to the commonly used frequency–quality factor ( fQ ) product as a figure of merit for our mechanical system. Benefiting from the non-degrading mechanical Q (Fig. 3c ) as the mechanical frequency increases, we achieve the highest fQ product of around 2.5 × 10 13 at the 21st mode (Fig. 3d ), setting a record for the thin-film LN platform 15 . Yet, the increasing trend of the fQ product suggests opportunities for further scaling up device frequencies. Consistent with the theoretically predicted inverse-square dependence of the electromechanical coupling coefficient on the mode order in unloaded resonators, we extract an approximate 1/ n 2.2 relationship from the measured reflection spectra (Fig. 3e ). Fig. 3: DRR-enhanced electromechanics and figure-of-merit extraction. a , Reflection spectra for devices with lengths of 110, 155 and 245 μm. The adjacent spectrum is shifted by 10 dB for visual purposes. Device parameters, s = 2 μm; g = 2 μm; h e = 200 nm. b , Mechanical resonant frequency and mode order relationship. c , Q factor and mode order relationship. d , fQ product and mode order relationship. e , Electromechanical coupling coefficient ( K 2 ) and mode order relationship. 
Dependence on film thickness and device orientation Another way that aids efficient electromechanical transduction in the sub-THz regime is to utilize a lower-order overtone to preserve a high electromechanical coupling coefficient, as described by the aforementioned inverse-square relationship between K 2 and n . Given the resonant condition of the thickness modes, a thinner film yields a larger mechanical free spectral range, thus resulting in a faster scaling up to the sub-THz regime with a lower mode order. Figure 4a(i) shows the reflection spectra for a 365-nm-thick device and 270-nm-thick device. The 365-nm-thick device scales up to the sub-THz regime with a mode order of 21 ( K 2 = 4.3 × 10 −4 ), whereas the 270-nm-thick device achieves that with a mode order of only 15 ( K 2 = 1.1 × 10 −3 ). However, the faster scaling up in frequency resorting to thinner films comes at the cost of compromising the rigidity of the suspended structure. Such a trade-off should be well considered in designing sub-THz electromechanical transducers. In addition, the introduction of DRR could introduce an additional phase response, as shown by the admittance ( Y ) spectra (Fig. 4a(i) ), which must be compensated in practical filter designs. Fig. 4: Thickness and orientation variations. a , Measured Γ and Y spectra for devices with thicknesses of 365 and 270 nm (i). Device parameters, s 1 = 2 μm; s 2 = 1 μm; g = 2 μm; h e = 300 nm (Methods provides the definitions). Note that in the Y spectra, the feed-through shunt components are deembedded. Schematic for devices with thicknesses of 365 nm (ii) and 270 nm (iii). Here, the DRR design is generalized to have two mirror-symmetric S 2 lines. b , Smith chart representations of the measured reflection spectra for devices of different orientations (i). Device parameters, s 1 = 4 μm; s 2 = 2 μm; g = 6 μm; h e = 200 nm. An optical image of a typical device array (ii).
Furthermore, from the device application perspective, the strong anisotropy of LN may impact the resonator frequency reproducibility due to fabrication variations in the wafer preparation and alignment processes. Thankfully, for z -cut LN devices operating with thickness-shear modes, their performances do not show orientation dependence due to the rotational symmetry of the involved piezoelectric properties. We illustrate this result with the intuitive Smith chart representations (Fig. 4b(i) ), where we measured electromechanical devices of different in-plane orientations. The consistency of the device performances, reflected by the similarities of the resonant circles shown in the Smith charts, marks the robustness for electromechanical applications that utilize thickness-shear-mode resonators on a z -cut LN film. The slight differences in the reflection spectra of different devices are probably due to systematic fabrication error. Conclusions Using a DRR-coupled thickness-mode electromechanical system, we have reported efficient electromechanical transduction into the sub-THz regime. Such sub-THz phonon control capabilities provide new opportunities for mechanical resonators to be used in the radio-frequency front end of future broadband wireless communication systems. They could also facilitate the development of quantum phononics studies, where the DRR is a high- Q superconducting resonator that can interface with high-frequency circuit quantum electrodynamics systems 18 , 19 . Methods Nanofabrication We start with a 600 nm LN-on-insulator film 20 . The Si substrate has a high resistivity of over 10,000 Ω m. First, we define the etching mask with hydrogen silsesquioxane resist patterned to cover the film except for the release window. Then, the film is argon milled until the LN thickness in the release window is below 200 nm.
Afterwards, the etching mask (hydrogen silsesquioxane) is removed, and the film is etched for a second time to reach the target thickness of the mechanical resonator. Next, the gold electrodes are deposited through lift-off using a polymethyl methacrylate resist. Finally, the mechanical resonators are released by soaking the chip in buffered oxide etchant and removing the SiO 2 beneath the LN. Device parameters: for the DRR shown in Fig. 2 , we define s as the width for both signal lines S 1 and S 2 , g as the gap between the signal lines and h e as the gold thickness. For the more generalized DRR design shown in Fig. 4 , we define s 1 as the width of signal line S 1 , s 2 as the width of signal line S 2 , g as the gap between signal lines S 1 and S 2 , and h e as the gold thickness. Reflection measurement We measure the room-temperature reflection spectra of the device under test between 10 and 110 GHz, except for a small spectral gap between 43 and 50 GHz, which we currently do not have an instrument to cover. The 10–43 GHz spectra were taken with a commercial vector network analyser (ShockLine MS46122B). The 50–75 GHz and 75–110 GHz spectra were taken with a V-band and W-band reflection measurement setup, respectively. For the V-band and W-band setups, we utilize frequency multipliers, frequency downconverters and directional couplers to perform reflection measurements (Supplementary Section II ). We calibrate such network analysers using the off-wafer short–open–load calibration technique with an impedance standard substrate. We add a shunt capacitor and shunt conductor in our DRCTR model to account for the difference in the dielectric constant (both real and imaginary parts) between the thin-film LN platform and impedance standard substrate 21 . Data availability The data that support the findings of this study are included in this Article. Source data are provided with this paper.
To further advance communication systems, increasing both their speed and efficiency, electronics engineers will need to create new and highly performing components, including electromechanical resonators. Electromechanical resonators are essential components of communications systems that can be used to generate powerful waves of specific frequencies or selectively broadcast communication signals at specific frequencies. To speed up communications further and pave the way for the next generation of wireless networks (6G), new resonators should ideally operate at sub-terahertz frequencies (i.e., at frequencies above 100 GHz). In a recent paper published in Nature Electronics, a team of researchers led by Prof. Hong Tang at Yale University introduced new electromechanical resonators that could operate at these high frequencies. "Our research emphasizes increasing the operating frequencies of electromechanical resonators to exceed 100 GHz," Jiacheng Xie, the lead author who carried out the study, told Tech Xplore. "The foundation of modern communication systems relies on the continuous progress in resonator technologies, as higher-frequency oscillators lead to faster communication speeds. With the ongoing worldwide implementation of 5G communication technologies, there is a growing demand for higher-frequency resonators to support emerging technological advancements." The microelectromechanical resonators created by Xie and his colleagues comprise a millimeter-wave dual rail resonator placed on top of a suspended lithium niobate beam. To suspend this beam inside their device, the researchers chemically etched away the silicon dioxide underneath it, which also minimized the loss of acoustic waves into the surrounding space.
"To effectively stimulate and measure the sub-terahertz mechanical resonances, we employ a millimeter-wave dual rail resonator that aids the electromechanical transduction by providing improved on-chip impedance matching to the mechanical modes," Xie explained. "An intuitive analogy can be drawn to the way a violin generates powerful sounds audible to listeners in large concert halls without the need for an amplifier. Although the strings determine the instrument's pitch, the violin's body functions as a broadband resonator that projects the sound, similar to how a dual-rail resonator broadcasts the sub-terahertz resonances for detection." Notably, the team's resonator was created using commercially available thin films of lithium niobate, which were patterned using technologies that are widely employed for the fabrication of semiconductors. This could greatly facilitate its large-scale fabrication and implementation in the future. Xie and his colleagues were the first to create electromechanical resonators that operate at frequencies beyond 100 GHz. Their work could thus have important implications for the development of 6G communication systems. "This breakthrough has the potential to contribute to the evolution of future communication systems, as the Federal Communications Commission (FCC) has created experimental licenses for the use of frequencies between 95 GHz and 3 THz," Xie said. "Furthermore, from a quantum science and technology perspective, it is beneficial to bring mechanical quantum systems out of million-dollar dilution refrigerators. Sub-THz resonators, with ultrahigh resonant frequencies, are significantly more resilient to thermal fluctuations than GHz resonators, and therefore can reach the quantum ground state at much more accessible Kelvin temperatures." The recent work by Xie and his colleagues could soon inform the development of other electromechanical resonators that operate at sub-terahertz frequencies. 
Meanwhile, the researchers plan to advance their devices further, while also trying to create other highly performing components for future communication systems. "We will now continue our efforts to develop electromechanical resonators with even higher frequencies," Tang added. "Additionally, our focus will be on creating applications leveraging our existing technologies."
10.1038/s41928-023-00942-y
Biology
New findings on basking sharks blow assumptions out of the water
E. M. Johnston et al, Cool runnings: behavioural plasticity and the realised thermal niche of basking sharks, Environmental Biology of Fishes (2022). DOI: 10.1007/s10641-021-01202-8
http://dx.doi.org/10.1007/s10641-021-01202-8
https://phys.org/news/2022-02-basking-sharks-assumptions.html
Abstract Long-distance migrations by marine vertebrates are often triggered by pronounced environmental cues. For the endangered basking shark ( Cetorhinus maximus ), seasonal changes in water temperature are frequently proposed as a cue for aggregation within (and dispersal from) coastal hotspots. The inference is that such movements reflect year-round occupancy within a given thermal ‘envelope’. However, the marked variance in timing, direction and depth of dispersal movements hints at a more nuanced explanation for basking sharks. Here, using data from pop-off archival transmitters deployed on individuals in Irish waters, we explored whether autumnal decreases in water temperature triggered departure from coastal habitats and how depth and location shaped the sharks’ realised thermal environment over time. Temperature was not an apparent driver of dispersal from coastal seas, and variance in daily temperature ranges reflected occupancy of different habitats; coastal mixed/stratified and offshore subtropical/tropical waters. Furthermore, individuals that moved offshore and into more southern latitudes off Africa exhibited a distinct daily cycle of deep dives (00:00–12:00, 200 m–700 m; 12:00–00:00, 0–300 m), experiencing a more extreme range of temperatures (6.8–27.4 °C), including cooler minimum temperatures, than those remaining in European coastal habitat (9.2–17.6 °C). Collectively, these findings challenge the supposition that temperature serves as a universal driver of seasonal dispersal from coastal seas and prompt further studies of deep-water forays in offshore areas. Introduction Animal migrations are defined by their reciprocity and predictability, differentiating them from other movement strategies such as nomadism, invasions and dispersal (e.g. Newson et al. 2009 ; Teitelbaum and Mueller 2019 ).
For marine vertebrates, the most frequently documented migrations are seasonal (Luschi 2013 ), with animals moving in response to shifts in prey availability (Corkeron and Connor 1999 ) and/or physiological constraints aligned with changes in their ambient environment such as water temperature (Schlaff et al. 2014 ). In response to rapidly warming seas, effective plans for conservation and management must recognise that migration patterns may not be static because migration routes can shift and the timing of migratory movements can be dependent on fluid environmental conditions as they are experienced (Hays et al. 2019 ; Senner et al. 2020 ). Such management considerations are magnified for long-distance migrants that traverse territorial boundaries adding a layer of political complexity (e.g. Newson et al. 2009 ; Mackelworth et al. 2019 ; Mason et al. 2020 ). Seasonal shifts in environmental conditions have been proposed as dispersal cues for basking sharks ( Cetorhinus maximus ) that aggregate in coastal hotspots at temperate latitudes during summer months, moving typically into offshore areas with the onset of autumn (Sims et al. 2003 ; Gore et al. 2008 ; Skomal et al. 2009 ; Doherty et al. 2017a , 2017b , 2019 ; Braun et al. 2018 ; Dolton et al. 2019 ). Oscillations in plankton abundance provide an intuitive explanation for this pattern (Sims and Reid 2002 ), but prey densities appear adequate for year-round foraging (Sims 1999 ) and residence in coastal seas (Doherty et al. 2017a , b ). Likewise, water temperature has been proposed frequently as a cue for both seasonal aggregation and subsequent dispersal of basking sharks (NW Atlantic: Skomal et al. 2009 ; Siders et al. 2013 ; Braun et al. 2018 —NE Atlantic: Berrow and Heardman 1994 ; Cotton et al. 2005 ; Priede and Miller 2009 ; Witt et al. 2012 ; Miller et al. 2015 ; Austin et al. 2019 ; Doherty et al. 2019 ). 
The broad inference is that latitudinal movements, triggered by thermal cues, may allow individuals to remain within a given ‘envelope’ of temperatures (i.e. an optimal thermal range) throughout the year irrespective of season. Certainly, movements of basking sharks in the Northwest Atlantic appear to mirror seasonal changes in water temperature, with individuals ranging from Cape Cod, USA, to tropical waters (Skomal et al. 2009 ; Hoogenboom et al. 2015 ; Braun et al. 2018 ). Despite evidence of water temperature as a driver of basking shark movements, winter dispersal from coastal seas is not ubiquitous (Gore et al. 2008 ; Doherty et al. 2017a ), raising an interesting question. Quite simply, if migration from coastal areas is to maintain a thermal envelope, why do some animals stay and others disperse? Recent evidence from beyond the Atlantic may provide some insights, with Finucci et al. ( 2021 ) revealing that sea surface temperature (SST) aligned poorly with basking shark habitat suitability. Taken together, these findings hint at a more nuanced role for temperature in dispersal, or distinct regional differences in behaviour. Within this overall context, we examined the movements and behaviour of basking sharks in the Northeast Atlantic displaying markedly different ‘overwintering’ strategies, namely, residency in temperate coastal seas and long-distance dispersal offshore. Using archived temperature, location and depth data relayed via animal-borne satellite transmitters, we initially questioned whether decreases in coastal water temperatures during late boreal summer triggered dispersal from coastal seas in the NE Atlantic. Subsequently, we explored how depth and location shaped the sharks’ realised thermal environment over time. Methods Device deployment site Searches for sharks were conducted during calm (Beaufort Force < 3) sunny conditions to maximise the likelihood of encounters. 
Basking sharks were tagged between the 26th of July and the 8th of August 2012 at Malin Head, Ireland (55.37° N, 7.40° W). This deployment time frame was chosen to maximise data recording across autumn and winter months when sharks are known to disperse from coastal seas around Great Britain and Ireland (Sims et al. 2003 ; Doherty et al. 2017a ; Dolton et al. 2019 ). The waters around Malin Head are a seasonal aggregation area (hotspot) for basking sharks in the North East Atlantic (Johnston et al. 2019 ). Water temperature at the deployment site was recorded year-round, for fisheries purposes by the Irish Marine Institute, using Tidbit temperature probes (location: 7.55028° W, 55.15845° N) deployed at 1-m and 11-m depths (Marine Institute Data Catalogue, 2012). To illustrate the seasonal change in water temperature at the deployment site, we plotted water temperature preceding the tag deployments (April–July/August), the dispersal phase (August) and the post dispersal periods (September–December) (Fig. 1 ). Fig. 1 Water temperature in the study area recorded by Tidbit temperature probe (location: 7.55028° W, 55.15845° N) at 1-m and 11-m depth during the months preceding the PATF tag deployments, dispersal and post dispersal periods. Red points indicate the PATF tag deployment dates for shark 1 (26 July 2012) and 2, 3 and 4 (8 August 2012). Black points show the approximate date of movement offshore (> 300 m) for shark 1 (22 August 2012) and shark 2 (26 August 2012) respectively. Bathymetry data (ocean base layer sources: Esri, GEBCO, NOAA, National Geographic, DeLorme, HERE, Geonames.org and other contributors) were used to determine the location of the west European coastal shelf (< 300-m-depth contour) with shelf edge and offshore oceanic habitat defined as greater than 300-m depth, after Huthnance et al. ( 2009 ).
Device deployment and recovery During tagging, the size of the shark was estimated with reference to the boat after Bloomfield and Solandt (2008) (i.e. 0–2 m; 3–4 m; 5–6 m; 7–8 m; 8 m+). A GoPro Hero with modified dive housing, attached to a 2-m fibreglass painters’ pole, was used to gather underwater footage of the genital area from which shark sex could be determined. Next, Wildlife Computers Mk 10 Pop-off Archival Transmitters with Fastloc GPS (PATF) tags (length: ~ 150 mm; weight in air: 100 g) were deployed onto sharks ( n = 5) by use of a 2-m fibreglass pole with epoxied applicator from the bow of a 7.4-m rigid inflatable boat (RIB). Stainless steel aviation wire tethers of 1.2-m length with 5-cm Wildlife Computers titanium (Ti) anchors were used to secure the device in the dorsal musculature to the rear of the shark’s dorsal fin. An additional high-density foam float (10-cm length and 5-cm diameter) was fitted mid-way along each tether to maximise surface time and dampen the effect of shark movements on the devices’ surface stability. The PATF tags are hybrid archival transmitters that record depth, water temperature (accuracy to 0.05 °C), light level (5 × 10⁻¹² W cm⁻² to 5 × 10⁻² W cm⁻²), and opportunistically generated Fastloc GPS and ARGOS locations when the transmitter is exposed at the surface. All of the tags were programmed to release from their host shark (pop-off) after 140 days of deployment via an electrically corrodible pin. The time interval of 140 days was chosen as a compromise between maximising the period of data collection, likelihood of data recovery and battery longevity (e.g. Musyl et al. 2011 ). A safety cut-off limit of 15% of battery power remaining was also set to ensure the tag popped off with sufficient power to transmit archived data via the ARGOS satellite network. Tags were labelled to aid opportunistic recovery by members of the general public should they ultimately be washed ashore.
Location data The PATF tags provided three different location data types, including the following: (i) Fastloc GPS locations (for a detailed description of determining positions from Fastloc GPS generated data, see Dujon et al. ( 2014 ) and Wensveen et al. ( 2015 )) that were transmitted via the ARGOS satellite network (data received from CLS via ARGOS Direct email service); (ii) ARGOS satellite-derived locations (for a detailed description of deriving positions from multiple ARGOS uplinks, see Costa et al. ( 2012 ) and Hoenner et al. ( 2012 )); and (iii) geolocation positions from light level data (for a detailed description of deriving positions from light level data, see Teo et al. ( 2004 ) and Braun et al. ( 2018 )). We used Wildlife Computers Geolocation Processing and modelling software (GPE3) to generate broad, latitude-derived surface areas of uncertainty (i.e. Lightloc locations) that were subsequently constrained by matching device-recorded surface events and water temperature readings (taken at night to avoid solar influence) with corresponding reference data for sea surface temperature and bathymetry to determine the most likely area of location. Historically, estimates of location in offshore areas (where the sharks ranged deeper into the water column) were likely to be associated with broader confidence intervals owing to a reduction in the quality of light data (see Doherty et al. 2017a ; Braun et al. 2018 ). However, the incorporation of ‘maximum swimming depth’ into the underlying GPE3 Hidden Markov models allows for location refinement by a process of exclusion (i.e. dive depth cannot exceed bathymetry). The resulting surface probability grids were further constrained by a predetermined animal speed parameter to eliminate candidate locations that would require biologically infeasible travel speeds. The recommended model parameter (Wildlife Computers) is 1.5–2 times the normal cruise speed of the animal.
Here, we used a maximum sustained cruise speed for basking sharks of 2 m s⁻¹ after Johnston et al. ( 2018 ) and a model parameter of 3.5 m s⁻¹. In addition, when accurate Argos and Fastloc positions were available, the GPE3 software used these as ‘anchor points’ to refine the trajectory of the track and improve the accuracy of the model. Maximum likelihood tracks were then plotted using the ggplot2 package in R (Wickham 2019 ). Depth and temperature data The PATF tags recorded pressure every second with an accuracy equating to ~ 10 cm (i.e. pressure was taken as a proxy for depth in metres) and water temperature every minute to a resolution of 0.05 °C. These raw data were summarised as minimum and maximum depth and minimum and maximum temperature, in four data files daily, covering the pre-determined time blocks: 00:00–06:00; 06:00–12:00; 12:00–18:00; 18:00–00:00. For each 6-h time period, the tag recorded the percentage of time the shark spent in five pre-defined temperature bins (< 9 °C, 9–12 °C, 12–15 °C, 15–18 °C, > 18 °C). The four data summary files were then aggregated and combined with available location data (Lightloc) for the 24-h period and subsequently compressed on board the tag (i.e. raw data were not available) for transmission post pop-off. Post pop-off, only a representative sample of the compressed data files is transmitted via the ARGOS network. To compare the depth and temperature ranges experienced by each shark, we first calculated moving averages for the maximum and minimum depth and temperature recorded during each 6-h summary period (e.g. 06:00 to 12:00). We used a moving average with an interval of 120 h to ease data visualisation. Given the summarised nature of the daily data records and the limitations of the representative sample transmitted via satellite, only partial amounts of data are recoverable. When missing data points were encountered, the moving average was calculated on the next available data point in the dataset.
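The smoothing described above can be sketched in a few lines. The published analysis was done in R; the Python version below is purely illustrative, with hypothetical temperature values. Each record is one 6-h summary block (20 records × 6 h = the 120-h window), and `None` stands in for a block missing from the transmitted sample, which is simply skipped so that the average runs over the next available data points.

```python
def moving_average(series, window=20):
    """Moving average over available (non-missing) 6-h summary records.

    Missing records (None) are skipped, so each window is built from the
    next available data points, mirroring the handling described above.
    """
    available = [v for v in series if v is not None]
    out = []
    for i in range(len(available) - window + 1):
        chunk = available[i:i + window]
        out.append(sum(chunk) / window)
    return out

# Hypothetical maximum temperatures (°C) with two missing 6-h blocks;
# a short window of 3 is used here only to keep the example readable.
records = [14.2, 14.5, None, 14.8, 15.0, None, 15.1, 15.3]
print(moving_average(records, window=3))
```

With the deployment data the window would be 20 observations, giving one smoothed value per ~120 h as plotted in Fig. 3.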
We built a Bayesian structural time-series model to compare the temperature profiles of offshore and coastal sharks. Time series intervention analysis can be used to estimate the causal impact of events or interventions on the trajectory of a time series. To accomplish this, the time series of interest is compared to a ‘control’ time series that has not been exposed to the same intervention. In our case, we were interested in exploring how dispersal offshore affected the realised thermal environment of sharks compared to those that remained in coastal environments. Using shark 3 as a control time series for remaining in coastal areas, we examined how movement offshore in sharks 1 and 2 impacted the minimum and maximum temperatures experienced. This analysis was completed using the CausalImpact function from the CausalImpact R package (Brodersen et al. 2015 ). All data processing and analysis were undertaken in R statistical computing software (R Core Team 2018 ), and visualised via ‘ggplot2’ (Wickham 2019 ). Results Deployments and data recovery Five medium-sized sharks (3–6 m) were successfully equipped with PATF tags off Malin Head, Ireland, between the 26th of July and the 8th of August 2012 (Table 1 ). Three of the tags functioned for the entire deployment period of 140 days, one prematurely popped off after 55 days (shark 4) (Table 1 ) and one tag did not transmit any data. The reason for the premature pop-off remains unknown, but the shark was at a depth of 37 m when this occurred, indicating it was not as a result of by-catch (i.e. removed from the shark when taken aboard). One tag (shark 3) was physically recovered post pop-off, resulting in the recovery of additional data beyond that received via transmission (depth and temperature summary files: n = 274 transmitted, n = 570 physically recovered; Lightloc fixes: n = 41 transmitted, n = 219 physically recovered; noting raw data are not archived).
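The intervention analysis outlined in Methods rests on a counterfactual: the pre-dispersal relationship between a treated series (an offshore shark) and the control (the coastal shark) predicts what the treated series would have looked like had the shark stayed coastal, and the post-dispersal effect is the gap between observation and that prediction. A minimal sketch of this logic follows, substituting an ordinary least squares fit for the Bayesian structural model used in the paper (CausalImpact in R); all temperature values are hypothetical.

```python
def fit_line(x, y):
    """Ordinary least squares fit of y = a + b*x; returns (a, b)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b

# Hypothetical daily maximum temperatures (°C): control = coastal shark,
# treated = a shark that disperses offshore at index t0.
control = [14.0, 14.2, 14.1, 14.3, 14.0, 13.8, 13.5, 13.2]
treated = [14.1, 14.3, 14.2, 14.4, 18.5, 19.0, 19.4, 19.1]
t0 = 4  # first post-intervention index

# Fit the pre-intervention relationship, then predict the counterfactual
# (what the treated shark would have experienced had it stayed coastal).
a, b = fit_line(control[:t0], treated[:t0])
counterfactual = [a + b * c for c in control[t0:]]
effect = [obs - cf for obs, cf in zip(treated[t0:], counterfactual)]
avg_effect = sum(effect) / len(effect)
print(f"average post-dispersal effect: {avg_effect:+.2f} °C")
```

CausalImpact additionally propagates posterior uncertainty through a Bayesian structural time-series model, which is what yields the one-sided tail-area probabilities reported in the Results.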
The remaining tags (sharks 1, 2 and 4) successfully transmitted a representative range of Lightloc locations and summary depth and temperature data recorded throughout their deployment periods (Table 1 ). This resulted in the recovery of 100% of the compressed data files for sharks 3 and 4 during their deployment and 85% and 75% of the compressed data files for sharks 1 and 2 respectively. A low number of Argos and Fastloc GPS positions were generated whilst the tags were on the sharks (Table 1 ), limiting the number of anchor points for the GPE3 model. Table 1 Details of sharks; transmitter deployment and pop-off dates and locations; number of data files (max-min temperature, time at temperature, time at depth) and location data received or recovered. Dispersal movements Within 5 days of transmitter deployment, sharks 1, 2 and 4 dispersed from the aggregation site in three different directions (Fig. 2 ), whilst shark 3 remained in close proximity to the aggregation site (maximum displacement recorded from the deployment site: 166 km) throughout the entire deployment period ( N = 140 days) (Fig. 2 ). Over the course of the deployments, the four sharks moved into, or remained in, two broadly definable habitat types: (i) offshore, including the shelf edge (> 300-m depth) (sharks 1 and 2) and (ii) coastal (< 300-m depth) (sharks 3 and 4) (Figs. 2 and 3 ). More specifically, sharks 1 and 2 displayed wide-ranging movements (maximum displacement recorded from deployment site for shark 1: 5004 km; shark 2: 2581 km) into offshore tropical (10° latitude) and subtropical shelf-edge waters (32° latitude) respectively. Shark 2 did not move back onto the coastal shelf off northern Spain; rather, it stayed in waters over 1000 m deep, except for a very brief transit across the edge of the coastal shelf as it left the Bay of Biscay and moved south into deeper waters.
In contrast, sharks 3 and 4 remained on the coastal shelf for their entire deployments (shark 3: N = 140 days; shark 4: N = 55 days). Furthermore, shark 4 moved south through the Irish Sea into the Celtic Sea to a known autumnal aggregation area (maximum displacement recorded from deployment site for shark 4: 652 km) where the water column is typically stratified (Stéphan et al. 2011 ) (Fig. 2 ). Fig. 2 GPE3-generated tracks for the four sharks illustrating dispersal routes from the aggregation site at Malin Head and subsequent movement patterns in the Eastern Atlantic. Background colour density indicates depth with darker patches representing greater depths. Ocean base layer sources: Esri, GEBCO, NOAA, National Geographic, DeLorme, HERE, Geonames.org and other contributors (Esri 2019 ). Fig. 3 Latitudinal movements recorded throughout respective deployments, colours represent deployment months; minimum and maximum depths occupied (6-h intervals) over the entire deployment with moving averages (MA) for each time block of 20 observations (i.e. approximately every 120 h); minimum and maximum temperatures recorded (6-h intervals) over the entire deployment with moving averages (MA) for each time block of 20 observations (i.e. approximately every 120 h); time spent in five predefined temperature bins expressed as a percentage of each 6-h interval. For instance, in the month of October shark 3 spends every 6-h interval in 9–12 °C (100%) whereas shark 1 regularly spends a portion of each 6-h interval in multiple temperature ranges. Sharks 1 and 2 are considered to be offshore when their maximum depths move below 300 m. Although limited in number, the Argos and Fastloc GPS anchor points generated whilst the tags were on the sharks (Table 1 ) allowed refinement of location estimates en route.
Furthermore, as all sharks spent considerable time in the photic zone (either constantly or cyclically), adequate light data were available for the GPE3 processing. Depth and temperature We observed that all sharks continued to range throughout the entire water column (Fig. 3 ) for the duration of the deployments. However, sharks that moved into offshore waters spent a greater proportion of time below 100 m (shark 1: 0.66 and shark 2: 0.54) than the sharks that remained on the coastal shelf (shark 3: 0.14 and shark 4: 0.02) (Fig. 3 ). The maximum depth recorded for each shark was as follows: shark 1: 1168 m; shark 2: 1168 m; shark 3: 264 m and shark 4: 280 m. Sharks 1 and 2 departed offshore (shark 1 on 22 August 2012; shark 2 on 26 August 2012) before water temperatures reached the seasonal peak (16.0 °C) on the 29th of August (Fig. 1 ). Thereafter, water temperatures were stable at approximately half a degree Celsius below the peak value (range 15.6–16.0 °C) for a period of 12 days before rapidly decreasing (Fig. 1 ). Moreover, water temperatures at the deployment site did not drop below the equivalent water temperature recorded on the day of the tag deployments until 24 and 17 days after sharks 1 and 2 had moved offshore, respectively. Sharks 1 and 2 that moved offshore into more southern latitudes experienced a wider temperature range (range 20.6 °C: 6.8–27.4 °C) than sharks 3 and 4 that remained on the coastal shelf (range 8.4 °C: 9.2–17.6 °C) (Fig. 4 ). Moreover, the sharks that moved offshore into southern latitudes experienced cooler minimum temperatures (shark 1: 6.8 °C; shark 2: 8.6 °C) than those concurrently recorded by sharks occupying higher latitudes (shark 3: 9.2 °C; shark 4: 9.8 °C) (Fig. 3 ). Sharks that moved offshore, sharks 1 and 2, experienced significantly higher maximum temperatures than shark 3 that remained in coastal waters (Bayesian one-sided tail-area probability p < 0.01).
Sharks 1 and 2 experienced an increase of 46% and 15% in maximum temperatures respectively. Shark 1 had a significant decrease of 10% in minimum temperatures (Bayesian one-sided tail-area probability p < 0.01), which, when combined with maximum temperature increases, resulted in a 115% increase in the temperature range (Bayesian one-sided tail-area probability p < 0.01). Shark 2 minimum temperatures were not significantly different from those of shark 3 even after moving offshore into warmer surface waters (Bayesian one-sided tail-area probability p = 0.439); however, shark 2 did experience a 33% increase in temperature range when compared to shark 3 (Bayesian one-sided tail-area probability p < 0.01). The contrast in temperature ranges experienced between offshore and onshore sharks reflects the underlying time-at-temperature profiles (Fig. 3 ). Fig. 4 Variance in the daily depth and temperature ranges for the four individual sharks over their entire deployment periods. Ranges were calculated by subtracting the minimum values from the maximum values over each 24-h period for depth and temperature respectively. Black lines represent median range values for the entire deployment with points representing outliers. Daily forays through the water column When sharks 1 and 2 moved off the coastal shelf, they began to undertake deep daily forays into the mesopelagic zone (Fig. 3 ). Thereafter, sharks 1 and 2 routinely moved into deep waters (~ 200–700 m) during the early morning (00:00–06:00 h), returning to shallower waters (~ 0–300 m) during the afternoon (12:00–18:00) (Figs. 5 and 6 ). Conversely, sharks that remained in coastal habitat displayed no apparent diel pattern in depth use (Fig. 6 ). Fig. 5 Boxplots representing minimum depth, maximum depth, minimum temperature and maximum temperature recorded during each of the 6-h intervals over their entire deployments. Black lines represent median values with boxes indicating the interquartile range.
Outliers are identified with black circles. Fig. 6 Sub-sections of dive profiles for offshore sharks 1 and 2 exemplifying deep daily forays into the mesopelagic zone. Minimum (green) and maximum (blue) depths every 6 h are shown for shark 1 from 25 August–31 August and for shark 2 from 13 to 18 September. Points represent the measurement for the preceding 6 h (i.e. 06:00 points show the minimum and maximum depths recorded from 00:00 to 06:00). Inset plots show full dive profiles for each shark with the section of the dive profile indicated by the black boxes. We compared minimum and maximum recorded daily temperatures for sharks 3 and 4 (coastal residents) over the period of 30 August to 9 October (i.e. before the tag on shark 4 prematurely detached) and found that shark 4 had experienced a wider temperature range than shark 3 (shark 3: 13.6–14.8 °C; shark 4: 11–17.6 °C), despite having similar depth ranges during this period (Fig. 7 ). Furthermore, the depth–temperature profiles for sharks 3 and 4 that remained in coastal waters indicate that the two sharks concurrently occupied different thermal habitats (Fig. 7 ) with shark 3 likely in a mixed coastal front whilst shark 4 was likely in a highly stratified water column. Fig. 7 (A) The location on the coastal shelf of sharks 3 and 4 between August 30 and October 9. (B) The contrasting changes in ambient minimum temperature experienced with depth by sharks 3 and 4 at their respective locations during their overlapping time period (August 30 and October 9). Points are slightly transparent to identify areas of high overlap. Smooth lines for each shark represent a loess fit and shaded areas around the line represent the standard error of the smooth. Minimum temperatures were recorded in depth bins of 8 m (e.g.
0–8 m, 8–16 m, 16–24 m) to a precision of 0.2 °C. Discussion In an ever-changing climate, identifying how water temperature shapes the distribution of marine life is fundamental for effective conservation on a regional and global scale (Rijnsdorp et al. 2009 ; Poloczanska et al. 2016 ; Campana et al. 2020 ; Payne et al. 2015). However, given the diverse physiologies amongst elasmobranchs (e.g. Watanabe et al. 2015), a ‘one-size-fits-all’ understanding of thermal range is not appropriate, with some species apparently preferring a narrow set of temperatures (e.g. tiger sharks—Payne et al. 2018 ) whilst others range widely across broad spans of temperature (e.g. white sharks—Boustany et al. 2002). Here, we explored whether the post-aggregation movements of basking sharks reflected the seasonal shift in water temperature in the NE Atlantic (i.e. summer-autumn). Our findings suggested that ‘decisions’ linked to dispersal or residency might be highly individualised (Shaw 2020 ), providing further evidence of individual variation in dispersal dynamics for basking sharks in this region (Doherty et al. 2017a ). For example, shark 3 remained exclusively in coastal habitat throughout the winter months, reaffirming that seasonal dispersal from high latitudes is not obligate (Doherty et al. 2017b ). Extending this argument, sharks that remained in coastal habitats were not compelled physiologically to move to warmer climes following autumnal decreases in water temperature. Likewise, the offshore movements by sharks 1 and 2 occurred at a time when the water temperature at the deployment site, in NE Atlantic coastal waters, was still increasing (Fig. 1 ). For basking sharks, identifying whether responses to temperature are regionally adaptive or consistent at a species level will help ‘future-proof’ management approaches within our changing climate (e.g. Senner et al. 2021; Thorburn et al. 2021; Lennox et al. 2021).
With regards to thermal envelopes, sharks in offshore habitat and more southern latitudes experienced a wider and more extreme range of temperatures (6.8–27.4 °C) than those sharks that remained in coastal habitat at higher latitudes (9.2–17.6 °C; Figs. 3 and 4 ). Thus, it is improbable that the reason for southerly movements was to remain within a constant temperature range year-round or to move to warmer waters overall. Indeed, sharks 1 and 2 off the coast of Africa routinely experienced cooler minimum temperatures during the winter than sharks residing off the coasts of Great Britain and Ireland at the same time (Fig. 3 ). These findings expand the known thermal range for basking sharks in the NE Atlantic from the 8.0–16.0 °C reported by Doherty et al. ( 2019 ) aligning more closely with studies from the NW Atlantic (4.2–29.9 °C; Braun et al. 2018 ). The salient point is that horizontal movements to southerly latitudes alone did not explain the differences in temperature experienced by coastal and offshore sharks (i.e. a 2D conjecture). Rather, it was a combination of location (Fig. 2 ) and the behavioural shift to deep forays in the offshore that led to the expansion of the realised thermal niche (Figs. 5 and 6 ). Separately, the sharks that resided continually in coastal habitats (sharks 3 and 4) also experienced markedly different temperature ranges during the autumn and winter (Fig. 4 ). Differences were driven by the degree of thermal stratification in the water column at their given locations (Fig. 7 ), which implies that neither residence nor dispersal behaviours served to maintain a constant temperature range over time. These findings again highlight the importance of sub-surface measures of temperature when investigating habitat association in deep-diving species (Edwards et al. 2019 ). Any discussions of habitat use must also account for the underlying bathymetry (Cogan et al. 
2009 ) as well as the conditions experienced below the surface (Curtis et al. 2014 ). For example, deep forays into the water column commenced once individuals moved beyond the shelf edge, with depth emerging as the key determinant of realised thermal niche (Figs. 3 and 6 ). The distinct periodicity of these forays by sharks 1 and 2 (Fig. 5 ) indicated that such behaviours were following a daily cycle (Fig. 6 ), mirroring the vertical distribution of mesopelagic scattering layers (e.g. ~ 400–600 m during day) in the North Atlantic (Klevjer et al. 2016 ). Deep foraging behaviour has been alluded to previously in the species (Sims et al. 2003 ; Braun et al. 2018 ; Doherty et al. 2019 ) although associations with specific mesopelagic prey remain unknown. This suggestion does not negate other reasons for extensive forays into the water column. For example, oscillatory and/or ‘yo-yo’ deep-diving and surfacing events can serve several functions (reviewed by Braun et al. 2022 ) such as conservation of energy during travel, detection of chemical cues, improving magnetic perception and thermoregulation (Nelson et al. 1997 ; Klimley et al. 2002 ; Doherty et al. 2019 ). Unravelling the significance of offshore behaviour (i.e. mesopelagic foraging) is timely given the emergence of regional elasmobranch conservation efforts in the NE Atlantic (Queiroz et al. 2019 ; Walls and Dulvy 2021 ). Ireland, as a member of the European Union, has a significant role to play in the management of migratory marine species such as the endangered basking shark (Sims et al. 2015 ) that reside within or frequent its expansive territorial waters (i.e. EEZ 880,000 km²). Here, we reiterate that offshore areas (traversing Ireland’s EEZ and beyond) likely constitute more than simple migratory pathways for the species (Doherty et al. 2017a ), further highlighting the requirement for dedicated study beyond well-established coastal hotspots (Sims 2008 ).
From a physiological perspective, the oscillating excursions from the surface to depth (Fig. 6 ) may serve a similar function to those reported for other large-bodied pelagic sharks (Queiroz et al. 2017 ) such as tiger sharks Galeocerdo cuvier (Nakamura et al. 2011 ), whale shark Rhincodon typus (Meekan et al. 2015 ) and the bluntnose sixgill shark Hexanchus griseus (Coffey et al. 2020 ) that are required to move to surface waters regularly to rewarm. Arguably, basking sharks’ large mass (Mathews and Parker 1950 ) and substantial stores of liver oil (Tsujimoto 1935 ) might dampen the rate of heat diffusion allowing them to temporarily access cold water prey at depth in open ocean areas, where surface prey fields are more depleted. Indeed, the concept of ‘thermal inertia’ in other large deep-diving sharks is well established (e.g. Carey et al. 1981 ; Kitagawa and Kimura 2006 ; Thums et al. 2013 ; Meekan et al. 2015 ; Howey et al. 2016 ) and warrants further attention in the species. In summary, our data from the NE Atlantic revealed no apparent link between the timing of offshore dispersal in basking sharks and water temperature, nor a sustained thermal envelope over time. Irrespective of latitude, depth use was the key determinant of thermal range (and minimum temperatures) within coastal and offshore areas. We avoid extrapolating these findings across the Atlantic and simply suggest that thermal responses might be regionally adaptive. Finally, from a management perspective, basking sharks in the NE Atlantic may possess the adaptive capacity to tolerate projected shifts in water temperature linked to climate change. However, this conjecture is premature without a clearer understanding of why basking sharks range so extensively and continuously throughout the world’s oceans. Data availability All data are stored on the secure ‘Pure’ server at Queen’s University Belfast maintained by central IT support. 
Data will be accessible in line with science publication protocols from the authors on request. Code availability Code will be accessible in line with science publication protocols from the authors on request.
If basking sharks were like Canadians, their migration habits might be easily explained: Head south to avoid winter's chill, and north again to enjoy summer's warmth. It turns out basking sharks are a more complex puzzle, Western biology professor Paul Mensink and his colleagues discovered while examining the seasonal movements of the enormous fish. Mensink and fellow researchers at Queen's University Belfast in Northern Ireland tracked four basking sharks—so large they can reach the size of a school bus—off the northernmost tip of Ireland to find out where they overwinter, and why. They affixed tags that would record the sharks' location, along with water temperature and depth. "The assumption was that they swim to Ireland's coastal waters for the summer, because of its really productive feeding areas, and then take off for the south in winter where it's warmer. We initially thought that water temperature must be the trigger for them to leave," Mensink said. "They blew our assumptions out of the water." Two of the four sharks, surprising the researchers, stayed put. The other two migrated south to tropical and subtropical waters off the African coast, as expected. But instead of lounging in warm waters there, these two spent much of their time in water that was deeper—and much colder—than if they'd stayed nearer to Ireland. Early each morning, these sharks would dive to depths of 200 to 700 meters, return near the surface near midday, then submerge to the chilly depths in the evening. "It was like clockwork," said Mensink, a marine ecologist who specializes in educational technology and is a teaching fellow in Western's Faculty of Science. "It's almost like the dive pattern of an animal that has to come up to the surface to breathe air—but, of course, they don't." During their dive cycle, the sharks put themselves through an extreme range of water temperatures: 27 C near the surface and a frigid 7 C in the depths. 
So why would any rational, cold-blooded creature do that? Pair of basking sharks in Inishtrahull Sound, Ireland. Credit: Emmett Johnston, Queen’s University Belfast "Ultimately, what we suspect is they're going down to feed in what we call the deep scattering layer, which is chock full of gelatinous zooplankton and other things they feed on," Mensink said. And at midday, they rise nearer the surface to bask in the warmth, as their name would suggest. Meanwhile, the sharks that remained in shallower waters near Ireland's coast experienced neither extreme cold nor warmth as winter water temperatures ranged from 9 C to 17 C. Unlike their offshore counterparts, the stay-at-home sharks showed little variation in depth during the six months the four animals were tracked. Their newly published study appears in the journal Environmental Biology of Fishes. As to why two lingered and the other two swam south, Mensink's team doesn't yet have an answer. Study co-author Jonathan Houghton of Queen's University Belfast said, "This study tempts us to think about basking sharks as an oceanic species that aggregates in coastal hotspots for several months of the year (most likely for reproduction), rather than a coastal species that reluctantly heads out into the ocean when decreasing water temperatures force them to." The findings are a reminder that research into ocean habitats and sea animals' habits must include both depth and distance, Mensink said. "We have to think about the ocean as a three-dimensional place." One important lesson of the study is that conservation of deep-ocean habitats is essential to the creatures' survival, said Queen's University Belfast researcher Emmett Johnston, lead author of the study. "Likewise, further evidence of individual basking sharks occupying Irish coastal waters year-round has significant implications for national and European conservation efforts."
Basking sharks are the second-largest fish in the ocean (behind whale sharks) and use their enormous mouths and gill-rakers to filter-feed on tiny organisms such as zooplankton. They are deemed a vulnerable species globally. In Canada, they have been designated an endangered species off the Pacific coast.

The team is collaborating on basking shark research as part of the major EU SeaMonitor project, which, with partners in Ireland, Northern Ireland, Scotland, Canada and the U.S., is developing a collective conservation strategy for wide-ranging marine species.

Mensink's teaching and research into basking sharks includes a plan to bring Western undergraduate students into the fishes' ocean habitat through an augmented reality app in March, when they will virtually immerse themselves in the ocean ecosystem and "swim" beside the creatures.
10.1007/s10641-021-01202-8
Computer
Researchers present a blueprint for building green
Natalie Voland et al, Public Policy and Incentives for Socially Responsible New Business Models in Market-Driven Real Estate to Build Green Projects, Sustainability (2022). DOI: 10.3390/su14127071
https://dx.doi.org/10.3390/su14127071
https://techxplore.com/news/2022-08-blueprint-green.html
Abstract
Responsibility Exobiology Studies and the Study of the History, Present, and Future of Life on our Planet Experience Economy in Times of Uncertainty Expert Systems: Applications of Business Intelligence in Big Data Environments Exploring of Sustainable Supplier Selection Exploring Sustainable Pathways: The Role of Carbon Trading for Climate Solutions in Industry 4.0 Exploring the Connection between Digital Communities, Sustainability and Citizen Science Exploring the Interoperability of Public Transport Systems for Sustainable Development Exploring the Workplace Practices that Foster a Sense of Purpose and Meaning in Life Extended Evolutionary Approaches to Sustainability Extension (and) Education for Sustainable Farming Systems External Thermal Insulation Composite Systems (ETICS): Sustainable Technology for the Growth of a Resource-Efficient and Competitive Economy Extractive Industries toward Sustainable Development Facets of Sustainability in Construction Informatics and Project Management Facing the Crisis: Sustainable Practices as Enablers of Business Resilience Fairness in Transport Faith and Sustainable Development: Exploring Practice, Progress and Challenges among Faith Communities and Institutions Family Business Model and Practices of Sustainability Farm Cooperatives and Sustainability Farming System Design and Assessment for Sustainable Agroecological Transition Fates, Transports, Interactions and Monitoring of Emerging Pollutants FDI and Institutional Quality: New Insights and Future Perspectives from Emerging and Advanced Economies Feature Paper on Sustainability Wastewater Management Finance and Agenda 2030: Building Momentum for Systemic Change—A Special Issue Coordinated by the Sustainable Finance Group of SDSN France Finance in a Sustainable Environment: Uncertainty and Decision Making Financial Implications of Sustainability. 
Linkages and Tensions between Natural, Social and Financial Capital Financial Innovation for Industrial Renewal Financial Markets in Sustainable Development Finding Common Ground. Conservation and Conflict Resolution/Prevention Fintech: Recent Advancements in Modern Techniques, Methods and Real-World Solutions Firm Responses to Sustainable Development Goals in the Context of the Digital Era Firm Size and Sustainable Innovation Management Fluid Power Components and Systems Food and Agroindustrial Waste Trends and Prospective towards Circular Bioeconomy Food Choice and Environmental Concerns Food Land Belts Food Security and Environmental Sustainability Food Security and Environmentally Sustainable Food Systems Food Sovereignty, Food Security, and Sustainable Food Production Food Systems Transformation and the Sustainable Development Goals in Africa Footprints on Sustainable Consumption and Production in Emerging Economies Forecasting Financial Markets and Financial Crisis Foresight Methodologies in Field of Sustainability Analyses Forms of Informal Settlement: Upgrading, Morphology and Morphogenesis Framework for Managing Sustainable Development From an Atomized to a Holistic Perspective for Safety Assessment of Structures and Infrastructures: Exploring Ecosystems From Eco-Design to Sustainable Product-Service Systems From Green Marketing to Green Innovation From Lean to Green Manufacturing From Rhetoric to Sustainability Research Impact: Sustainability Assessment, Methods and Techniques to Action the Sustainable Development Goals Frontier Information Technology and Cyber Security for Sustainable Smart Cites Frontiers and Best Practices in Bio, Circular, and Green Growth and Eco-Innovation Frontiers in Sustainable Agroecosystems Design and Management Frontiers in Sustainable Information and Communications Technology Future Design Future of Built Environment Seen from the Lens of Sustainability Science Gender and Rural Development: Sustainable Livelihoods in a 
Neoliberal Context Gender Diversity Across Entrepreneurial Leadership in Hospitality Gender in Sustainable Innovation Geological Storage of CO2 and Climate Control Geospatial Technologies and the 4th Industrial Revolution for Sustainable Urban Environment Geotechnical Stability Analysis for Sustainable Development of Infrastructure GIAHS and Community-Based Conservation in National Parks GIS and Linked Digitisations for Urban Heritage Global Energy System in Transition: Challenge Our Myths Global Warming Global Water Vulnerability and Resilience Globalisation in a VUCA Environment Globalization and Sustainable Growth Going Net Zero—Case Studies of How Firms Are Managing the Challenge of Ambitious Emissions Reduction Aspirations Governance, Power and Institutions and Overall Weaknesses of the SDG System: The Public Participation and the Role of Stakeholders Governing for Sustainability in a Changing Global Order Governing Green Energy Trade: Challenges and Opportunities Government Policy and Sustainability Grape Winery Waste: Sustainability and Circular Economy Gray Shades of Sustainability Issues in Organization Management Green Advertising Impact on Consumer Behavior Green and Sustainable Textile Materials Green Building Green Business: Opportunities and Challenges for Sustainability Green Chemistry and Biorefinery Concepts Green Chemistry for Environment and Health Green City Logistics Green Construction Supply Chain: Sustainable Strategy and Optimization Green Consumer Behavior, Green Products & Services, and Green Brands in the Tourism/Hospitality Industry Green Economy Research for Transformative Green Deals Green Economy, Ecosystems and Climate Change Green Energy and Low-Carbon Environment for Sustainable and Economic-Friendly Cities Green Energy and Tourism Policy for Sustainable Economic Growth Green Growth Policy, Degrowth, and Sustainability: The Alternative Solution for Achieving the Balance between Both the Natural and the Economic System Green 
Hydrogen Economics and Planning towards Carbon Neutrality Green Information Technologies Practices and Financial Performance: Emerging Issues Green Information Technology and Sustainability Green Infrastructure and Nature-Based Solutions in the Urban and Rural Context Green Infrastructure towards Sustainable Cities Green Innovations and the Achievement of Sustainable Development Goals Green IT and Sustainability Green Manufacturing Processes for Leading Industrial Sectors Green Materials in Sustainable Construction Green Practice in Data Storage Green Public Procurement in Civil Engineering at a Regional Level Green Stormwater Infrastructure for Sustainable Urban and Rural Development Green Supply Chain Management and Optimization Green Technology and Sustainable Development Green Technology Innovation for Sustainability Green Transformation of the Construction Industry through Project Management Green Transition and Waste Management in the Digital Era: Strategies, Learnings, Challenges and Future Trends Green Transition Paths under the Carbon-Neutral Targets: Policy Design, Digital Governance, and Technological Progress Green Urban Development Greening Behavior towards Carbon Neutrality Greenwashing and CSR Disclosure of Sustainability in Controversial Industries Groundwater Vulnerability and Sustainability Group Processes and Mutual Learning for Sustainability Hakka Tulou and Sustainability: The Greenest Buildings in the World Harmful Organisms and their Management for Sustainable Environment Health Geography—Human Welfare and Sustainability Healthy and Sustainable Cities by Day and Night. The Future of Research-Based Practice. 
Heating and Cooling: Mapping and Planning of Energy Systems High Precision Positioning for Intelligent Transportation System High-Strength Steels Welding—Sustainability Based Approach Household Sustainability Housing and Public Health How Consumer Behavior Patterns Change in a Pandemic Condition Human Dimensions of Conservation Research and Practice Human Exposure to Carbon Monoxide in Urban Regions of Asia and the Global South Human Oriented and Environmentally Friendly Lighting Design of Exterior Areas Human-Centered Design and Sustainability: Are They Compatible? Human-Computer Interaction and Sustainable Transportation Human-Cyber-Physical Systems (H-CPS) for Intelligent Civil Infrastructure Operation and Maintenance Hydrometallurgy of Metals from Primary and Secondary Resources in Sustainable Method and Process ICMTs for Sustainability in the Post COVID-19 Era: Revisiting Conceptual and Policy Narratives ICT Adoption for Sustainability ICT and Sustainable Education ICT Implementation toward Sustainable Education ICT4S— ICT for Sustainability IEIE Buildings (Integration of Energy and Indoor Environment) Impact of Climate Change on Urban Development Impact of COVID-19 and Natural Disasters: Energy, Environmental, and Sustainable Development Perspectives Impact of Industry 4.0 Drivers on the Performance of the Service Sector: Human Resources Management Perspective Impact of Management Changes on Seminatural Grasslands and Their Sustainable Use Impact of Social Innovation on Sustainable Development of Rural Areas Impact of the Tax Systems, Tax Administration and Administrative Burdens on Sustainable SMEs’ Performance Impacts of Climate Change on Cultural Landscapes and Strategies for Adaptation Impacts of Climate Change on Tropical Cyclone Activities: Cloud–Radiation Feedback and Dynamics Implementation of Sustainable Technologies for the Transition towards a Circular Economy Model Implementation of the Sustainable Development Goals (SDGs) Implications of the 
COVID-19 Pandemic for Future Urban and Spatial Planning Improving Governance of Tenure: Progress in Policy and Practice Improving Life in a Changing Urban Environment through Nature-based Solutions and Biophilic Design Improving Sustainability Performance of Physical Assets with Green Approaches and Digital Technologies In Quest for Environmental Sustainability: Microorganisms to the Rescue Incorporating Sustainable and Resilience Approaches in Asphalt Pavements Independence and Security of Energy Supply: A Current Issue of Vital Importance Indigenous Peoples and Sustainable Development in the Arctic Indigenous Transformations towards Sustainability: Indigenous Peoples' Experiences of and Responses to Global Environmental Changes Indoor Air Pollution and Control Inductive Charging for Electric Vehicles: Towards a Safe and Efficient Technology Industrial Automation: Realising the Circular Economy through Autonomous Production Industrial Engineering for Sustainable Industry Industrial Sustainability: Production Systems Design and Optimization across Sustainability Industry 4.0 – Implications for Sustainability in Supply Chain Management and the Circular Economy Industry 4.0 and Artificial Intelligence for Resilient Supply Chains Industry 4.0 and Industrial Sustainability Industry 4.0 Implementation in Food Supply Chains for Improving Sustainability Industry 4.0, Digitization and Opportunities for Sustainability Industry 4.0, Internet 3.0, Sustainability 2.0 - Fostering New Technological Paradigms for Resilience Industry 4.0: Quality Management and Technological Innovation Industry 4.0: Smart Green Applications Industry Development Based on Deep Learning Models and AI 2.0 Influence of Emotions and Feelings in the Construction of Digital Hate Speech: Theoretical and Methodological Principles for Building Inclusive Counter-Narratives Information Sharing on Sustainable and Resilient Supply Chains Information Society and Sustainable Development – selected papers from the 
2nd International Scientific Symposium`2015 Information Society and Sustainable Development—Selected Papers from the 5th International Conference ISSD 2018 Information Society and Sustainable Development—Selected Papers from the 6th International Conference ISSD 2019 Information System Model and Big Data Analytics Information Systems and Digital Business Strategy Information Systems and Sustainability Information Systems for Sustainable Development Information Technology in Healthcare and Disaster Management Information, Cybersecurity and Modeling in Sustainable Future Innovating in the Management and Transparency of the Sustainability of Governments to Achieve the Objectives of Sustainable Development Innovating Practice and Policy for Sustainable Pest Management Innovation and Environmental Sustainability Innovation and Governance in the Global Energy Transition Innovation and Sustainability in a Turbulent Economic Environment–Selected Papers from the 12th International Conference on Business Excellence Innovation Development and Sustainability in the Digital Age Innovation Ecosystems: A Sustainability Perspective Innovation for Sustainability Development Innovation in Engineering Education for Sustainable Development Innovation in the European Energy Sector and Regulatory Responses to It Innovation Management in Living Labs Innovation Strategies and Sustainable Development: Tensions and Contradictions on the Way to Sustainable Well-Being Innovation, Emerging Technologies and Sustainability in R&D Intensive Industries – Volume 2 Innovation, Emerging Technologies and Sustainability in R&D Intense Industries Innovations in Façade Design and Operation for Healthy Urban Environments Innovations in Small Businesses and Sustainability Innovations in the Circular Economy: Commons or Commodity? 
Innovations Management and Technology for Sustainability Innovative Advances in Monitoring, Control, and Management of Microgrids Innovative and Sustainable Design for Mechanics and Industry Innovative Design, Technologies, and Concepts of Commercial Wind Turbines Innovative Development for Sustainability in Water Constrained Regions Innovative Economic Development and Sustainability Innovative Management Practice for Resilience and Sustainability of Civil Infrastructures Innovative Solution for Sustainable and Safe Maritime Transportation Innovative Technology for Sustainable Anticipatory Computing Computing Innovative Training Sustainability in an Uncertain Information and Knowledge Society Institutions and Policies for Rural Land Conversion in the Quest for Sustainability Insurtech, Proptech & Fintech Environment: Sustainability, Global Trends and Opportunities Integrated Approaches to Biomass Sustainability Integrated Evaluation of Indoor Particulate Matter (VIEPI) Project: Study Design, Results and Open Questions Integrated Migration Management, ICTs' enhanced Responses and Policy Making: Towards Human Centric Migration Management Systems Integrated Pest Management and Risk Assessment of Biopesticides Integrated Reporting and Corporate Sustainability Integrating Green Infrastructure, Ecosystem Services and Nature-Based Solutions for Landscape Sustainability Integrating Sensors, AI, and Biodiversity Indices for Sustainable Farming and Livestock: Evaluating Environmental Impacts, Efficiency, and Life Cycle Assessment Integration and Optimization of Smart Mobility for Sustainable Rural Electrification Integration of Green ICTs and Industry into Green Governance for a Sustainable Ecosystem Integration of LCA and BIM for Sustainable Construction Intellectual Capital and Sustainability Intellectual Capital and Sustainability Intellectual Property Strategies and Sustainable Business Models for Sustainability Transitions Intelligent Algorithms and Systems for C-ITS 
and Automation in Road Transport Intelligent Knowledge-Based Models for Sustainable Spatial Planning and Engineering Intelligent Manufacturing for Sustainability Intelligent Networking for Sustainable Environment and Human-Natural Systems Intelligent Sensing for Sustainable Production Industries Intelligent Sensing, Control and Optimization for Sustainable Cyber-Physical Systems Intelligent System and Application Improving Enterprise’s Sustainable Development Intelligent Transportation Systems (ITS), Traffic Operations and Sustainability Intensification of Digitization Tools, Their Development and Applicability Intention and Tourism/Hospitality Development Interactive Learning Environments in Student’s Lifelong Learning Process: Framework for Sustainable Development Goals of the 2030 Agenda Intercropping Systems and Pest Management in Sustainable Agriculture Interdisciplinary Approaches to Sustainability Accounting and Management: Selected Papers from the 4th EMAN Africa Conference on Sustainability Accounting & Management (including related invited papers) International Business Theories and Internationalization of Emerging Economies International Entrepreneurship and Innovation International Finance and Money Market International Fisheries Policy and Economic Analysis International Migration and Sustainable Development: Globalization, Move-In Move-Out Migration and Translocal Development Internet Finance, Green Finance and Sustainability Internet of Things: Towards a Smart and Sustainable Future IoT and Computational Intelligence Applications in Digital and Sustainable Transitions IoT and Sustainability IoT Data Processing and Analytics for Computational Sustainability IoT Learning for the Future of Online Engineering Education IoT Quality Assessment and Sustainable Optimization ISMO—Sustainability in Engineering and Environmental Sciences IT-Enabled Sustainability and Development Just Food System Transformations Karst and Environmental Sustainability Knowledge 
Management and Sustainability in the Digital Era Knowledge Management, Innovation and Big Data: Implications for Sustainability, Policy Making and Competitiveness Knowledge-based Systems and Sustainability Land Teleconnection, Governance and Urban Sustainability Land Use/Cover Drivers and Impacts: New Trends and Experiences from Asia Land, Water, Food, Energy (LWFE) Security NEXUS for Sustainable Ecosystems and Natural Resource Management Landscape and Sustainability Landscape Monitoring, Ecosystem Services and Sustainable Development Large-Scale Systems: Sustainable Economy and Transport in the Modern World Latest Applications of Computer Vision and Machine Learning Techniques for Smart Sustainability Latest Developments and Challenges in MCDM Theory, Models, and Applications Law and Sustainability in Global Value Chains Leaders and Team Members’ Perceptions of Cooperation at Work Leadership and Sustainable Human Resource Management in the Tourism and Hospitality Industry Leadership, Occupational Stress and Sustainable Operations: Multinational Perspective Lean and Green Manufacturing Lean Design and Building Information Modelling Lean Six Sigma for Manufacturing Sustainability: Present Innovation and Future Prospects Learning Design for Sustainable Education Development Learning, Resilience, and Employability in Organisational Sustainability Less Impact, More Resilience and Welfare: Computer and Electronics to Improve the Sustainability in Agriculture Production Leveraging Digitalization for Advanced Service Business Models: Challenges and Opportunities for Circular Economy Life Assessment and Dynamic Behavior of Components Life Cycle Assessment and Environmental Footprinting Life Cycle Assessment for Sustainable Waste Management Strategies Life Cycle Assessment in Sustainable Products Development Life Cycle Assessment of Agri-Food Products Life Cycle Assessment on Green Building Implementation Life Cycle Assessment, a Tool for Sustainability and Circular Economy 
Life Cycle Sustainability Assessment Life Cycle Sustainability Assessment: Implementation and Future Perspectives Light and Industry Lighting the Way for Retail Design: Interactions, Trade-Offs and ROI’s Live Well, Live Long: Strategies for Promoting Physical Activity as a Healthy Habit among University Students Living in a Changing Climate: Everyday Knowledge and Everyday Lives Local and Global Threats to Rural and Peri-Urban Cultural and Natural Landscapes Local Development Initiatives and Sustainable Employment Policies Local Heritage and Sustainability Logistics and Sustainable Supply Chain Management (Series) II Looking at Strategic Plans of Universities: Sustainable Education, Innovative and Collaborative Learning Looking Back, Looking Ahead: Environmental Dispute Resolution and Sustainability Looking beyond Sustainability: Selected Papers from the 9th World Sustainability Forum (WSF 2021) Low Carbon Development for Emerging Markets Low CO2 Concrete Low-Carbon Affordable Houses for Sustainable Societies Machine Learning and Data Mining Techniques: Towards a Sustainable Industry Machine Learning and Robots for Sustainability Machine Learning Methods and IoT for Sustainability Machine Learning Techniques in Designing the Efficient Platforms for the Internet of Behaviors (IoB) Machine Learning with Metaheuristic Algorithms for Sustainable Water Resources Management Machine Learning-Enabled Radio Resource Allocation for Sustainability of Wireless Engineering Technologies Machine Learning, IoT and Artificial Intelligence for Sustainable Development Macroprudential Policy, Monetary Policy, and Financial Sustainability Maladaptation to Climate Change Management Approaches to Improve Sustainability in Urban Systems Management of Freshwater Fisheries in the XXI Century: Perspectives, Approaches and Challenges within a Sustainability Framework Management of Sustainable Development with a Focus on Critical Infrastructure Managerial Decision Making: A Sustainable 
Behavioral Approach Manufacturing and Maintenance Manufacturing and Management Paradigms, Methods and Tools for Sustainable Industry 4.0 oriented Manufacturing Systems Marketing and Business Motivations for Implementing Sustainability Marketing of Innovation, Science and Technological Change Marketing Strategies for Sustainable Product and Business Development Mass Timber and Sustainable Building Construction Materials Properties and Engineering for Sustainability Math Education and Problem Solving Mathematical and Data-Driven Tools to Measure Sustainability Maximizing the Potentials of Unmanned Aerial Vehicles (UAVs) in Sustainability Measuring Progress towards the Achievement of Sustainable Development Goals (SDGs) Measuring Socio-Economic Well-Being Mechanism, Evaluation and Early Warning of Coal–Rock Dynamic Disasters Mechanisms Involved in Sustainable Metabolism of Legume Plants under Biotic and Abiotic Stresses Mechanisms, Technologies, and Policies for Carbon Peaking, Carbon Neutral Processes Medical Education: The Challenges and Opportunities of Sustainability Medium Sized Cities and Their Urban Areas: Challenges and Opportunities in the New Urban Agenda Mergers and Acquisitions Processes and Sustainability Methodological Advances in Research on Sustainable Ecosystems Methodological Aspects of Solving Sustainability Problems: New Challenges, Algorithms and Application Areas Methodologies and Applications of Multiple Criteria Decision Making for Sustainable Development Methods, Tools, Indexes and Frameworks in Sustainability Assessment Microbial Populations and Their Interactions in Agroecosystems: Diversity, Function and Ecology Microbial Resources and Sustainable Remediation Mineral and Microorganism Interactions for Sustainability Mobile and Personalized Learning for Sustainable Development and Education Mobile Communications and Novel Business Models Mobile Networks and Sustainable Applications Mobility for Sustainable Societies: Challenges and 
Opportunities Modeling and Simulation Formalisms, Methods, and Tools for Digital-Twin-Driven Engineering and Sustainability-Led Management of Complex Systems Modelling and Analysis of Sustainability Related Issues in New Era Modelling and Simulation of Human-Environment Interactions Modelling Approaches to Support Decision Making Modelling of Industrial Processes Modelling the Economic, Social and Environmental Components of Natural Resources for Sustainable Management Modern Statistical Techniques and Sustainability Studies: Selected Papers from the 9th International Conference on PLS and Related Methods (PLS'17) Modernization and Sustainability of Urban Water Systems Monitoring and Intervening with Adolescent Green Attitudes and Values Monitoring and Modelling Techniques for Sea Environment and Sustainable Development Monitoring Arctic Sustainability: Methods, Indicators, Monitoring Systems and Experiences Moving Forward to the Paris Agreement Warming Targets Moving toward Sustainability: Rethinking Gender Structures in Education and Occupation Systems Moving towards Maturity in Sustainable Human Resource Management Multi-disciplinary Sustainability Research Multi-Objective and Multi-Attribute Optimisation for Sustainable Development Decision Aiding Multi-Scale Integrated Energy Management in the Built Environment Multicriteria Decision Analysis and the Sustainability of Public Systems Worldwide Multidimensional Sustainable Development in Higher Education Institutions Multidisciplinary Approaches to Multilingual Sustainability Multifunctional Coatings Innovating in the United Nations 17 Goals on Sustainable Development Multiple Criteria Decision Making for Sustainable Development Municipal Solid Waste Management and Environmental Sustainability Nano- and Micro-Contaminants and Their Effect on the Human and Environment Nanobiotechnology Approach for Sustainable Agriculture Nanomineral and Their Importance on the Earth and Human Health: A Real Impact National 
Economy Sustainable Mega-Events Sustainable Microgrids for Remote, Isolated and Emerging Areas: Current Trends and Perspectives in Policies, Practices and Technologies Sustainable Mining and Circular Economy Sustainable Mobility and Transport Sustainable Nanocrystals Sustainable Nuclear Energy Sustainable Operations and Supply Chain Management for Small Businesses and Multinational Corporations Sustainable Operations and Supply Chain Management: Evolution and Future Trends Sustainable Organic Agriculture for Developing Agribusiness Sector Sustainable Outdoor Lighting Sustainable Pavement Materials Sustainable Pavement Materials and Technology Sustainable Perspectives: Green Operations Management and Supply Chain Sustainable Perspectives: Green Supply Chain and Operations Management Sustainable Perspectives: Renewable Energy Policy and Economic Development Sustainable Physical Activity and Student’s Health Sustainable Physical Activity, Sport and Active Recreation Sustainable Pig Production Sustainable Planning and Preparedness for Emergency Disasters Sustainable Planning of Urban Regions Sustainable Planning, Management and Economics in Transport Sustainable Plant Responses to Abiotic and Biotic Stresses Sustainable Policy on Climate Equity Sustainable Portfolio Management Sustainable Power Supply in Emerging Countries Sustainable Power System Operation and Control Methodologies Sustainable Practices in Watershed Management and Ecological Restoration Sustainable Processes Development in BioTechSciences Sustainable Product Design and Manufacturing Sustainable Product-Service Systems (PSS) Solutions for Product Development and Management under Governmental Regulations Sustainable Product-Service Systems in Practice Interdisciplinary Perspectives Sustainable Production and Manufacturing in the Age of Industry 4.0 Sustainable Production in Food and Agriculture Engineering Sustainable Production of Renewable Bioenergy Sustainable Production, Consumption, and Policy 
Applications of Life Cycle Assessment Sustainable Productive Systems – Assessing the Past, Envisioning the Future Sustainable Project Management Sustainable Public Health: Economic and Environmental Performance of the Healthcare Industry Sustainable Public-Private Partnerships for Future-Proof Efficient Assets Sustainable Re-manufacturing Sustainable Regional and Urban Development Sustainable Regional Development: The Social, Environmental and Economic Challenges and Solutions Sustainable Research on Renewable Energy and Energy Saving Sustainable Retailing & Brand Management Sustainable Reuse of Historical Buildings Sustainable Risk Assessment Based on Big Data Analysis Methods Sustainable Rural Community Development and Environmental Justice Sustainable Rural Economics Development in Developing Countries Sustainable Rural Landscape: Study, Planning, and Design Sustainable Safety Development Sustainable Security Management and Analysis of Engineering and Information by Data-Driven Sustainable Solar Thermal Energy Use and Solar Thermal System Sustainable Solutions for Improving Safety and Security at Crossroads, Junctions and Level-Crossings Sustainable Spatial Planning and Landscape Management Sustainable Sport and Physical Activity Education Sustainable Strategic Operations and Management in Business Sustainable Strategies and Technologies for Wastewater Management Sustainable Structures and Construction in Civil Engineering Sustainable Supply Chain and Lean Manufacturing Sustainable Supply Chain and Logistics Management in a Digital Age Sustainable Supply Chain Management for Process Industry Sustainable Supply Chain Management in the Fashion Industry in the Aftermath of COVID-19 Sustainable Supply Chain System Design and Optimization Sustainable Systems Analysis for Enhanced Decision Making in Business/Government Sustainable Territorial Development Sustainable Tourism and Climate Change: Impact, Adaptation and Mitigation Sustainable Tourism and Hospitality 
Management Sustainable Tourism Experiences Sustainable Tourism Management under Challenge from Climate Change and Economic Transition Sustainable Tourism Strategies in Pandemic Contexts Sustainable Tourism: Issues, Debates and Challenges Sustainable Transformation through Information Systems Use, Design and Development Sustainable Transport and Air Quality Sustainable Urban and Regional Management Sustainable Urban Development Sustainable Urban Development and Regional Management Sustainable Urban Development and Strategic Planning Sustainable Urban Landscape Design for Well-being Sustainable Urban Mining Sustainable Urban Planning and Design Education in Practice Sustainable Urban Stormwater Management Sustainable Urban Transitions: Towards Low-Carbon, Circular Cities Sustainable Urban Transport Policy in the Context of New Mobility Sustainable Urbanization Strategies in Developing Countries Sustainable Use of Biocontrol Agents Sustainable Value Co-Creation Sustainable Venture Capital and Social Impact Investment Management Sustainable Virtual Organization: Management Challenges and Development Perspectives Sustainable Wastewater Management and Water Demand Analysis Sustainable Wastewater Treatment by Biotechnologies and Nanotechnologies Sustainable Water–Energy–Food Nexus Sustainable Wildlife Management and Conservation Sustainable Wind Power Development Sustaining Suburbia: Reassessing the Policies, Systems, and Form of Decentralized Growth Synergies between Quality Management and Sustainable Development Synergies of Soft Computing, Artificial Intelligence and Signal/Image Processing Techniques in the Advancement of Sustainable Technologies. 
System-wide Disruption of Organisations for Sustainability Systems Engineering for Sustainable Development Goals Tackling the Complexities of the Pearl Industry to Enhance Sustainability Tall Buildings Reconsidered Technological Innovation and the Effect of Employment on Green Growth Technologies and Innovations for Sustainable Growth Technologies and Innovations for Sustainable Storage and Transportation of Oil and Gas Technologies and Models to Unpack, Manage Inventory and Track Wasted Food towards Sustainability Technologies for Developing Sustaining Foods for Specialized Missions Technologies for Sustainability in Smart Cities Technology and Innovation Management in Education Technology Assessment, Responsible Research and Innovation, Sustainability Research: Conceptual Demands and Methodological Approaches for Societal Transformations Technology Enhanced Learning Research Technology, Organisation and Management in Sustainable Construction Telework and Its Implications for Sustainability Terrestrial Ecosystem Restoration Textile Technologies in Sustainable Development, Production and Environmental Protection The 1st International Conference on Future Challenges in Sustainable Urban Planning & Territorial Management—SUPTM 2022 The 4th Industrial Revolution, Financial Markets and Economic Development The Adaptive Reuse of Buildings: A Sustainable Alternative towards Circular Economy Solutions The AI-Augmented Smart Transformation for Sustainable Governance The Application of Communication Technology in Smart Residential Communities The Art and Science of Economic Evaluation and Maintenance Planning for Airports The Arts, Community and Sustainable Social Change The Challenge of Food Waste Reduction to Achieve More Sustainable Food Systems The Circular Economy as a Promoter of Sustainability The Close Linkage between Nutrition and Environment through Biodiversity and Sustainability: Local Foods, Traditional Recipes and Sustainable Diets The Competitiveness and 
Sustainability of Global Agriculture The Contribution of Sustainable Businesses to achieve the Agenda 2030 and the Sustainable Development Goals The Contribution of the Project Management to the Sustainable Development Goals (SDGs) The Contribution of the Social Economy to the Sustainable Development Goals The Current and Future Role of Public Transport in Delivering Sustainable Cities The Deployment of IoT in Smart Buildings The Eco-Philosophy of an Organic Community The Economics and Ethics of Sustained Individual and Food System Resilience The Effects of COVID-19 Pandemic on Engineering Design and the Sustainability of the New Developed Products The Energy-Sustainability Nexus The Environmental and Economic Sustainability in Building Construction The Environmental Effects from Consumer Behaviour in the Contexts of the Circular Economy and the Sharing Economy The Evolution of Social Innovation: Building a Sustainable and Resilient Society through Transformation The Future and Sustainability of Financial Markets The Future of Interior Lighting is here The Future of Maritime Industry: How Climate Change and Other Environmental Challenges Will Impact on New Market Developments The Gender Dimension in Sustainability Policies and Their Evaluation The Global Jatropha Hype—Drivers and Consequences of the Boom and Bust of a Wonder Crop The Governance of Social Innovation for a Sustainable Economy: Requirements, Actors and Approaches The Human Dimensions of Coastal Adaptation Strategies The Human Side of Sustainable Innovations The Imbalances in the Urban Growth of 21st Century Cities: Case Studies, Innovative Approaches and New Emerging Phenomena The Impact of Audio-Visual Content on Sustainable Consumer Behavior The Impact of COVID-19 Pandemic on Sustainable Development Goals The Impact of Digitalization on the Quality of Life The Impact of Economic Complexity and Trading Complex Products on Environmental Indicators The Impact of Global Change on Biological Control of 
Pest in Agriculture The Impact of Plant Genome Editing The Impacts of Climate Changes: From Sustainability Perspectives The Importance of Wetlands to Sustainable Landscapes The Influence of Covid-19 on Sustainable and Financial Analysis in Public Administrations The Informatization of Agriculture The Involvement of Crowds for Advancing Knowledge, Promoting Innovation and Nurturing New Entrepreneurial Ventures The Key to Sustainable Manufacturing Enterprise The Link between Tourism, Agriculture and Sustainability in Special Green or Ecological Zones The New Era of Sustainable Public Procurement The Nexus between Information and Communication Technologies (ICT) and Sustainability: The Current Digital Bet The Oil and Gas Industry and Climate Change: Role and Implications The Planetary Wellbeing Initiative: Pursuing the Sustainable Development Goals in Higher Education The Political Economy of Home: Settlement, Civil Society and the (Post-)Global Eco-City The Potential and Contradictions of Mega Infrastructure Development Projects in Contributing to Sustainable Development in Rapidly Growing Countries and Regions The Provision of Ecosystem Services in Response to Habitat Change The Quality of Urban Areas: New Measuring Tools and Methods, Impact on Quality of Life and Costs of Bad Design The Realistic Sustainable Management of Development Operations in Africa The Remediation and Re-qualification of Contaminated Sites The Rise of Domestic Tourism and Non-travelling in the Times of COVID-19 The Role of Artificial Intelligence in Sustainable Development The Role of Communication in Sustainable Development The Role of Corporate Mergers and Acquisitions in Enhancing Environmental Sustainability The Role of Public Policy in Managing and Ensuring the Sustainability of Public and Private Organizations The Role of Sustainable Infrastructure in Climate Change Mitigation and Community Resilience The Role of Underutilized Crops in Sustainable Agriculture and Food-Systems The 
Specific Role and Value of Accounting within the Private Firm Context The Sustainability of Fishery and the Aquacultural Sector The Sustainability of Social Media Research The Sustainability of the Welfare State The Transition to a Low-Carbon, Smart Mobility in a Sociotechnical Context The Valorization of the Cultural Heritage and Landscape as the Entrance Point for the Circular City Strategy The Value Generation of Social Farming Thermal Management of Urban Subsurface Resources Through the Lens of Telecoupling: New Perspectives for Global Sustainability Tolerance Management in Architecture, Engineering and Construction Tools for Potential Impact Analysis Due to CAVs and ITS Operation: Traffic Microsimulation, Neural Network and Fuzzy Logic Techniques Tools, Methodologies and Techniques Applied to Sustainable Supply Chains Tourism and Sustainability: Combining Tourist’s Needs with Destinations’ Development Tourism Development, Economic Prosperity and Environmental Sustainability Toward a Circular Economy in the Agro-Industrial and Food Sectors Toward a Sustainable Transportation Future Toward a Sustainable Wellbeing Economy Toward Sustainability: Design Techniques in Service Sector Toward Sustainable 6G Wireless Communication Systems Toward Sustainable Environmental Quality – Toxicology Sustainability Towards a Circular Housing Economy Towards a Sustainable Life: Smart and Green Design in Buildings and Community Towards Circular Economy: Evaluation of Waste Treatment Towards Healthy and Sustainable Built Environments Post-2020 Towards Resilient Entrepreneurship and Technological Development in Self-Sustainable Economies Towards Sustainability: Selected Papers from the Fourth World Sustainability Forum (2014) Towards Sustainability: Selected Papers from the Third World Sustainability Forum (2013) Towards Sustainable and Innovative Development in Rural Areas Towards Sustainable Building & Infrastructure Operations & Maintenance (O&M) Towards Sustainable Development 
of National and Supranational Systems—Political and Legal Challenges Towards Sustainable Engineering: New Technologies and Methodologies Towards Sustainable Tourism: Pros and Cons of Geotourism Towards True Smart and Green Cities? Traditional Knowledge, Revitalization, and Sustainability Traditional Landscapes—from the Past to the Sustainable Future (Factors and Trends of Landscape Functions and Services Provision towards the 21st Century) Tragedy or Transcendence: Reflections on 'The Tragedy of the Commons' Transboundary Sustainable Mountain Governance Transformation to Sustainability and Behavior Change Transformations for a Sustainable Future Transformative Processes for a Circular Economy Transformative Times for Food Consumption: Moving towards Sustainability Transforming Built Environments: Towards Carbon Neutral and Green-Blue Cities Transforming Materials Industries for a Sustainable Future Transition from China-Made to China-Innovation Transitioning Household Energy towards Sustainability Transportation and Sustainability Transportation Network Modelling and Optimization for Sustainability Transportation Operations and Safety Analysis for Sustainable Networks Trends and Challenges in the Management of Technological Projects for the Sustainable Development in the Digital Revolution Era Trends in Sustainable and Ethical Food Consumption Trends in Transport Sustainability and Innovation Trends in Waste Utilization in Construction Trends, Environmental Implications, Recent Obstacles and Solutions in the Sustainable Growth of the Renewable Energy Integrated Power Sector Trust Management: Key Factor of the Sustainable Organizations Embedded in Network Ubiquitous Green IT System for Sustainable Computing Uncertainty in Prospective Sustainability Assessment Understanding Arctic Sustainability Challenges from Systems Perspective Understanding Innovation and New Venture Creation in Reducing Poverty Understanding Sustainable Human Resource Management Understanding 
the Economic Value of Nature Base Solutions (NBS) towards Sustainable Built Environments Uninterruptible Power Supplies (UPS) Universities, Industries and Sustainable Development University Management Innovations toward Meeting the Sustainable Development Goals Urban Heat Island Urban Pathways: Transition towards Low-Carbon, Sustainable Cities in Emerging Economies Urban Planning and Smart City Decision Management Urban Planning and Social Well-being Urban Political Ecology: The Uneven Production of Urban Space and Its Discontents Urban Regeneration and Ecosystem Services Assessment Urban Regeneration and Sustainability Urban Sprawl and Energy Efficiency: The Relevance of Urban Form in the Environmental Sustainability of Cities Urban Sustainability and Planning Support Systems Urban Sustainability in Historic Cities Using Applied Statistics and Multivariate Data Analysis in the Challenge to Solve Current Real-World Problems Using Life Cycle Thinking and LCA Data in Interdisciplinary Research on Production and Consumption Systems Using Project Management as a Way to Sustainability Using the Psychosociocultural Approach to Academic Persistence and Educational Wellness Valorisation of Waste from Non-Dairy Plant-Based Beverages Values and Housing Vehicular Networks and Sustainability Ventilation and Air Distribution Methods to Promote above Ground and Underground Built Environment Sustainability Virtual and Augmented Reality Learning Environments for Sustainable Development Vision 2030 in Saudi Arabia: Robust Research, Sustainable Development, Limitless Innovation and Economic Boost Visions, Values and Principles for a Sustainable Circular Economy Visual Landscape Research in Sustainable Urban and Landscape Planning Visualising Landscape Dynamics Walkable living environments Warehouse 4.0: Best Practices, Opportunities, and Challenges Waste Minimization: Strategies for the Reduction and Prevention of All Forms of Waste Waste, Garbage and Filth: Social and Cultural 
Perspectives on Recycling Water Footprint in Supply Chain Management Water Law and Sustainability Water Quality and Its Interlinkages with the Sustainable Development Goals Welfare Implications of Environmental Change and Policy Well-Being and Urban Density What is Sustainability? Examining Faux Sustainability What’s Sustainability for Restoration? Why the Physical Environment Matters: Sustainability’s Role in Child Development Wildlife Conservation: A Sustainability Perspective Women’s Special Issue Series: Sustainable Energy Wood-Product Trade and Policy Youth Action for a Sustainable Future Youth School Violence and the Impact of Social Environment ZEMCH International Research 2020 ZEMCH Research Initiatives: Mass Customisation and Sustainability Zipf’s Law, Central Place Theory, and Sustainable Cities and City Systems All Special Issues Volume Issue Number Page Logical Operator Operator AND OR Search Text Search Type All fields Title Abstract Keywords Authors Affiliations Doi Full Text
The construction industry's green pivot faces a number of challenges, even as city governments and home buyers increasingly push for more sustainable new homes and buildings. Among the obstacles standing in the way of greener construction are the industry's naturally cautious approach to innovation as well as its dominant profit models, which favor previously profitable habits and techniques. In a new paper published in the journal Sustainability, three Concordia researchers from the Next-Generation Cities Institute (NGCI) outline those obstacles and offer practical incentives local governments can use to encourage private green real estate development. The study also looks at the key stakeholders and the building life cycle to identify ways to reduce the industry's overall carbon footprint. Change is urgently needed, they write: according to the Global Alliance for Buildings and Construction and the United Nations Environment Programme, construction and the built environment account for 38 percent of global greenhouse gas emissions. The study stands apart from other academic papers examining the construction industry. Its lead author, Ph.D. student Natalie Voland, is president of Montreal-based real estate developer Gestion Immobiliere Quo Vadis and co-director of Concordia's recently announced zero-carbon buildings accelerator. The paper relies largely on Voland's own experiences operating a certified B-Corp, part of an international movement that emphasizes sustainability through business. "As a board member of the Montreal Climate Partnership, I realized very quickly that developers have no clue how to become climate neutral," she says. "They didn't have the expertise or the knowledge, and the current business models do not work." 
Voland believes that this study, co-authored by Concordia Public Scholar Mostafa Saad and NGCI co-director Ursula Eicker, can be used as evidence that a massive shift towards greener buildings will eventually let developers who create new business models earn standard returns for their investors while meeting their financing requirements. Sustainable precedents The authors write that the realities facing developers—the need for quick returns on investment, stable financing and satisfying multiple stakeholders, among many others—can be addressed most easily by municipal policies and incentives that merge private goals with the public good. Fast-track zoning, density bonuses, tax breaks, and expertise and supplier sharing are all tools cities can use to promote green projects. "In my experience, these incentive-based types of zoning/regulatory processes were often blocked by city officials, who said that they would set a precedent of favorable treatment for green building projects," says Voland. That precedent would be the point. "This way, the city would be telling a developer that if they build something that benefits the environment and the community, they would get an incentive that would benefit their financial models." Building on experience Eicker adds that Voland's experience contributes much-needed depth to the subject area, which she says lacks large datasets and academic literature. "There is a real challenge in translating an individual like Natalie's experience into scientific facts," Eicker says. "But we can substantiate parts of the existing literature by bringing her empirical knowledge into a systemic form, and that is what makes this paper so interesting. It is about real dynamics, and how we can make sustainable projects happen." "This paper is like a map of what is out there now. We are mapping Natalie's experience in the industry and the real obstacles and barriers that exist," adds Saad. 
"These may not be very evident to academics, who have to observe a subject from a very high level. We need this kind of practical experience to be able to get these insights in an academic way." The authors feel the paper's real-world applications can be used to explain the value of green buildings to developers, bankers and policymakers through their accelerator going forward. "We are not trying to say, 'What is a better building?'" explains Voland. "We know what a better building is. It's about how we implement it."
10.3390/su14127071
Space
4.5-billion-year-old ice on comet 'fluffier than cappuccino froth'
Laurence O'Rourke et al. The Philae lander reveals low-strength primitive ice inside cometary boulders, Nature (2020). DOI: 10.1038/s41586-020-2834-3 Journal information: Nature
http://dx.doi.org/10.1038/s41586-020-2834-3
https://phys.org/news/2020-10-billion-year-old-ice-comet-fluffier-cappuccino.html
Abstract On 12 November 2014, the Philae lander descended towards comet 67P/Churyumov–Gerasimenko, bounced twice off the surface, then arrived under an overhanging cliff in the Abydos region. The landing process provided insights into the properties of a cometary nucleus 1 , 2 , 3 . Here we report an investigation of the previously undiscovered site of the second touchdown, where Philae spent almost two minutes of its cross-comet journey, producing four distinct surface contacts on two adjoining cometary boulders. It exposed primitive water ice—that is, water ice from the time of the comet’s formation 4.5 billion years ago—in their interiors while travelling through a crevice between the boulders. Our multi-instrument observations made 19 months later found that this water ice, mixed with ubiquitous dark organic-rich material, has a local dust/ice mass ratio of \({2.3}_{-0.16}^{+0.2}:1\) , matching values previously observed in freshly exposed water ice from outbursts 4 and water ice in shadow 5 , 6 . At the end of the crevice, Philae made a 0.25-metre-deep impression in the boulder ice, providing in situ measurements confirming that primitive ice has a very low compressive strength (less than 12 pascals, softer than freshly fallen light snow) and allowing a key estimation to be made of the porosity (75 ± 7 per cent) of the boulders’ icy interiors. Our results provide constraints for cometary landers seeking access to a volatile-rich ice sample. Main Fly-bys and rendezvous missions have been central to the provision of close-up images of cometary surface structures, delivering important insights into the chemical and physical processes that have defined them 7 . The presence of boulders on their surfaces with sizes ranging from the metre scale up to tens of metres—often in locations not matching where they were initially exposed—certainly points to the dynamic nature of their creation 8 , 9 . 
A determination of mechanical strength properties derived from in situ measurements carried out on the primitive ice located in the interior of a cometary boulder allows unique comparisons to be made with the cometary body internal structure. Furthermore, these properties provide information about the comet’s dynamical history and deliver important constraints for the design of cometary landers and cryogenic sample return missions. The European Space Agency’s Rosetta mission 10 was launched in 2004 and began orbiting comet 67P/Churyumov–Gerasimenko in August 2014. On 12 November 2014, the Philae lander was released with a faulty harpoon system, touching down on the surface on two occasions while also experiencing a glancing collision against the Hatmehit depression edge. After touchdown 2 (TD2), it proceeded to its final position at touchdown 3 (TD3), located under an overhang in the Abydos region of the comet 1 (see Fig. 1a–d ). Although scientific analysis of data 1 , 2 , 11 from Philae’s first and third touchdown points have provided important insights into the properties of a cometary nucleus, the location and scientific implications stemming from the second touchdown point were unknown up to now. Its importance was noted 12 , however, as Philae was found to have changed both velocity and rotation rate at this location, as well as having penetrated the surface with the Rosetta Lander Magnetometer and Plasma Monitor 13 (ROMAP) sensor, possibly exposing ice at the same time. Fig. 1: Philae landing trajectory, TD2 and TD3, Philae and visible ice. a – c , Three views showing the Philae landing trajectory as it crosses the surface of the comet (represented by a shape model) highlighting the locations of touchdown 1 (TD1), collision, touchdown 2 (TD2) and touchdown 3 (TD3). d , OSIRIS image (2 September 2016, 19:59 ut ; 0.049 m per pixel), showing locations of TD2 and TD3 as boxed in c . 
This image is enhanced in order to show the skull-top crevice (inside the green dashed box marked TD2) and the Philae lander hidden in the distant shadows (inside the blue dashed box marked TD3). e , f , Views of the ice in the crevice (6 August 2016 and 24 August 2016, respectively). Full size image The production of a new Philae landing trajectory ( Supplementary Methods , Supplementary Fig. 1 ) served to start the search for TD2, with a ridge region identified as being a likely candidate for its location 11 . A comparative analysis was performed of this area using pre- and post-landing imagery (Supplementary Figs. 2 , 4 ) from the Rosetta Optical, Spectroscopic, and Infrared Remote Imaging System (OSIRIS) 14 , combined with high-resolution digital terrain models of the ridge (for example, Supplementary Figs. 9 – 11 ) and of the Abydos region as a whole. While notable changes in boulder positions were observed in the Abydos valley (Supplementary Fig. 4e, f ), the geomorphological structures along the length of the ridge showed no differences with regards to position or orientation. Changes were found however in the pre- and post-landing surface morphology of two adjoining boulders located on the ridge (Extended Data Fig. 1 , Supplementary Video 1 , Supplementary Figs. 5 , 6 ). These changes included the identification of an unusual ice feature located in the boundary between the boulders (Fig. 1e, f , Supplementary Video 2 ). Our analysis of these changes found that only Philae’s presence could explain their existence ( Supplementary Methods ). As the topological structures of these two boulders, viewed from above, give the impression of a skull face as shown in Supplementary Fig. 3 , we chose to name the location ‘skull-top ridge’ and the boundary between the two boulders as ‘skull-top crevice’. We highlight in Methods that the boulders themselves represent assemblages of dust/ice aggregates. 
The timing involved in this chain of events (Extended Data Table 2 , Supplementary Video 3 ) was derived from magnetometer data produced by the ROMAP sensor and checked against thermal and power information from Philae’s subsystems. The ROMAP instrument provided attitude information (combined with the Rosetta Plasma Consortium Fluxgate Magnetometer, RPC-MAG 15 ), as well as unique accelerometer measurements based on sensor boom movement (see Methods ). Combined with our image analysis, a reassessment of the ROMAP science data (see Methods and Supplementary Methods ) determined that the initial contact took place at 17:23:48 ± 10 s ut , approximately 1.5 min before the previously published contact time 16 . Indeed Philae spent nearly two full minutes at TD2, making four surface contacts in its trip across it (see Supplementary Fig. 6 ). Of the four, the third (TD2c) of these contacts (Fig. 2 ) is the most notable due to the 0.25-m-deep depression visible in an ice-like feature on the side of the crevice. We found a perfect correlation between the presence of that depression and ROMAP boom movements (Fig. 2a ) that shows the expected deviations matching the stamping movement required for its creation. The compression lasted 3 s (Fig. 2b, c ) before Philae proceeded to rise out of the crevice to then make its final TD2 contact with the surface (TD2d), creating the ‘eye’ of the ‘skull’ in the process. Fig. 2: ROMAP guide and Philae impact TD2c. a , ROMAP magnetometer rotation and boom measurements matching Philae touchdown events (TD2a–d) are plotted as magnetic field values versus time (see Methods and Supplementary Methods ) whereby B x , B y and B z represent the three magnetic field components. b , OSIRIS image focussing on TD2c, where the lander (1 m width in the image) compressed the ice in the crevice (2 September 2016). c , Left, overhead view of the Philae lander highlighting its instruments. 
The red, orange and green markings map to the impression in the ice (right; OSIRIS image from 24 August 2016). See Supplementary Video 2 for a fly-over of the crevice and Supplementary Video 3 for an animation of this figure.

Data from the OSIRIS and VIRTIS (Visible, InfraRed and Thermal Imaging Spectrometer) instruments on the Rosetta orbiter were used to determine whether the high-albedo ice-like features observed in the skull-top crevice were water ice. For the OSIRIS instrument, we focused on multi-filter images generated during the timeframe of 12 to 14 June 2016 (Fig. 3a, Extended Data Fig. 2). A spectrophotometric analysis of the data sets from this period (see Methods) provided spatially resolved data that confirmed the presence of water ice in the crevice, matching a visible area of approximately 3.5 m 2, with a brightness 6 times greater than that of the dust-covered terrain. The water-ice abundance was derived from the observed reflectance, after corrections were made for the illumination conditions and phase function based on geographical mixtures of the comet's dark terrain and water ice (grain size of 30–100 μm) applied to the bright material's absolute reflectance 17. A water-ice abundance value of 46.4% ± 2.0% was measured during the 14 June 2016 observations at 10:30:32 UT (Fig. 3c, d, Extended Data Table 1), resulting in an approximate local dust/ice volume ratio of \({1.15}_{-0.08}^{+0.1}:1\). Assuming a dust/ice bulk density ratio of two 18, and a similar porosity for dust and ice material, the local dust/ice mass ratio is approximately \({2.3}_{-0.16}^{+0.2}:1\). This value is below the average dust/ice mass ratio for the nucleus, which is >3 for most estimates derived from measured data 19. Fig. 3: Multi-instrument view of the water ice. a, Main panel, NAC image (taken on 14 June 2016 at 10:29 UT); insets, OSIRIS images (left, multi-filter view; right, star and black dot show positions for plots c, d).
b, The 0.55 μm VIRTIS-M hyperspectral cube (V1_00424522185.QUB) of the Abydos region (14 June 2016, 10:51–11:35 UT) with skull-top ridge located in the green box of the lower panel, b3. The upper panels b1 and b2 zoom in on the spectral slope (left) and the radiance factor (I/F) image (right), respectively. 'Sample' identifies the relevant pixel numbers in each 'Line' (row) of the cube. c, Plot of reflectance and I/F versus wavelength, and d, the measured albedo as a function of wavelength. e, OSIRIS image (2 September 2016) shows the crevice edge-on with e1 (crevice width) 1.03 ± 0.07 m and e2 (compression width) 0.246 ± 0.049 m.

Lower spatial resolution data from the VIRTIS instrument on 14 June 2016 confirmed this detection of water ice. Figure 3b shows the resulting hyperspectral image of the Abydos region with the skull-top crevice identified at the edge of the field of view. A signal from the ice located in the crevice was found in the tail of the optical point spread function (see Methods). An estimation by VIRTIS of the water-ice abundance at the location concurred with that of the OSIRIS measurement, whereby the water-ice-rich spot was determined to have an approximate area of 1.27 ± 0.5 m 2 (matching the upper bound of 48% of 3.5 m 2), calculated from the inferred abundances (0.5% over an area of 253 m 2). The general longevity of ice on the comet's surface is dependent on local surface topography 6, 17, 20, 21, with most ice features disappearing quite quickly, within days to weeks of discovery, owing to limited surface shading from the Sun. Local dust/ice ratios of <3:1, equivalent to that in the skull-top crevice, have been found at other locations across the comet (as well as on other comets 7, 22); in particular, in newly exposed water ice observed on cliffs and scarps linked in some cases to outbursts, as well as in clustered bright spots in both hemispheres 4, 6, 17.
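The quantitative claims above, the dust/ice ratios derived from the OSIRIS ice abundance and the VIRTIS ice-patch area, reduce to simple proportions. As an illustrative cross-check (a minimal sketch with the quoted values, not the mission pipeline; the conversion assumes the areal ice fraction maps directly to a volume fraction, as the text does):

```python
# Cross-check of the ice quantification reported above.
# All input numbers are taken from the text.

def dust_ice_volume_ratio(ice_fraction):
    """Dust/ice volume ratio implied by a water-ice abundance."""
    return (1.0 - ice_fraction) / ice_fraction

def dust_ice_mass_ratio(ice_fraction, density_ratio=2.0):
    """Mass ratio, assuming a dust/ice bulk density ratio of two
    and similar porosity for dust and ice material."""
    return dust_ice_volume_ratio(ice_fraction) * density_ratio

# OSIRIS: 46.4% water-ice abundance measured in the crevice
f_ice = 0.464
print(f"{dust_ice_volume_ratio(f_ice):.2f}")  # 1.16, vs the quoted ~1.15:1
print(f"{dust_ice_mass_ratio(f_ice):.2f}")    # 2.31, vs the quoted ~2.3:1

# VIRTIS: 0.1% apparent abundance, corrected x5 because only ~20% of
# the patch photons fall in the pixels, scaled over the 253 m^2
# covered by the 6 contiguous pixels
apparent_abundance = 0.001
psf_capture = 0.20
corrected = apparent_abundance / psf_capture   # 0.5%
area_m2 = corrected * 253.0
print(f"{area_m2:.3f}")                        # 1.265, i.e. ~1.27 m^2
```

Both routes land on the published figures, which is why the VIRTIS area of 1.27 ± 0.5 m 2 is described as agreeing with the OSIRIS result.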
As dust/ice ratios have been found to increase over time due to solar illumination exposure, the facts that our measurement of high water-ice abundance was made 19 months post-landing, and that the ice in the crevice was observed 22 months post-landing without notable measurable changes, both point to the ice in the crevice receiving very low solar illumination due to shadowing. We confirmed this using a horizon mask (Extended Data Fig. 3 ), determining that the ice on the left-hand side of the crevice was illuminated <0.55% of the time during the perihelion passage while the ice on the right-hand side of the crevice, where the compression took place, received <0.21% of direct sunlight during the same period. Using as input the corresponding energy flux, we found our sublimation modelling over-estimated by an order of magnitude the amount of ice that sublimed (compared to that visible in the imagery), pointing to even greater morphological shadowing than our horizon mask could derive ( Supplementary Methods ). This very low illumination is supported by our direct measurements of the crevice dimensions; the width of the TD2c location matches the width of the Philae lander, thus pointing to very little sublimation or erosion having taken place (see e1 in Fig. 3e ). For these reasons, we conclude that while the super-volatiles may have sublimed over time, the water ice itself at TD2c did not sublime and remained in a highly unprocessed state. The depth of the impression made by Philae in this icy surface, combined with a detailed correlation of the ROMAP boom measurements, contribute to direct in situ measurements allowing an estimate to be made of the compressive strength of this dust/ice feature. The depth of the impression in the ice is 0.246±0.049 m (e2 in Fig. 3e ) and its area is ≥0.2208 m 2 (see Methods ). A detailed analysis was performed (Methods), and found the compressive strength of the ice to be <12 Pa. 
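The <12 Pa figure follows directly from attributing all of Philae's lost kinetic energy to crushing at least the displaced volume at a constant strength; a minimal sketch using the measured values from the text:

```python
# Model-independent upper bound on the compressive strength:
# P_C < delta_E / V_min, with all inputs measured in situ.

delta_E = 0.671          # J, kinetic energy lost at TD2c
depth = 0.246            # m, depth of the impression (e2 in Fig. 3e)
area_min = 0.2208        # m^2, lower limit on the stamped area

v_min = area_min * depth           # minimum displaced volume
p_c_upper = delta_E / v_min        # upper bound on compressive strength

print(f"V_min  = {v_min:.3f} m^3")     # 0.054 m^3
print(f"P_C   <= {p_c_upper:.1f} Pa")  # ~12 Pa
```

Because a larger volume than V_min is in reality affected by the impact, the true strength is below this bound, consistent with the Methods treatment.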
It is important to note that this very low compressive strength is of ‘primitive’ ice (see Supplementary Methods for explanation) buried and hidden from view until it was exposed and compressed at the time of the Philae landing itself. Whereas compressive strengths 1 , 2 of 1 kPa and 2 MPa were measured at TD1 and TD3 respectively (although deployment uncertainties do affect the reliability of the MUPUS—Multi Purpose Sensors for Surface and Subsurface Science—penetrator result), a number of other publications find much lower values. Model-dependent analyses of the collapse of cliff overhangs observed from orbit 23 , as well as those derived from the scratches Philae made at the final landing location 11 , calculated compressive strength values ranging between 30 Pa and 150 Pa for the former and from 10 Pa to 100 Pa for the latter. Such model-based low compressive strength derivations show consistency with our in situ findings. Three independent porosity estimations have been made for this comet, with values ranging from 65%–85% (Philae Consert radar 3 , 24 ) to 70%–75% (Rosetta Radio Science Instrument) 25 , and a modelled third estimate 26 of 63%–79%. These same numbers equate to volume filling factors of 0.15–0.35, 0.25–0.3 and 0.21–0.37, respectively. We modelled the compression Philae made in the cometary boulder material, with the aim of determining how much the volume-filling factor of the material is altered. The model we use 27 assumes that the material making up the cometary interior is made up of a hierarchical arrangement of building blocks (see Supplementary Methods ). According to this model, applied here to the interior of a cometary boulder, the submicrometre-sized solid grains 28 , 29 are contained in larger units (‘pebbles’), which themselves are clustered together to make up the boulder. 
We find that such an arrangement of hierarchical building blocks, which previously has demonstrated excellent correlation with the Philae Consert radar results 24, also provides the conditions needed to achieve the low compressive strength (<12 Pa) of the boulder material into which Philae stamped (see Methods). We further note that although the local dust/ice ratio we measured (\({2.3}_{-0.16}^{+0.2}\)) is lower than the model-based average 30 estimate for the nucleus (between 3 and 9), we can resolve this inconsistency. A study of results from gas activity models 31, 32 (based also on our hierarchical pebble model) explains our finding, in that dust/ice ratios of 5 and lower can be present locally in up to 5% of the volume of the nucleus 29. On the basis of the consistencies found using our model, we derive a volume filling factor for the boulder material of 0.25 ± 0.07 (a porosity range of 68%–82%), equivalent to previously published values for the overall nucleus of comet 67P. Our results provide important constraints for future cometary lander missions, as knowledge of cometary boulder interiors is not only vital for impact analysis but also provides insights into the mechanical processes needed to retrieve a volatile-rich cryogenic sample for in situ analysis, or indeed for delivery back to Earth 33, 34. The operational dangers of landing in a cometary boulder field would, however, need careful study and preparation.

Methods

OSIRIS data analysis

The normal albedo presented here has been evaluated from photometrically corrected images using the SHAP7 shape model with 12 million facets 35 and the Hapke model 36 parameters (table 4 of ref. 37) from resolved photometry in the orange filter centred at 649 nm. We assume that the phase function at 649 nm also applies at the other wavelengths 17. The flux of a region of interest (ROI) in each of the 3 filters has been integrated over 3-pixel-wide, square boxes.
We attempt to reproduce the spectral behaviour and the normal albedo of the ice-rich patches by obtaining synthetic spectra of areal mixtures (spatially segregated) of the comet's dark terrain (DT), derived from areas near the boulder, with water ice: $$R=\rho {R}_{{\rm{ice}}}+(1-\rho ){R}_{{\rm{DT}}}$$ (1) where R is the reflectance of the bright patches, R ice and R DT are the reflectances of the water ice and of the comet's dark terrain, respectively, and ρ is the relative surface fraction of water ice or frost. We use areal mixture models owing to the absence of reliable and relevant optical constants for the dark material needed to run more complex scattering models, and the absence of clear absorption features in the wavelength range covered by the OSIRIS observations. The water-ice spectrum was derived from Hapke modelling of optical constants 38 using grain sizes of 30 μm and 100 μm. Indeed, the typical size of ice grains on cometary nuclei was found to be a few tens of micrometres 22, 39, 40. We also attempted models with larger water-ice grains (up to 1,000 μm and 2,000 μm), but these models gave a worse spectral match and a lower χ 2 fit. The models that best fit the maximum absolute reflectance observed on the bright patch on the skull boulder are areal mixtures of the average cometary dark terrain (DT) enriched with 46.4%–47.4% of water ice (Fig. 3d, Extended Data Table 1) with grain sizes of 30–100 μm. A further analysis is provided in Supplementary Methods.

VIRTIS data analysis

The best viewing opportunity for VIRTIS-M 41 of the Philae touchdown 2 (TD2) site occurred on 14 June 2016 between 10:51:12 and 11:35:31 UTC, when the Rosetta spacecraft was at a distance of 27.3 km from 67P and the solar phase angle was 57°.
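The areal-mixture fit of equation (1) above can be sketched as a linear combination of two endmember spectra with a grid search over the ice surface fraction. The reflectance values below are placeholders chosen only to exercise the method, not the Hapke-modelled spectra used in the paper:

```python
# Sketch of the areal (linear) mixing model of equation (1): the
# reflectance of a bright patch is a surface-fraction weighted sum of
# a water-ice spectrum and the local dark terrain. Spectra are toys.
import numpy as np

def areal_mixture(rho, r_ice, r_dt):
    """R = rho*R_ice + (1 - rho)*R_DT, per wavelength sample."""
    return rho * r_ice + (1.0 - rho) * r_dt

def best_fit_ice_fraction(r_obs, r_ice, r_dt, fractions):
    """Grid-search the ice surface fraction minimising chi-square."""
    chi2 = [np.sum((areal_mixture(p, r_ice, r_dt) - r_obs) ** 2)
            for p in fractions]
    return fractions[int(np.argmin(chi2))]

# Toy three-filter example (placeholder endmember reflectances)
r_ice = np.array([0.60, 0.58, 0.56])       # bright, slightly blue ice
r_dt = np.array([0.02, 0.03, 0.04])        # dark, red comet terrain
r_obs = areal_mixture(0.464, r_ice, r_dt)  # synthetic "observation"

grid = np.linspace(0.0, 1.0, 1001)
best = best_fit_ice_fraction(r_obs, r_ice, r_dt, grid)
print(f"{best:.3f}")  # recovers the input fraction, 0.464
```

In the paper the same idea is applied per filter, with the dark-terrain endmember measured near the boulder and the ice endmember Hapke-modelled for 30 μm and 100 μm grains.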
During this period of time, the VIRTIS-M instrument acquired a visible hyperspectral cube (acquisition V1_00424522185.QUB) by collecting 133 consecutive slit images of the surface with a spatial resolution of about 6.5 × 10 m (along slit × scan). Each slit image was acquired with an integration time of 16 s and a repetition step time of 20 s while the Rosetta spacecraft was maintaining nadir pointing. At each step the internal scan mirror is rotated by an angle of 250 μrad (corresponding to one Instantaneous Field-Of-View—IFOV) to achieve an angular scan of about 1.9° in 133 steps. Consecutive slits are not completely connected among them, being separated by about 13 m. The resulting hyperspectral image of the Abydos region shown in Fig. 3b has a scale of 1.66 km (slit width) by 0.86 km (scan length). The position of the cracked boulder on the TD2 site has been identified close to pixels at line = 1, samples = 163–168 and appears located at the edge of the FOV (see Fig. 3b , panel b2, blue box). According to reconstructed geometries computed on the SHAP7 digital model and including the current best estimate of the errors, the centres of these pixels are offset by a minimum of 25 m to a maximum of 58 m with respect to the reference position of TD2. The identification of the TD2 location on the VIRTIS image is therefore not fully certain owing to the limited spatial resolution and position of the pixels on the edge of the VIRTIS image. Owing to the coarse spatial resolution, VIRTIS-M is not able to resolve the cracked boulder whose exposed bright area is about 0.5 m 2 while the pixel area is 65 m 2 . Moreover, owing to the instrumental point spread function 42 , 43 FWHM (<500 μrad) and to the uncertainty on the position of the TD2 site, we are averaging the signal of the candidate TD2 area taken on 6 contiguous pixels where higher reflectance and blueing is observed. The average reflectance spectrum of the TD2 pixels is shown in Extended Data Fig. 
4a and compared with an average collected on nearby pixels (line = 2, samples = 163–168) as a reference for an adjacent non-icy terrain (Extended Data Fig. 4b). The analysis of the spectral slope measured on the two ROIs gives values of 2.39 μm −1 and 2.59 μm −1 for the cracked-boulder pixels (blue curve) and the nearby dark terrain (red curve), respectively. These values correspond to a spectral slope difference of Δ = 0.20 μm −1. This difference is equivalent 44 to a water-ice abundance of 0.1% in areal mixing (Extended Data Fig. 4c). The size of the bright area on TD2 has been constrained to about 3.5 m 2 on OSIRIS high-resolution images, of which about 1 m 2 is made of exposed water ice in an areal mixture (where water ice and dust do not thermally interact, leading to less energy being available for sublimation). Our analysis shows that VIRTIS is missing the TD2 location by about one line and that the signal is harvesting the tail of the optical point spread function. As a consequence, we are collecting about 20% of the photons coming from the water-ice patch on the TD2 site. This means that the water-ice abundance of 0.1% ± 0.04% previously estimated is likely to be five times larger, leading to 0.5% ± 0.2%. Scaling this value for the total area of 253 m 2 covered by the 6 pixels, the water-ice-rich spot corresponds to an area of about 1.27 ± 0.5 m 2, in agreement with the OSIRIS findings.

ROMAP and RPC-MAG data analysis

The tri-axial lander magnetometer ROMAP and the orbiter magnetometer RPC-MAG were operating during the descent, landing and rebound phases with a sampling rate of 1 Hz. Extended Data Fig. 5 shows the measurements of both instruments for the interval around TD2. In order to use magnetic field measurements for flight dynamics reconstructions, reference measurements are necessary to separate external events in the background magnetic field (for example, magnetic wave activity as seen in the RPC-MAG observations in Extended Data Fig.
5 ) from changes caused by the spacecraft dynamics or operation (for example, rotation change or boom movements). In this case, the RPC-MAG orbiter instrument was used as background field reference to be able to reconstruct the Philae dynamics. A rotation of the lander along an arbitrary axis relative to the background magnetic field causes an apparent rotation of the magnetic field vector observed by the lander relative to the orbiter reference. This causes the three-dimensional (3D) quasi-sinusoidal modulation of the ROMAP measurements relative to the RPC-MAG measurements visible in Extended Data Fig. 5 . A time-dependent rotation matrix between these two 3D observations can therefore be calculated to describe the attitude of the lander relative to the orbiter. This information can then in theory be used to derive a set of time-dependent quaternions giving the absolute attitude of the lander. A statistical analysis has to be used to accurately determine the absolute attitude to account for small deviations between the orbiter and lander measurements caused by noise and plasma phenomena (for a detailed description, see ref. 12 ). In this case only a minimal number of data points is available due to the low 1 Hz sampling rate and relatively fast changes in lander dynamics. Hence, a simplified analysis for the reconstruction of the rotation rate during descent was used 12 to estimate the average lander rotation rate between the individual surface contacts (see Extended Data Table 2 ). Instead of determining the total absolute lander orientation, this method is based on determining only the orientation of the lander rotation axis, which allows the lander magnetic field observations to be transformed into a temporary coordinate system in which one magnetic axis remains stable (that is, the field along the rotation axis) and only the two remaining axes show a modulation. 
This modulation relative to the orbiter reference measurements can easily be determined, and results directly in the lander rotation frequency 12. A rotation of the magnetometer boom relative to the lander creates a characteristic signature (as can be seen around 17:25:30 UT in Fig. 2a and Extended Data Fig. 5) in the magnetic field, caused by the displacement of the ROMAP sensor relative to the static lander bias field 13. The shape and duration of this signature allow us to constrain the acceleration acting on the lander perpendicular to the boom rotation axis, that is, along the lander z axis. The other touchdowns 1, 11 were used as references to determine the direction and timing, and to gain a qualitative insight into the magnitude (see Supplementary Information for more details).

Philae lander dynamics at the TD2c point

This section covers only the dynamics at the TD2c point, where the ice was compressed by the Philae balcony and the SD2 tower. The full dynamics that took place through all four TD2 contacts (TD2a–d) are described in the Supplementary Methods and summarized in Extended Data Table 2. The TD2c contact duration is linked to the ROMAP boom, which showed an upwards movement at 17:25:24 ± 1 s continuing until approximately 17:25:27 ± 1 s (Fig. 2a, Extended Data Fig. 5). At 17:25:27 ± 1 s, the boom started to move downwards away from the lid, meaning that the acceleration of the lander had stopped or reversed. The change at 17:25:27 ± 1 s is considered the end of the stamping in TD2c, because the geometry of the deceleration of Philae during stamping causes an upward deflection of the boom, which is what was observed. This results in a duration of 3 ± 1 s for the stamping/compression of the ice (see Fig. 2a–c and the animation in Supplementary Video 3). The energy loss at TD2c caused by the stamping is estimated to be 0.671 ± 0.297 J.
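The rotation-rate method described above, extracting the spin frequency from the modulation of the field components transverse to the rotation axis, can be sketched with synthetic data. This is an illustration of the principle only, using invented numbers rather than actual ROMAP/RPC-MAG telemetry (a 13 s spin period and 20 nT field are assumptions):

```python
# With a steady background field, a spinning lander sees the two
# field components transverse to its rotation axis modulate
# sinusoidally; the phase progression of that modulation gives the
# spin rate. Synthetic data only.
import numpy as np

def spin_rate_from_field(t, bx, by):
    """Estimate the spin frequency (Hz) from the phase of the
    transverse field components, taking z as the rotation axis."""
    phase = np.unwrap(np.arctan2(by, bx))
    slope = np.polyfit(t, phase, 1)[0]   # rad/s, least-squares fit
    return abs(slope) / (2.0 * np.pi)

# Synthetic lander measurements: 1 Hz sampling, spin period 13 s
t = np.arange(0.0, 60.0, 1.0)
true_freq = 1.0 / 13.0
bx = 20.0 * np.cos(2 * np.pi * true_freq * t)   # nT
by = 20.0 * np.sin(2 * np.pi * true_freq * t)
print(f"{spin_rate_from_field(t, bx, by):.4f}")  # 0.0769 Hz recovered
```

In practice the orbiter RPC-MAG data serve as the background-field reference, and noise and plasma activity require the statistical treatment described in ref. 12; the 1 Hz sampling limits how fast a change in rotation state can be resolved.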
Estimating TD2c surface area and depth

The images used in the data analysis were obtained at numerous epochs and distances from the comet. The pixel resolution linked to distance ranged from 0.16 m per pixel in May 2016, when the spacecraft was 8.5 km from the crevice, to 0.13 m per pixel at a distance of 7 km in June 2016, to 0.049 m per pixel at the closest distance reached on 2 September 2016 (2.63 km). The pixel size in Extended Data Fig. 1 for both pre- and post-landing images was 0.15 m per pixel, as the images were taken at an equivalent distance (8 km) from the crevice. Image analysis relied primarily on cross-correlating multiple images to determine accurate estimates of the heights and widths of different boulder features. The lower the resolution, the greater the error bar, as it became difficult to determine where the edge of a feature actually started or ended owing to the greater coverage of the pixel. The error bar is therefore linked to the pixel resolution achievable in the image analysis. A lower limit of 0.2208 m 2 has been estimated for the area of ice compressed by Philae in the TD2c position. Only two OSIRIS images (21 August 2016 19:19 UT, 24 August 2016 19:39 UT; Figs. 1f, 2c) provide a clear, albeit angled, view of the ice in the compression. The full width of the ice impression could therefore not be estimated owing to the lack of direct visibility. It was feasible, however, to use both these images as well as another from 2 September 2016 19:59 UT in Fig. 3e (which provides an edge-on view of the ice impression) to obtain a lower-limit measurement of the length of the sides of the Philae balcony that made the impression. The estimate is therefore a combination of the actual area of the Philae balcony matching these lengths plus the area of an arc matching the remainder of the angled visible ice.
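The quoted image scales follow from spacecraft range multiplied by the angular pixel size. A minimal sketch, assuming the OSIRIS NAC pixel IFOV of about 18.6 μrad (a published instrument value, not stated in this paper):

```python
# Ground sampling of an imager: metres per pixel = range x IFOV.
# NAC_IFOV is an assumed literature value for the OSIRIS NAC.
NAC_IFOV = 18.6e-6  # rad per pixel (assumption)

def pixel_scale(distance_m, ifov_rad=NAC_IFOV):
    """Ground sampling in metres per pixel at a given range."""
    return distance_m * ifov_rad

for dist in (8500.0, 7000.0, 2630.0):
    print(f"{dist/1000:.2f} km -> {pixel_scale(dist):.3f} m/px")
# 0.158, 0.130 and 0.049 m per pixel, matching the values in the text

# At the best scale, the 0.246 m impression depth spans about five
# pixels, consistent with a one-pixel (~0.049 m) depth uncertainty.
print(f"{0.246 / pixel_scale(2630.0):.1f}")
```

This also illustrates why the error bars are tied to pixel resolution: a one-pixel ambiguity at 8 km range corresponds to ~0.15 m on the surface.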
Figures 1d, 2b and 3e (same images) were taken on 2 September 2016 at the closest distance (2.63 km) to the nucleus surface achieved by Rosetta during the mission, and therefore provide the highest resolution. This image, which was the famous Philae discovery image, provides a clear high-resolution view of the edge of the compressed ice region in TD2c. The solar illumination in this image is equivalent to the illumination observed on 6 August 2016 05:52 UT, and allows us to conclude that while the sunlight was gradually moving down the crevice, it had not yet arrived at the compressed region. As a result, the stamped edge lies in shadow, with the exposed ice further back in the crevice creating a back-light effect. This edge-on view allows an accurate measurement of 0.246 ± 0.049 m to be taken of the depth of the compressed region. In that respect, the one image showing Philae resting on the surface of the comet (Fig. 1d) also provides the key input measurement for this paper. Further dimension-related images are provided in Supplementary Figs. 7, 8.

Compressive strength and porosity analysis

The model that we use.

To derive material properties for the cometary boulder from our findings, we consider a hierarchical setup of the interior structure of the boulder, assuming the dust and ice grains are concentrated in larger units ('pebbles'), which themselves are clustered together to make up the boulder 45. Thus, the boulder itself is a kind of 'rubble pile' and possesses pore space on two length scales, the microscopic and the pebble-sized. Comparisons between our model and others addressing the basic building blocks of the internal material of the comet can be found elsewhere 24, 32, 46, 47 and are not dealt with further in this paper.

Compressive strength estimates.
As the incoming trajectory of Philae at TD2c with respect to the surface normal shows, Philae lost a total kinetic energy of Δ E = 0.671 ± 0.297 J owing to compression (‘stamping’) of the surface material of comet 67P. The vertical component of the incoming motion of Philae resulted in the compaction of the porous cometary matter. The characteristic stress required for stamping is determined by the compressive strength P C of the material. With the depth and minimum area of the stamping impression Philae made estimated (see main text) to be h = 0.246 ± 0.049 m and A Min = 0.2208 m 2 , respectively, a minimum volume of V Min = A Min h = 0.054 m 3 was displaced during stamping. We can make a first, crude, estimate of the compressive strength by assuming that the compressed volume is at least V Min and that the compressive strength is a constant material value, that is, is independent of the compressional state of the matter. In reality, a much larger volume than V Min is affected by the impact of Philae and P C is a strong function of the porosity (see below). However, we can state that \({P}_{{\rm{C}}} < \frac{\Delta E}{{V}_{{\rm{M}}{\rm{i}}{\rm{n}}}}=12\,{\rm{P}}{\rm{a}}\) is required to account for the observed material compression and energy loss of Philae. It should be noted that this upper limit for the compressive strength of 67P’s surface material is model independent and based on firm measurements (this paper). Compressive strength link to volume filling factor . 
The compressive strength is not a constant material value, but rather depends on the volume filling factor (fraction of total volume filled by material), Φ , in a characteristic way 27 , 48 , 49 $${P}_{{\rm{C}}}(\varPhi )={p}_{{\rm{m}}}{\left(\frac{\varPhi -{\varPhi }_{1}}{{\varPhi }_{2}-\varPhi }\right)}^{{\varDelta }^{{\prime} }}$$ (2) with Φ 1 and Φ 2 being the formal lower and upper limits of the volume filling factor for which P C ( Φ 1 ) = 0 and P C ( Φ 2 ) → ∞, respectively (see below for p m and Δ ′). Experiments and numerical simulations showed 27 , 48 , 49 , 50 that Φ 2 is in the range 0.1–0.9, depending on the particle properties, such as grain size, grain-size distribution, grain shape, and the mode of deposition or compression. The value of Φ 1 , which has no physical meaning and is merely a fitting parameter, ranges 45 between 0.05 and 0.35. The factor p m is the characteristic compressive strength and Δ ′ describes the logarithmic range of stresses in which compression takes place: that is, most compression happens in the interval \(({10}^{-{\varDelta }^{{\prime} }}{p}_{{\rm{m}}},{10}^{+{\varDelta }^{{\prime} }}{p}_{{\rm{m}}})\) . Schräpler et al. 49 showed that this relation holds for a wide range of particle sizes, from loose granular ensembles of dust aggregates (the ‘pebbles’ in our notation) to others of a more homogeneous assemblage. They also showed that Δ ′ = 1.3 is an appropriate value for all kinds of grain and pebble sizes. Pebble assemblages possess characteristic compressive strengths of p m ≈ 6.1 × 10 −2 Pa for pebbles with 1 mm radius and p m ≈ 4.7 × 10 −3 Pa for pebbles with 1 cm radius (for the heterogeneous case—as is our model), whereas submicrometre-sized particles in a more homogenous structure are compressed with p m ≈ 10 4 Pa. The schematic functional dependence of the volume filling factor on compression for our model is shown in Extended Data Fig. 6c . Volume filling factor applied to the cometary boulder interior (TD2c) . 
To estimate the volume filling factor, and therefore the porosity, we need to take into account the manner in which the pebble assemblage compacts when the boulder interior is formed. The different ways in which this can happen can vary, as touched on briefly in the Supplementary Information , whether it occurs at initial cometary formation or long afterwards as the result of cometary dynamical events. The resulting packing fraction of the pebble assemblage can be defined in these scenarios by random loose packing (RLP), which means 51 Φ RLP ≈ 0.55, if inter-pebble friction is strong 52 (a criterion satisfied for dust/ice aggregates) and if the size–frequency distribution of the pebbles is narrow (see Supplementary Methods ). The maximum random packing density is random close packing (RCP), or 51 , 52 Φ RCP ≈ 0.64 for narrow size distributions. Thus, Philae’s energy was dissipated by the compaction from RLP towards RCP (see Extended Data Fig. 6c ). The total compressed volume must be larger than the displaced volume, V Min , to make room for the displaced material (Extended Data Fig. 6a, b ). Assuming that the overall compressed volume is V Eff = ηV Min and that the increase in volume filling factor from initially Φ Min = Φ RLP to Φ Max is identical everywhere inside this volume, the volume scaling factor can be derived as \(\,\eta =\frac{{\varPhi }_{{\rm{M}}{\rm{i}}{\rm{n}}}}{{\rm{\delta }}\varPhi }\) , with δ Φ = Φ Max − Φ Min . 
We can calculate the energy dissipated by Philae’s stamping motion (in the z direction), Δ E = 0.671 J to be: $$\begin{array}{c}\Delta E={\int }_{{V}_{{\rm{E}}{\rm{f}}{\rm{f}}}+{V}_{{\rm{M}}{\rm{i}}{\rm{n}}}}^{{V}_{{\rm{E}}{\rm{f}}{\rm{f}}}}{P}_{{\rm{C}}}{\rm{d}}V\\ =A{\int }_{0}^{h}{P}_{{\rm{C}}}{\rm{d}}z\\ =A{p}_{{\rm{m}}}{\int }_{0}^{h}{\left(\frac{\varPhi ({\rm{z}})-{\varPhi }_{1}}{{\varPhi }_{{\rm{R}}{\rm{C}}{\rm{P}}}-\varPhi ({\rm{z}})}\right)}^{{\varDelta }^{{\prime} }}{\rm{d}}z\\ =A{p}_{{\rm{m}}}{\int }_{{\Phi }_{{\rm{M}}{\rm{i}}{\rm{n}}}}^{{\varPhi }_{{\rm{M}}{\rm{a}}{\rm{x}}}}{\left(\frac{\varPhi -{\varPhi }_{1}}{{\varPhi }_{{\rm{R}}{\rm{C}}{\rm{P}}}-\varPhi }\right)}^{{\varDelta }^{{\prime} }}\frac{{\rm{d}}{\rm{z}}}{{\rm{d}}\varPhi }{\rm{d}}\varPhi \\ =\frac{{V}_{{\rm{M}}{\rm{i}}{\rm{n}}}\,{p}_{m}}{{\rm{\delta }}\varPhi }{\int }_{{\varPhi }_{{\rm{M}}{\rm{i}}{\rm{n}}}}^{{\varPhi }_{{\rm{M}}{\rm{a}}{\rm{x}}}}{\left(\frac{\varPhi -{\varPhi }_{1}}{{\varPhi }_{{\rm{R}}{\rm{C}}{\rm{P}}}-\varPhi }\right)}^{{\varDelta }^{{\prime} }}{\rm{d}}\varPhi ,\end{array}$$ (3) in which we use equation ( 2 ) with \({\varPhi }_{2}={\varPhi }_{{\rm{R}}{\rm{C}}{\rm{P}}}\) and the approximation \(\varPhi (z)={\varPhi }_{{\rm{M}}{\rm{i}}{\rm{n}}}+\frac{z}{h}{\rm{\delta }}\varPhi \) and \(\frac{{\rm{d}}z}{{\rm{d}}\varPhi }=\frac{h}{{\rm{\delta }}\varPhi }\) , with ℎ = 0.246 m and A = 0.2208 m 2 being Philae’s intrusion and stamping cross-section. Applying equation ( 3 ) (see Supplementary Table 1 ) with the nominal values p m ≈ 10 −2 Pa, Φ Min = Φ RLP = 0.55, Δ ′ = 1.3 and Φ 1 = 0.05 shows that Φ Max ≈ Φ RCP and, thus, η =6.1 and V Eff = 0.33 m 3 . It should be noted that the volume filling factor assumed in the above calculation, Φ = Φ RLP ≈ 0.55, is that of the pebble assemblage. 
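Equation (3) above can be solved numerically for the final filling factor Φ_Max. The following is a sketch using the nominal parameter values quoted in the text, with a simple log-spaced trapezoid integrator to handle the near-singularity at Φ_RCP; it is an illustration of the calculation, not the authors' code:

```python
# Numerical solution of equation (3): find phi_max, the filling factor
# reached by Philae's stamping, given the dissipated energy and the
# compressive-strength law of equation (2). Nominal values from text.
import numpy as np

p_m = 1e-2        # Pa, characteristic strength of the pebble assemblage
phi_1 = 0.05      # fit parameter with P_C(phi_1) = 0
phi_rcp = 0.64    # random close packing (P_C diverges here)
phi_min = 0.55    # random loose packing, the initial state
delta_p = 1.3     # logarithmic compression-range exponent
v_min = 0.2208 * 0.246   # displaced volume, ~0.054 m^3
delta_E = 0.671          # J, energy dissipated by the stamping

def energy(phi_max, n=20000):
    """Right-hand side of equation (3) for a trial phi_max."""
    # integrate in u = phi_rcp - phi on a log-spaced grid
    u = np.geomspace(phi_rcp - phi_max, phi_rcp - phi_min, n)
    integrand = ((phi_rcp - u - phi_1) / u) ** delta_p
    integral = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(u))
    return v_min * p_m / (phi_max - phi_min) * integral

# Bisect on log10(phi_rcp - phi_max): energy grows as phi_max -> RCP
lo, hi = -12.0, -2.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if energy(phi_rcp - 10 ** mid) > delta_E:
        lo = mid
    else:
        hi = mid
phi_max = phi_rcp - 10 ** (0.5 * (lo + hi))

eta = phi_min / (phi_max - phi_min)        # volume scaling factor
print(f"phi_max = {phi_max:.4f}")          # ~0.6400, i.e. ~RCP
print(f"eta     = {eta:.2f}")              # ~6.1
print(f"V_eff   = {eta * v_min:.2f} m^3")  # ~0.33 m^3

# Hierarchical step: pebbles have inner filling factor 0.455 +/- 0.125,
# giving the overall boulder filling factor phi_RLP * phi_pebble
phi_boulder = phi_min * 0.455
print(f"phi_boulder = {phi_boulder:.2f}")  # ~0.25 (porosity ~75%)
```

With these nominal inputs the solution sits essentially at Φ_RCP, reproducing the quoted η = 6.1, V_Eff = 0.33 m 3 and Φ_boulder = 0.25.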
The pebbles themselves consist of submicrometre-sized dust/ice particles, so they possess internal porosity, which can be expressed by their inner volume filling factor Φ pebble ≈ 0.33–0.58, depending on the type of compression experienced by the pebbles, as shown in laboratory experiments 48. Taking the average of this range in the volume filling factor of the pebbles, Φ pebble = 0.455 ± 0.125, we get an overall volume filling factor of the boulder of Φ boulder = Φ RLP Φ pebble = 0.25 ± 0.07, comparable with the values determined by Rosetta 24, 25, 26. It should be noted that our two-step hierarchical model for the inner constitution of the boulder is plausible, but by no means the only feasible solution (see Supplementary Methods for more details).

Data availability

All OSIRIS, VIRTIS, RPC-MAG and ROMAP calibrated data are publicly available through the European Space Agency's Planetary Science Archive website. The Supplementary Information contains additional supporting images, data and explanatory text with the aim of allowing readers to understand what we have done and how we have done it.
After years of detective work, scientists working on the European Space Agency (ESA) Rosetta mission have now been able to locate where the Philae lander made its second and penultimate contact with the surface of Comet 67P/Churyumov-Gerasimenko on 12 November 2014, before finally coming to a halt 30 metres away. This landing was monitored from the German Aerospace Center Philae Control Center. Philae left traces behind; the lander pressed its top side and the housing of its sample drill into an icy crevice in a black rocky area covered with carbonaceous dust. As a result, Philae scratched open the surface, exposing ice from when the comet was formed that had been protected from the Sun's radiation ever since. The bare, bright icy surface, the outline of which is somewhat reminiscent of a skull, has now revealed the contact point, researchers write in the scientific publication Nature. All that was known previously was the location of the first contact, that there had been another impact following the rebound, and the location of the final landing site where Philae came to rest after two hours and where it was found towards the end of the Rosetta mission in 2016 . "Now we finally know the exact place where Philae touched down on the comet for the second time. This will allow us to fully reconstruct the lander's trajectory and derive important scientific results from the telemetry data as well as measurements from some of the instruments operating during the landing process," explains Jean-Baptiste Vincent from the DLR Institute of Planetary Research, who was involved in the research published today. "Philae had left us with one final mystery waiting to be solved," says ESA's Laurence O'Rourke, the lead author of the study. 
The team of scientists were motivated to carry out a multi-year search for 'TD2', touchdown point two: "It was important to find the touchdown site because sensors on Philae indicated that it had dug into the surface, most likely exposing the primitive ice hidden underneath." Over the past few years, the team combed through the numerous images and data from Philae's landing area, searching for the location like a needle in a haystack. Comet ice in the shape of a skull on 67P. Credit: ESA/Rosetta/MPS for OSIRIS Team MPS/UPD/LAM/IAA/SSO/INTA/UPM/DASP/IDA; O’Rourke et al (2020) The magnetometer gave the decisive indication For a long time, and to no avail, the scientists repeatedly searched for spots of bare ice in the suspected region using high-resolution images acquired by the Optical, Spectroscopic and Infrared Remote Imaging System (OSIRIS) instrument, developed by the Max Planck Institute for Solar System Research (MPS) in Göttingen and carried on board the Rosetta orbiter. But it was the evaluation of measurements made by the ROsetta MAgnetometer and Plasma monitor (ROMAP), built for Philae under the direction of the Technical University of Braunschweig, that put the scientists on the right track. The team investigated changes in the data that occurred when the magnetometer boom, projecting 48 centimetres from the lander, struck the surface and bent. This created a characteristic pattern in the ROMAP data, which showed how the boom moved relative to Philae and allowed the duration of the lander's penetration of the ice to be estimated. The ROMAP data were correlated with data from Rosetta's RPC magnetometer to determine Philae's exact orientation. Analysis of the data revealed that Philae had spent almost two full minutes—not unusual in this very low gravity environment—at the second surface contact point, making at least four different surface contacts as the lander 'ploughed' through the rugged landscape. 
A particularly remarkable imprint, which became visible in the images, was made when the top of Philae sank 25 centimetres into the ice at the side of an open crevice, leaving visible traces of the sample drill and the lander's top. The peaks in the magnetic field data resulting from the boom movement show that Philae took three seconds to make this particular 'dent'. Philae's contact with the comet put into regional context. Credit: Images: Touchdown 1: ESA/Rosetta/Philae/ROLIS/DLR; all other images: ESA/Rosetta/MPS for OSIRIS Team MPS/UPD/LAM/IAA/SSO/INTA/UPM/DASP/IDA; Analysis: O'Rourke et al (2020) A sculpture of bare comet ice in the shape of a skull The ROMAP data supported the discovery of this site with the ice-filled, bright open crevice in the OSIRIS images. When viewed from above, it reminded the researchers of a skull, so they named the contact point 'Skull-top Ridge'. The right 'eye' of the skull was formed where Philae's top side compressed the comet dust, while Philae scratched through the gap between the dust-covered ice blocks like a windmill, only to finally lift off again and cover the last few metres to its final resting place. "At the time the data showed that Philae had made contact with the surface several times and finally landed in a poorly lit spot. We also knew the approximate final landing site from CONSERT radar measurements. However, Philae's exact trajectory and points of contact could not be interpreted so quickly," recalls Philae Project Manager Stephan Ulamec from DLR. The evaluation of the OSIRIS images together with those acquired by the Visible and InfraRed Thermal Imaging Spectrometer (VIRTIS) instrument confirmed that the bright material is pure water ice, which was exposed by the Philae surface contact over an area of 3.5 square metres. During this contact, the region was still in shadow. 
It was not until months later that sunlight fell on it, so the ice still shone brightly in the Sun and was barely weathered and darkened by the space environment. Only the ices of other, more volatile substances such as carbon monoxide or carbon dioxide had evaporated. Philae leaves traces at contact point two. Credit: Images: ESA/Rosetta/MPS for OSIRIS Team MPS/UPD/LAM/IAA/SSO/INTA/UPM/DASP/IDA; Data: ESA/Rosetta/Philae/ROMAP; Analysis: O'Rourke et al. (2020) Comet 67P is full of voids and without much cohesion This reconstruction of events is, in itself, challenging detective work, but the first direct measurement of the consistency of comet ice also provides important insights. The parameters of the surface contact showed that this ancient, 4.5-billion-year-old mixture of ice and dust is extraordinarily soft—it is fluffier than the froth on a cappuccino, the foam in a bathtub or the whitecaps of waves meeting the coast. "The mechanical strength that holds the ice and dust together in this boulder is just 12 pascals. That is not much more than 'nothing'," explains Jean-Baptiste Vincent, who is studying the compressive and tensile strength of 'primitive' ice. This ice has been stored in comets for 4.5 billion years as if in a cosmic freezer, bearing witness to the earliest period of the Solar System. The investigation also allowed an estimate of the porosity of the 'rock' touched by Philae: approximately 75 percent, three quarters of the interior, consists of voids. The 'boulders' omnipresent in the images are thus more comparable to Styrofoam rocks in a film-studio fantasy landscape than to real, hard, massive rocks. At another location, a six-metre-wide rock, captured in several images, even moved uphill due to the gas pressure of evaporating comet ice. 
These observations confirm a result of the Rosetta orbiter mission, which gave a similar numerical value for the proportion of voids and showed that the interior of 67P/Churyumov-Gerasimenko should be homogeneous down to a block size of one metre. This leads to the conclusion that the 'boulders' on the comet's surface represent the overall state of its interior as it was formed approximately 4.5 billion years ago. The result is not only scientifically relevant for the characterisation of comets, which alongside asteroids are the most primordial bodies in the Solar System, but also supports the planning of future missions to visit comets and collect samples to be returned to Earth. Such missions are currently under consideration. Philae's magnetometer measurements on TD2. Credit: ESA/Rosetta/Philae/ROMAP Where is Philae? Credit: ESA/Rosetta/MPS for OSIRIS Team MPS/UPD/LAM/IAA/SSO/INTA/UPM/DASP/IDA 12 November 2014—the first touchdown on a comet Philae gently separated from its mother spacecraft Rosetta in the afternoon (CET) of 12 November 2014 and descended at walking pace towards Comet 67P/Churyumov-Gerasimenko. As images from DLR's ROsetta Lander Imaging System (ROLIS) camera later showed, the lander, with a volume of approximately one cubic metre, hit the planned Agilkia landing site almost perfectly. However, Philae could not anchor itself on comet 67P because its anchoring harpoons did not fire. Since the gravitational attraction at the comet's surface is only about one hundred-thousandth that at Earth's surface, Philae bounced off the comet, rose to a height of one kilometre and floated over the region of Hatmehit on the smaller of the comet's two lobes. After more than two hours, Philae again made contact with comet 67P. 
The data transmitted to Rosetta during those two hours showed that the lander had come to rest after its turbulent bouncing flight, a violent collision with a cliff edge and two further contacts with the surface. A little later, Philae was also able to transmit images of the landing site, christened Abydos, to Earth via Rosetta. Near the end of the mission: Philae found! Credit: Main image and lander inset: ESA/Rosetta/MPS for OSIRIS Team MPS/UPD/LAM/IAA/SSO/INTA/UPM/DASP/IDA; context: ESA/Rosetta/NavCam—CC BY-SA IGO 3.0 Comet wide-angle view. Credit: ESA/Rosetta/MPS for OSIRIS Team MPS/UPD/LAM/IAA/SSO/INTA/UPM/DASP/IDA These images quickly showed that the lander was not, as had been planned, in a favourable location with sufficient sunlight. For the team in the DLR control room, the real work began after the unexpected landing: they operated the lander for almost 60 hours, commanding its 10 onboard instruments and finally turning it slightly towards the Sun. Nevertheless, the primary battery was eventually exhausted, and the secondary batteries could not be sufficiently recharged because the Sun shone on Philae for just under 1.5 hours during each 12.4-hour comet day. In fact, the Rosetta team of several hundred people spent 22 months puzzling over where Philae actually was. Only a close-up acquired by the OSIRIS camera system on 2 September 2016, a few weeks before the end of the mission, showed that Philae was stuck upright in a kind of crevice under an overhang that shielded it from sunlight. At the end of the mission, the Rosetta spacecraft was also set down on 67P/Churyumov-Gerasimenko in a final manoeuvre on 30 September 2016.
10.1038/s41586-020-2834-3
Biology
Study: Some woodpeckers imitate a neighbor's plumage
Eliot T. Miller et al, Ecological and geographical overlap drive plumage evolution and mimicry in woodpeckers, Nature Communications (2019). DOI: 10.1038/s41467-019-09721-w Journal information: Nature Communications
http://dx.doi.org/10.1038/s41467-019-09721-w
https://phys.org/news/2019-04-woodpeckers-imitate-neighbor-plumage.html
Abstract Organismal appearances are shaped by selection from both biotic and abiotic drivers. For example, Gloger’s rule describes the pervasive pattern that more pigmented populations are found in more humid areas. However, species may also converge on nearly identical colours and patterns in sympatry, often to avoid predation by mimicking noxious species. Here we leverage a massive global citizen-science database to determine how biotic and abiotic factors act in concert to shape plumage in the world’s 230 species of woodpeckers. We find that habitat and climate profoundly influence woodpecker plumage, and we recover support for the generality of Gloger’s rule. However, many species exhibit remarkable convergence explained neither by these factors nor by shared ancestry. Instead, this convergence is associated with geographic overlap between species, suggesting occasional strong selection for interspecific mimicry. Introduction The coloration and patterning of organisms is shaped over evolutionary timescales by a variety of factors, both biotic and abiotic, including temperature and humidity 1 , 2 , 3 , 4 , 5 . Gloger’s rule, for example, describes the prominent ecological pattern wherein more pigmented populations are found in more humid areas 1 , 6 , 7 , 8 . Sexual selection can push organisms to become conspicuous, whilst the risk of predation can select for inconspicuous visual signals 9 , 10 , 11 . The external appearances of animals are subject to frequent study because such work has the power to shape our understanding of phenotypic evolution. Yet, our understanding of how factors such as climate and biotic interactions with predators, competitors, and mates combine to influence evolutionary outcomes across large radiations remains rudimentary. This is true even for birds, regular subjects of research on phenotypic evolution 12 , 13 . 
Here, we employ a phylogenetic comparative framework, coupled with remote-sensing data and a large citizen science dataset, to examine the combined effects of climate, habitat, evolutionary history, and community composition on plumage pattern and colour evolution in woodpeckers (Picidae). This diverse avian clade of 230 bird species is an excellent group in which to examine the evolution of external appearances because they occupy a broad range of climates across many habitats. Woodpeckers also display a wide range of plumages, from species with boldly pied patterns to others with large bright red patches, to still others that are entirely dull olive (Fig. 1 ). Furthermore, woodpeckers exhibit several cases of ostensible plumage mimicry 14 , 15 , highlighted by a recent time-calibrated phylogeny 16 . Although qualitatively compelling, it is unclear if these events can be explained simply as consequences of shared climate, habitat, and evolutionary history. Regardless of the answer to this question, these purported mimicry events and the impressive variation in plumage among woodpecker species provide the raw variation that we examine here to disentangle the contribution of the various abiotic and biotic factors that drive plumage evolution. Fig. 1 Evolutionary relationships and plumage similarity among exemplar species. Climate partially determines variation in woodpecker plumage. Lines lead from tips of phylogeny (left) to centroid of each species’ geographic distribution and are coloured according to mean climate regime of each species. These species shared a common ancestor ~ 6.5 mya. The colour scale depicts a gradient from warm (yellow) to seasonally cold regions (blue). eBird records for these species are plotted in the same colours as large points on the map. All other eBird woodpecker records are overlaid as smaller points and coloured similarly. Plumage dendrogram (right) shows the plumage dissimilarity relationships among the same set of species. 
Veniliornis mixtus , long classified as a member of Picoides , is inferred to have invaded seasonal climates in the southern hemisphere, and accordingly evolved bold black and white plumage. Picoides fumigatus , long classified as a member of Veniliornis , is inferred to have invaded warm climates near the equator, and accordingly evolved dark, subtly marked plumage. Picoides pubescens and P. villosus are rather distantly related but largely sympatric; they are inferred to have converged on one another in plumage above and beyond what would be expected based on shared climate, habitat, and evolutionary history. Traditional scientific names are used in this figure to aid explanation, but the illustrated species are currently all members of an expanded clade, Dryobates . Illustrations © HBW Alive/Lynx Edicions, map by authors Full size image We find that climate and habitat exert strong influences on woodpecker plumage. Species from humid areas, for example, tend to be darker and less boldly patterned than those from drier regions, and thus offer compelling support of Gloger’s rule. These factors and shared evolutionary history explain some of the variation in woodpecker plumage, but they are insufficient to explain some of the dramatic convergence seen between various sympatric woodpecker species. Instead, sympatry in and of itself appears to drive certain species pairs to converge in plumage, lending credence to the notion that these species are true avian plumage mimics. Results Multidimensional, distance-based approaches To investigate how climate, habitat, social interactions, and evolutionary history determine woodpecker plumage outcomes, we used multidimensional-colour and pattern-quantification tools to measure species’ colouration and patterns, quantifying species’ plumages from a standardized source (Figs. 2 and 3 ) 12 , 17 , 18 . 
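The paper quantifies plumage pattern with a granularity ("pattern energy") analysis based on isotropic band-pass filters. The sketch below is a simplified FFT-based version of that idea, not the authors' exact pipeline: it reports the share of an image's Fourier power falling in octave-wide frequency bands, each band corresponding to pattern elements of a given pixel size.

```python
import numpy as np

def pattern_energy_spectrum(img, sizes=(2, 4, 8, 16, 32, 64)):
    """Share of Fourier power in octave-wide radial frequency bands.

    An element of size s pixels (pattern period 2*s) has its fundamental
    at n/(2*s) cycles per image, so band s covers [n/(2s), n/s)."""
    n = img.shape[0]
    power = np.abs(np.fft.fftshift(np.fft.fft2(img - img.mean()))) ** 2
    yy, xx = np.indices(img.shape)
    r = np.hypot(yy - n / 2, xx - n / 2)    # radial frequency, cycles/image
    energies = [power[(r >= n / (2 * s)) & (r < n / s)].sum() for s in sizes]
    total = sum(energies)
    return [e / total for e in energies]

# Toy check with checkerboards: coarse 16-px blocks vs fine 2-px blocks.
board = lambda blocks, px: np.kron(np.indices((blocks, blocks)).sum(0) % 2,
                                   np.ones((px, px)))
spec_coarse = pattern_energy_spectrum(board(8, 16))   # 128 x 128 image
spec_fine = pattern_energy_spectrum(board(64, 2))
```

As in Fig. 3, a coarsely patterned image concentrates its energy at large element sizes and a finely patterned one at small sizes.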
Evidence suggests that pattern and colour are likely processed separately in vertebrate brains, with achromatic (i.e., luminance) channels used to process pattern information 19 , and differential stimulation of cones used to encode chromatic information 20 . While both plumage colour and pattern are inherently multivariate, we reduced this complexity into a composite matrix of pairwise species differences to address whether purported convergences were a mere by-product of shared evolutionary history or, if not, whether shared climate, habitat, or geographic overlap could explain these events. We incorporated the potential for interactions between pairs of species into the analysis by quantifying pairwise geographic range overlap using millions of globally crowd-sourced citizen science observations from eBird; 21 species in complete allopatry have no chance of interacting, while increasing degrees of sympatry should correlate with the probability of evolutionarily meaningful interactions. Fig. 2 Principal components analysis (PCA) of species-averaged woodpecker colour values. Principal component one (colourPC1) explains 45% of the variation in measured colour scores. Higher PC1 scores correspond to greater luminance values, and more yellow and less blue. Principal component two (colourPC2) explains an additional 36% of variation in overall colour scores. Higher PC2 scores correspond to more green and less red colouration. Coloured circles behind each woodpecker species correspond to the average CIE L*a*b scores for the 1000 randomly drawn colour samples from that species. These PCA values are used for species-level analyses (e.g., Fig. 5 ). Illustrations © HBW Alive/Lynx Edicions Full size image Fig. 3 Major axes of plumage pattern variation, summarized with principal components analysis (PCA). From granularity analysis, these values are used for species-level approaches. 
a Pattern energy spectra for exemplar species characterized by maximally divergent PC scores (i = Dryobates villosus , ii = Mulleripicus funebris , iii = Colaptes fernandinae , iv = Melanerpes erythrocephalus ). These show information on relative contributions of different granularities to overall appearance. b Pattern maps of exemplar species enable visualization of energy at subsets of isotropic band-pass filter sizes (2, 32, 64, 128, 256, and 512 pixels). c Principal component loadings of pattern energy spectra across filter size for all species reveals how pattern elements of different sizes influence PC scores. d Pattern PC1 and PC2, collectively, account for 82.1% of variation across woodpeckers. Species (i–iv) illustrate the extremes of variation along PC1 and PC2: (i) exhibits high energy scores across most pattern element sizes, with small, medium, and large pattern elements; (ii) has low energy scores across the spectrum, with few pattern elements of any size; (iii) has many small pattern elements and few of any other sizes; (iv) has only medium and large size pattern elements. Illustrations © HBW Alive/Lynx Edicions Full size image Variation in climate (multiple distance matrix regression, r = 0.055, p = 0.006), habitat ( r = 0.106, p = 0.007) and, to a lesser degree, phylogenetic relationships ( r = 0.001, p = 0.015) were all correlated with woodpecker plumage similarity scores. These results were robust to phylogenetic uncertainty (Supplementary Fig. 1 ). In short, woodpecker species in similar climates and habitats tend to look alike, even after accounting for shared ancestry. However, beyond the influences of habitat, climate, and evolutionary relatedness, we also found that close sympatry was a strong predictor of plumage similarity for the most similar-looking species pairs (Fig. 4 ). 
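The multiple distance matrix regressions reported above can be sketched as follows. This is a minimal illustration on synthetic data, not the authors' code: ordinary least squares on the unfolded upper triangles of the distance matrices, with significance from a Mantel-style permutation of the response matrix.

```python
import numpy as np

def mrm(response, predictors, n_perm=999, seed=0):
    """Multiple regression on distance matrices (minimal sketch).

    OLS on the unfolded upper triangles; significance comes from jointly
    permuting the rows/columns of the response matrix, which preserves
    its internal dependence structure."""
    rng = np.random.default_rng(seed)
    n = response.shape[0]
    iu = np.triu_indices(n, k=1)

    def fit(resp):
        y = resp[iu]
        X = np.column_stack([np.ones(y.size)] + [P[iu] for P in predictors])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        r2 = 1.0 - np.var(y - X @ beta) / np.var(y)
        return beta, r2

    beta, r2 = fit(response)
    exceed = sum(fit(response[np.ix_(p, p)])[1] >= r2
                 for p in (rng.permutation(n) for _ in range(n_perm)))
    return beta, r2, (exceed + 1) / (n_perm + 1)

# Synthetic example: plumage distances driven by climate, not habitat.
rng = np.random.default_rng(1)
n = 25
climate = rng.normal(size=n)
habitat = rng.normal(size=n)
plumage = climate + 0.3 * rng.normal(size=n)
dist = lambda v: np.abs(v[:, None] - v[None, :])
beta, r2, pval = mrm(dist(plumage), [dist(climate), dist(habitat)])
print(f"climate slope = {beta[1]:.2f}, habitat slope = {beta[2]:.2f}, "
      f"r2 = {r2:.2f}, p = {pval:.3f}")
```

Because pairwise distances are not independent observations, the permutation test, rather than the usual OLS t-test, supplies the p-values, as in the Mantel-style analyses above.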
We interpret this result as evidence for multiple instances of plumage mimicry per se, transcending broader patterns of plumage convergence driven by similar environmental conditions. Following this result, we developed a method (see ‘Identification of putative plumage mimics’ in the Methods) to identify the species pairs that powered this result. Using this method, we validated many previously qualitatively identified mimicry complexes, including the Downy-Hairy Woodpecker system (Fig. 1 ) 22 , repeated convergences between members of Veniliornis and Piculus 23 , Dinopium and Chrysocolaptes , Dryocopus and Campephilus 24 , and the remarkable convergence of Celeus galeatus on Dryocopus and Campephilus 15 . Collectively, these distance matrix-based analyses provide a powerful tool to identify and understand the various factors that drive evolutionary patterns of convergence and divergence. Fig. 4 Modified Mantel correlogram shows how correlation of range and plumage changes across varying plumage dissimilarity. For dyadic comparisons with plumage dissimilarities of 0–0.2, geographic range overlap per se is statistically significantly associated with increasing plumage phenotype matching between already similar looking species pairs. The relationship was reversed at intermediate levels of plumage dissimilarity, where geographic range overlap is statistically significantly associated with decreasing plumage phenotype matching between somewhat similar looking species pairs (plumage dissimilarities of 20–30%). Illustrations show examples of species with plumage dissimilarities in the range indicated as compared with the Downy Woodpecker ( Dryobates pubescens ) inset in the legend. The red line shows the observed correlation coefficients, while the shaded grey area shows the expected correlation coefficients given the simulations described in the text. 
The size of the red circle corresponds to the standardized effect size of the observed correlation coefficient; values greater than +/−1.96 deviate beyond 95% of simulated values. Illustrations © HBW Alive/Lynx Edicions Full size image These previous analyses focused on the whole-body phenotype; however, it is possible that environmental and social drivers of plumage operate in unique ways on different plumage patches 12 . To investigate this possibility, we ran additional analyses for each of three different body segments: (1) the back, wings, and tail; (2) the head; and (3) the breast, belly and vent. The whole-body results were largely recapitulated by these body-region-specific results, with subtle but notable differences. In particular, range overlap was strongly associated with convergence in back plumage similarity (with genetic and climate similarity not implicated), whereas genetic similarity was most closely associated with belly and head plumage similarity (with habitat [belly and head] and climate [belly] not involved). To gain further insight into the evolutionary drivers of particular colours and patterns, we subsequently employed species-level phylogenetic comparative approaches. Species-level phylogenetic comparative approaches Considering the full-body plumage phenotype, we found that precipitation drives global patterns of pigmentation and patterning in woodpeckers. In particular, darker species tend to inhabit areas of higher annual precipitation (phylogenetic generalized least squares [PGLS] r 2 = 0.084, p < 0.001, Fig. 5a ), supporting Gloger’s rule 1 , 8 . In addition, high precipitation is also associated with reduced patterning (PGLS r 2 = 0.170, p < 0.001, Fig. 5c ), augmenting the generality of Gloger’s rule. 
While this pattern of dark populations occurring in areas of high precipitation is so well known as to be considered a “rule”, few large-scale comparative studies have quantitatively assessed this across a large radiation 8 . The mechanism underlying Gloger’s rule remains debated, but proposed drivers include improved background matching 25 in response to increased predation pressure in humid environments 26 , and defence against feather-degrading parasites 27 . There are some boldly marked woodpecker species in humid areas, but they invariably achieve these conspicuous phenotypes with minimal use of white plumage. This hints at the existence of an evolutionary trade-off wherein Gloger’s rule is due to the ability of melanin to forestall feather wear (e.g., by inhibiting parasites prevalent in humid areas 27 ), which subsequently narrows the breadth of means by which humid forest-inhabiting woodpeckers can achieve bold plumage phenotypes. Alternatively, unconcealed large white plumage patches might simply subject humid forest-dwelling birds to evolutionarily unacceptable levels of predation (the abundance and preferences of predators such as Accipiter hawks would shed more light on this issue, given that increasing body mass is associated with increasingly bold plumage patches in woodpeckers, Fig. 5c ). While additional research is necessary to delineate the mechanism(s) responsible, our results expand the generality of Gloger’s rule and show that it may be involved in phenotypic convergence among disparate lineages inhabiting similar forests. Fig. 5 Variable importance scores and model-averaged parameter estimates from phylogenetic generalized least squares regressions. These quantify how colour and pattern vary as a function of climate, habitat, body mass, sexual size dimorphism, latitude and longitude, with summaries of the climate and habitat principal component analyses (PCA). 
Model-averaged p -values of explanatory factors are colour-coded from yellow to blue; only factors with p -values < 0.05 are coloured yellow and discussed here. a Dark birds are heavier and occur in wetter climates. b Greenish (as opposed to reddish) birds are found in more open habitats. c Less-patterned birds are found in aseasonal climates, open habitats, and temperate forests. d Birds patterned in large plumage elements, such as large colour patches, tend to be larger in body size. e Climate PCA results, illustrating the distribution of woodpeckers in climate space, with qualitative descriptions of the first two PC axes. f Habitat PCA results, showing the distribution of woodpeckers across global habitats, with qualitative descriptions of the first two PC axes. Illustrations © HBW Alive/Lynx Edicions Full size image Seasonality, in addition to average annual precipitation and temperature, also exerts significant influence on woodpecker plumage. The gradient from dark- to light-plumaged woodpeckers (colorPC1) was best explained by a model that included body mass, latitude, and seasonality in precipitation. Darker birds are larger, are found at lower latitudes, and in climates that receive considerable precipitation throughout the year (PGLS r 2 = 0.084, p < 0.001, Fig. 5a ). The gradient from red to green plumaged woodpeckers (colorPC2) was best explained by a model that included variation in temperature seasonality, and that included the dichotomy between open habitats and closed forests. Specifically, green birds tend to be found in climates that experience seasonal temperature fluctuations, and in open habitats (PGLS r 2 = 0.073, p < 0.001, Fig. 5b ). Seasonality also drives woodpecker patterning, and boldly marked birds (patternPC1) tend to be found in seasonal climates, open habitats, and temperate forests (PGLS r 2 = 0.170, p < 0.001, Fig. 5c ). 
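Phylogenetic generalized least squares, the workhorse of these species-level analyses, reduces to GLS with a covariance matrix V implied by the tree under Brownian motion (V[i, j] = shared branch length of species i and j). A minimal sketch with a hypothetical two-clade covariance (illustrative values only, not the woodpecker data):

```python
import numpy as np

def pgls(y, x, V):
    """GLS estimate of intercept and slope under phylogenetic covariance V:
    beta = (X' V^-1 X)^-1 X' V^-1 y. With V = I this is ordinary least
    squares, i.e., no phylogenetic correction."""
    X = np.column_stack([np.ones(len(y)), x])
    Vi = np.linalg.inv(V)
    return np.linalg.solve(X.T @ Vi @ X, X.T @ Vi @ y)

rng = np.random.default_rng(2)
n = 8
# Hypothetical Brownian covariance: two clades of four species; clade
# members share 0.8 units of branch length, each tip has 0.2 of its own.
V = np.kron(np.eye(2), np.full((4, 4), 0.8)) + 0.2 * np.eye(n)
x = rng.normal(size=n)
y = 0.5 * x + np.linalg.cholesky(V) @ rng.normal(size=n)  # phylogenetic noise
beta_pgls = pgls(y, x, V)
beta_ols = pgls(y, x, np.eye(n))
print("PGLS slope:", round(beta_pgls[1], 3), " OLS slope:", round(beta_ols[1], 3))
```

Weighting by V⁻¹ down-weights the pseudo-replication contributed by close relatives, which is why the paper's r² and p-values are reported from PGLS rather than ordinary regression.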
We had suspected that variation along the gradient from species with large plumage elements to those with barring and spotting (patternPC2) might be associated with sexual selection, but after accounting for body mass, patternPC2 was not associated with sexual size dimorphism; instead, more finely marked birds tend to be smaller and found in lower reflectance habitats such as rainforests (PGLS r 2 = 0.043, p = 0.025, Fig. 5d ). As with the results from the multiple distance matrix regressions, these results were robust to phylogenetic uncertainty (Supplementary Figs. 3 – 6 ). Results were largely similar when considering the drivers of plumage variation for specific body parts, particularly for back plumage coloration and patterning (Supplementary Fig. 7 ). Yet these body-part-specific analyses did provide additional insights, and investigating the mechanistic bases for these relationships should prove fruitful ground for future research. For example, red-headed species tend to be found in closed habitats, whereas black-, white-, and grey-headed species tend to be found in open habitats (Supplementary Fig. 8 ). In dark-headed species, including those with red heads, females tended to be heavier than males, whereas species with yellow and pale heads tend to have heavier males. Additionally, red-bellied species are most often found in forested habitats, species with boldly patterned bellies tend to have males that are heavier than females, and species with bellies patterned with large plumage patches (as opposed to fine barring) tend to be heavier and live in open habitats (Supplementary Fig. 9 ). Discussion Although climate and habitat appear responsible for some of the convergence in external appearance in woodpeckers, our analyses confirmed the decades-old suggestions 28 that some species have converged above and beyond what would be expected based only on selection pressures from the environments they inhabit. 
Sympatry, a proxy for the likelihood of evolutionarily meaningful interspecific interactions, was a strong predictor of plumage similarity for species exhibiting large geographic range overlaps (Fig. 4 ). We interpret this finding as evidence that the pattern of convergence we document is true mimicry, i.e., phenotypic evolution by one or both parties in response to a shared signal receiver 3 , 4 . Indeed, our study almost certainly underestimates the degree to which close sympatry leads to mimicry in woodpeckers, since some postulated mimetic dyads are well known to track one another at the subspecific level, which we could not account for here. Moreover, the recent taxonomic revision of Chrysocolaptes 29 , not yet matched by equivalent efforts in Dinopium , meant that we could not completely capture the breadth of plumage matching events in this mimicry complex (e.g., the extraordinary convergence between the maroon-coloured Sri Lankan endemics C . stricklandii and D . benghalense psarodes ). There are two contested questions regarding plumage mimicry: whether it truly occurs 24 , 28 , 30 and, if it does, what process(es) drive the pattern 31 , 32 , 33 , 34 . Here we have shown that plumage mimicry does indeed occur and is pervasive across the woodpecker evolutionary tree, indicating that the processes deserve further study. Given the strong evidence that mimicry occurs in woodpeckers—a taxon with no known chemical defences—we predict renewed research interest in understanding the mechanisms responsible for these patterns. Only a handful of other avian studies 22 , 35 have empirically demonstrated that convergence—on the scale we document among woodpeckers—is not exclusively a product of shared evolutionary history or environmental space, but this has not deterred more than a century's worth of careful rumination over the mechanisms responsible for the compelling patterns 28 , 32 , 33 , 36 , 37 , 38 . 
Recently, it has been shown 34 that the smaller species in plumage mimicry complexes may derive a benefit by fooling third parties into believing they are the socially dominant model species, and this remains the best empirically supported hypothesis in birds, but experimental work is needed to adequately quantify the selective advantage mimicry might confer. Relatedly, it remains unknown how distantly related lineages achieve plumage convergence genomically. Are multiple mutations required, each of which increases the degree of plumage convergence? Or might selection act on genetic modules controlled by a few loci shared across woodpeckers? Or might rare hybridization events between sympatric species have resulted in adaptive introgression of relevant plumage control loci 23 , 39 ? In summary, habitat and climate are strong determinants of woodpecker plumage. Shared evolutionary history shapes plumage phenotypes, but selective factors have driven plumage divergence far beyond that expected of simple evolutionary drift. Perhaps most notably, the plumage similarity predicted by shared climate, habitat, and evolutionary history is insufficient to explain the large number of cases we detected of closely sympatric but distantly related woodpecker species converging in colour and pattern. Woodpeckers appear to be involved in globally replicated mimicry complexes similar to those in well-studied groups such as butterflies 39 , and while woodpeckers are among the most conspicuous avian plumage mimics, others such as toucans exhibit qualitatively similar patterns 14 . Assessing how these evolutionary constraints and selective pressures have operated in concert is a research question that has only recently become more tractable with the advent of large, time-calibrated molecular phylogenies, massive distributional databases such as eBird, and powerful computing techniques like pattern analysis. 
It seems likely that different clades have been more or less influenced by factors such as climate, habitat, and social interactions, and understanding how and why these factors differ among clades should be a particularly fertile line of enquiry.

Methods

Taxonomic reconciliation and creation of complete phylogenies

A time-dated phylogeny containing nearly all known woodpecker species was recently published by Shakya and colleagues 16 . As described below, we used (and verified the use of) illustrations from the Handbook of the Birds of the World (HBW) Alive 17 to quantify woodpecker plumage, and we used eBird 21 , a massively crowd-sourced bird observation database, to define spatial, climate, and habitat overlap between species. Each of these references uses a slightly different taxonomy. Our goal was to use the species-level concepts from the most recent eBird/Clements taxonomy 40 as our final classification system. To reconcile these three taxonomies (HBW, Shakya et al., and eBird/Clements), we obtained a set of 10,000 credible trees, kindly provided by Shakya 16 . We checked to ensure that each tree contained no polytomies, was ultrametric, and included the same set of tip labels as the other trees. After passing these checks, we discarded the first 30% of trees as burn-in, then sampled 1000 of the remaining trees. We extracted a list of the tip labels from the first tree, then determined to which eBird taxon this label was best applied. Across the set of 1000 credible trees we then swapped out the original tip labels for their eBird taxonomic identities. For each credible tree, we then randomly dropped all but one of any taxon represented by more than one terminal. We then worked in the opposite direction and identified all woodpecker taxa according to eBird. This process made it clear which species, as recognized by eBird, were missing from the Shakya tree. Twenty-one such missing taxa were identified: Picumnus fuscus , P . limae , P . fulvescens , P .
granadensis, P. cinnamomeus, Dinopium everetti, Gecinulus viridis, Mulleripicus fulvus, Piculus simplex, Dryocopus hodgei, Melanerpes pulcher, Xiphidiopicus percussus, Veniliornis maculifrons, Dendrocopos analis, Dendrocopos ramsayi, Colaptes fernandinae, Chrysocolaptes festivus, C. xanthocephalus, C. strictus, C. guttacristatus, and C. stricklandi. We added these using the R package addTaxa 41 , and taxonomic hypotheses outlined in previous work (reviewed in Shakya et al. 16 ). Eighteen of these taxa have fairly precise hypothesized taxonomic positions, which we were able to leverage to carefully circumscribe where they were bound into the tree. As an example, Dinopium everetti was recently split from D. javanense , so it was simply added as sister to the latter species. The precise phylogenetic positions of the remaining three taxa are less well known. For these, we first added C. fernandinae as sister to Colaptes sensu stricto (as previously found 42 ), then added Piculus simplex into the clade Piculus + Colaptes , as previous work showed some members of the former genus to actually belong to the latter 42 . We added X. percussus as sister to Melanerpes striatus 16 ; and we added P. cinnamomeus into Picumnus while ensuring that the Old World P. innominatus remained sister to the rest of the genus (it is very likely the New World Picumnus form a clade). Each of the 1000 resulting trees contained 230 species. As described below, most analyses were run across this set of complete credible trees. However, for other analyses, and particularly for visualization purposes, we also derived a maximum clade credibility tree from this set of complete trees 43 . Finally, for each taxon in the complete tree, we identified the illustration that best represented it in the Handbook of the Birds of the World Alive.
When the latter recognized multiple subspecies for a given taxon from the final tree, we used the nominate subspecies as our unit of analysis for colour and pattern (see below).

Quantifying plumage colour and pattern from illustrations

We calculated plumage colour and pattern scores for males of 230 species of woodpeckers using digital images of colour plates obtained from The Handbook of the Birds of the World Alive 17 . Each image was imported to Adobe Photoshop (Adobe Inc., San Jose, CA) at 300 dots per inch, scaled to a uniform size, and saved as a Tagged Image File (.TIF). Following creation of .TIF files, we ran a custom macro in ImageJ 44 to sample the red (R), green (G), and blue (B) pixel values for each of 1000 random, 9-pixel-diameter circles from each woodpecker image. RGB values were transformed to CIELAB coordinates, an approximately perceptually-uniform colour space (distance between points is perceptually equivalent in all directions) 45 , 46 . To calculate pairwise colour dissimilarity scores, we plotted the 1000 colour measurements from the first species (e.g., species A) in three-dimensional CIELAB space, as well as the 1000 measurements for the second species (e.g., species B) in the dyadic comparison. We then calculated the average Mahalanobis distance 47 between the colours representing species A and the colours representing species B. We repeated this process for every possible combination (26,335 unique dyadic combinations) to generate an overall colour dissimilarity matrix. Additionally, to facilitate a more in-depth investigation of the underlying variation in colour among species, as well as how such variation is related to environmental, genetic, and geographic influences, we conducted principal components analysis (PCA) on all 230,000 colour measurements. Following PCA, we averaged principal component (PC) scores for each species to create mean PC scores describing the average colour values for each species (Fig. 2 ).
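The colour comparison described above, in which two clouds of 1000 CIELAB points are reduced to a single average Mahalanobis distance, can be sketched as follows. This is an illustrative Python/NumPy reconstruction rather than the authors' ImageJ/R pipeline, and it assumes each cloud is measured against the mean and covariance of the other, with the two directed averages then averaged (a symmetry convention not spelled out in the text); the two "species" clouds are hypothetical.

```python
import numpy as np

def mahalanobis_cloud_distance(a, b):
    """Average Mahalanobis distance between two colour clouds.

    Each point in cloud `a` is scored against the mean and covariance
    of cloud `b`, and vice versa; the two directed averages are then
    averaged (an assumed symmetrisation)."""
    def one_way(x, y):
        mu = y.mean(axis=0)
        vi = np.linalg.pinv(np.cov(y, rowvar=False))   # inverse covariance
        diff = x - mu
        # per-point quadratic form diff' * vi * diff, then sqrt and mean
        return np.sqrt(np.einsum('ij,jk,ik->i', diff, vi, diff)).mean()
    return 0.5 * (one_way(a, b) + one_way(b, a))

# Two hypothetical species, 1000 CIELAB samples each (L*, a*, b*)
rng = np.random.default_rng(42)
dark_species = rng.normal([30.0, 5.0, 5.0], 4.0, size=(1000, 3))
bright_species = rng.normal([80.0, 25.0, 45.0], 4.0, size=(1000, 3))
d = mahalanobis_cloud_distance(dark_species, bright_species)
```

On these toy clouds, the cross-species distance dwarfs a species' distance to itself, mirroring how well-separated plumages yield large entries in the colour dissimilarity matrix.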
PC1 describes a dark-to-bright continuum, as well as a blue-to-yellow continuum (high loadings for L* and b*; Supplementary Table 1 ; Fig. 2 ), while PC2 primarily describes a red-to-green continuum (high loadings for a*; Supplementary Table 1 ; Fig. 2 ). We conducted pattern analyses on the same, scaled .TIF files for each species in ImageJ 44 . First, we split each image into R, G, B slices and then used the G layer for pattern analysis because this channel corresponds most closely to known avian luminance channels 48 , 49 , which is thought to be primarily responsible for processing of pattern information 19 , 20 . We then used the Image Calibration and Analysis Toolbox 50 in ImageJ to conduct granularity-based pattern analysis. In this process, widely used to study animal patterning 51 , 52 , 53 , 54 , images are Fast Fourier band pass filtered into a number of granularity bands that correspond to different spatial frequencies. For each filtered image, the “energy” at that scale is quantified as the standard deviation of filtered pixel values and corresponds to the contribution to overall appearance from pattern elements of that size. Pattern energy spectra were calculated for each species in a comparison across 17 bandwidths (from 2 pixels to 512 pixels, by multiples of √2), which we used for both pairwise pattern comparisons (pattern maps can be created to visualize differences; Fig. 3a ), and to categorize overall plumage pattern with PCA (Fig. 3c, d ). Pattern difference values were calculated by summing absolute differences between energy spectra at each bandwidth 50 ; after principal components analyses, the first three PCs explained ~93% of the variance in overall pattern energy (Fig. 3c ; Supplementary Table 5 ) across species (Fig. 3d ). 
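The granularity analysis described above can be sketched in a few lines. This is a hedged Python/NumPy illustration of the general FFT band-pass approach, not the Image Calibration and Analysis Toolbox 50 itself; the exact band edges and radial masking are assumptions made for the sketch.

```python
import numpy as np

SCALES = 2.0 * np.sqrt(2) ** np.arange(17)   # 2 px ... 512 px, by multiples of sqrt(2)

def pattern_energy_spectrum(img):
    """Granularity sketch: band-pass filter the image in Fourier space
    at each spatial scale; 'energy' at a scale is the standard
    deviation of the filtered pixel values."""
    f = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.indices((h, w))
    r = np.hypot(yy - h / 2, xx - w / 2)     # radial frequency (cycles/image)
    energies = []
    for s in SCALES:
        # pattern elements ~s px wide correspond to frequencies ~N/s
        f_hi = max(h, w) / s
        f_lo = f_hi / np.sqrt(2)
        mask = (r >= f_lo) & (r < f_hi)
        filtered = np.fft.ifft2(np.fft.ifftshift(f * mask)).real
        energies.append(filtered.std())
    return np.array(energies)

def pattern_difference(e1, e2):
    """Pairwise pattern distance: summed absolute differences between
    two energy spectra, as in the text."""
    return np.abs(e1 - e2).sum()

# Hypothetical 'plumages': fine vs coarse vertical barring
xx = np.indices((64, 64))[1]
fine = (xx // 2) % 2
coarse = (xx // 16) % 2
```

Finely barred and coarsely barred images concentrate their energy in different bands, so their spectra, and hence the summed absolute difference, separate them.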
Pattern PC1 has large positive loadings for most element sizes/granularities, indicating that species with high PC1 scores have numerous pattern elements of different sizes (whereas species with low PC1 are relatively homogeneous and with little overall patterning; Fig. 3d ). Pattern PC2 has large positive loadings for small pattern elements, and negative loadings for intermediate and large pattern sizes such that species with high PC2 scores have lots of small pattern components, and species with low PC2 scores have more intermediate and large pattern size contributions (Fig. 3d ). As a check on our overall results, and to provide added insights into the factors driving plumage evolution in different regions of the body, we manually traced three regions of the body in the HBW illustrations and separated these into sets of images corresponding to: the back, including the wing and tail; the head, including the neck; and the breast and belly. We sent these separated illustrations through the same colour and pattern analytical pipeline described above. The ensuing colour and pattern spaces, and their associated loadings, are presented in Supplementary Figs. 10 – 18 . The extent to which these different regions of the body function as independent plumage modules is questionable—plumage evolution in one region of the body is likely correlated with that in others. Hence, while we consider these analyses to offer some scientific insight into plumage evolution, we emphasize that the whole-body plumage analyses represent our preferred set of results. Future work would do well to study correlated plumage evolution across different regions of the body 55 , and some work is now being undertaken in that research area 56 .
Photographic quantification of plumage colour and pattern

To validate the use of colour plates for quantifying meaningful interspecific variation in plumage colour and pattern among woodpeckers, we employed digital photographic and visual ecology methods to quantify the appearance of museum specimens and compared these results to those obtained using the whole-body colour plates. Specifically, we used ultraviolet and visible spectrum images to create standardized multispectral image stacks and then converted these multispectral image stacks into woodpecker visual space. Photos were taken with a Canon 7D camera with full-spectrum quartz conversion fitted with a Novoflex Noflexar 35 mm lens, and two Baader (Mammendorf, Germany) lens filters (one transmitting only UV light, one transmitting only visible light). We took profile-view photograph pairs (one visible, one UV) under full-spectrum light (eyeColor arc lamps, Iwasaki: Tokyo, Japan, with UV-coating removed), then converted these image stacks into woodpecker visual space using data from Dendrocopos major 57 and average visual sensitivities for other violet-sensitive bird species 58 . The inferred peak-sensitivity (λmax) for the short-wavelength sensitive 1 (SWS1) cone of Great Spotted Woodpeckers, based on opsin sequence, is 405 nm 57 . After generating images corresponding to the quantum catch values (i.e., stimulation of the different photoreceptor types), we performed granularity-based pattern analyses with the Image Calibration and Analysis Toolbox 50 in ImageJ 44 using the image corresponding to the stimulation of the avian double-cone, responsible for luminance detection 48 , 49 , because this photoreceptor type is assumed to be involved in processing pattern information from visual scenes 19 , 20 .
Additionally, because relative stimulation values do not generate perceptually-uniform colour spaces 59 , 60 , we implemented visual models 61 to generate Cartesian coordinates for the colour values from each of 1000 randomly selected, 9-pixel diameter circles for each specimen and viewpoint (as we did with colour plates). Cartesian coordinates in this perceptually-uniform woodpecker colour space were then used to calculate pairwise Mahalanobis distances 47 for each dyadic combination of measured specimens. As with our colour plate-based analysis, we Z-score transformed colour and pattern distances (mean = 0, SD = 1), then combined these distances to create a composite plumage dissimilarity matrix incorporating overall plumage colour and pattern. Based on specimens available at the Cornell University Museum of Vertebrates, we endeavoured to measure up to three male specimens from at least one species of every woodpecker genus. We were able to measure 56 individuals from 23 woodpecker species (Supplementary Table 9 ). To compare the museum-based results to those from the colour plates, we derived species-level pairwise distances. We did so by finding the mean plumage distance between all specimens of one species and all of those of another, and repeating for all possible species pair comparisons. We repeated this process for both the colour only dissimilarity matrix, and the combined colour and pattern matrix. We subset the larger, plate-based colour-only and colour-plus-pattern matrices to the corresponding species, and compared the relevant matrices with Mantel tests. Our results from the museum specimens substantiated those from the illustrations—we found close correlations between colour dissimilarity (measured from specimens vs. measured from illustrations; Mantel test, r = 0.74, p < 0.001) and overall plumage dissimilarity (measured from specimens vs. measured from illustrations, Mantel test, r = 0.72, p < 0.001).
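The Z-score standardization and Mantel comparison used above can be illustrated as follows. This is a minimal Python sketch, assuming a simple one-tailed permutation Mantel test; the toy colour and pattern matrices (both derived from one hypothetical underlying trait) stand in for the real specimen data.

```python
import numpy as np

def zscore_offdiag(d):
    """Standardize a distance matrix by the mean and SD of its
    off-diagonal (dyadic) elements, as in the text's Z-score step
    before summing colour and pattern distances."""
    iu = np.triu_indices_from(d, k=1)
    out = (d - d[iu].mean()) / d[iu].std()
    np.fill_diagonal(out, 0.0)
    return out

def mantel(d1, d2, n_perm=999, seed=1):
    """Simple one-tailed Mantel test: Pearson correlation of the upper
    triangles; significance from jointly permuting the rows/columns
    of one matrix."""
    iu = np.triu_indices_from(d1, k=1)
    r_obs = np.corrcoef(d1[iu], d2[iu])[0, 1]
    rng = np.random.default_rng(seed)
    n, hits = d1.shape[0], 0
    for _ in range(n_perm):
        p = rng.permutation(n)
        if np.corrcoef(d1[iu], d2[p][:, p][iu])[0, 1] >= r_obs:
            hits += 1
    return r_obs, (hits + 1) / (n_perm + 1)

# Toy data: colour and pattern distances among 20 hypothetical taxa,
# both reflecting (with noise) the same underlying trait
rng = np.random.default_rng(7)
trait = rng.uniform(0, 10, 20)
colour_d = np.abs(trait[:, None] - trait[None, :])
noise = rng.uniform(0, 1, (20, 20))
pattern_d = colour_d + (noise + noise.T) / 2
np.fill_diagonal(pattern_d, 0.0)
plumage_d = zscore_offdiag(colour_d) + zscore_offdiag(pattern_d)
```

`plumage_d` corresponds to the composite matrix described in the text: each component matrix is standardized so neither colour nor pattern dominates the sum.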
eBird data management, curation, and analysis

On 24 November 2017 we queried the eBird database for all records of each of the 230 species in our final woodpecker phylogeny. We excluded records for which we had low confidence in the associated locality information. Specifically, we excluded: (1) historical records, which are prone to imprecise locality information and are not associated with effort information, (2) records from (0°, 0°), (3) records that were considered invalid after review by a human (thus, flagged but unreviewed records were included), and (4) records that came from transects longer than 5 km. Because eBird has grown exponentially in recent years, we connected directly to the database to ensure maximal data coverage for infrequently reported species. We made this analytical decision because the automatic filters that flag unusual observations can be imprecise in regions of the globe infrequently visited by eBirders; flagged observations remain unconfirmed (and not included in products such as the eBird basic dataset) until they are reviewed, and backlogs of unreviewed observations exist in some infrequently birded regions. This approach allowed us to increase our sample size for infrequently observed species. In contrast, other species are very well represented in the database. To reduce downstream computational loads, we used the R package ebirdr ( ) to downsample overrepresented species in a spatially stratified manner. Specifically, for each of the 230 species, we laid a grid of 100 × 100 cells over the species' extent, and randomly sampled and retained 60 points per cell. For most species, this had little to no effect, and fewer than 10% of points were thinned and removed from analysis; for a small number of well-sampled North American species, this excluded over 90% of points from analysis (Supplementary Data 1 ). In sum, this process reduced the original dataset from 13,513,441 to 1,037,628 records.
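The spatially stratified downsampling step can be sketched as below. This is an illustrative Python reconstruction, not the ebirdr implementation; the gridding details (equal-interval longitude/latitude bins over the species' extent) are assumptions.

```python
import numpy as np

def thin_records(lons, lats, n_cells=100, max_per_cell=60, seed=0):
    """Spatially stratified thinning sketch: overlay an
    n_cells x n_cells grid on the species' extent and keep at most
    `max_per_cell` randomly sampled records per cell.
    Returns the indices of the retained records."""
    rng = np.random.default_rng(seed)
    x_edges = np.linspace(lons.min(), lons.max(), n_cells + 1)[1:-1]
    y_edges = np.linspace(lats.min(), lats.max(), n_cells + 1)[1:-1]
    cell = np.digitize(lons, x_edges) * n_cells + np.digitize(lats, y_edges)
    keep = []
    for c in np.unique(cell):
        idx = np.flatnonzero(cell == c)
        if idx.size > max_per_cell:
            idx = rng.choice(idx, max_per_cell, replace=False)
        keep.extend(idx.tolist())
    return np.sort(np.array(keep))

# A densely sampled hypothetical species: 100,000 records over a
# rectangle of eastern North America, thinned on a 10 x 10 grid
rng = np.random.default_rng(1)
lons = rng.uniform(-100, -60, 100_000)
lats = rng.uniform(25, 50, 100_000)
kept = thin_records(lons, lats, n_cells=10, max_per_cell=60)
```

For sparsely reported species every record survives; for dense species, as in this toy example, each occupied cell is capped, which mirrors the >90% reductions reported for well-sampled North American species.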
We used the R package hypervolume 62 to create pseudo-range maps around each species’ point locations. Hypervolumes account for the density in the underlying points and can have holes in them, and are therefore much better suited to describing species’ ranges than are, e.g., minimum convex polygons 62 . For every dyadic comparison (i.e., for every species pair comparison), we used hypervolume to calculate the Sørenson similarity index between the species’ inferred geographic ranges. We summarized these similarities in a pairwise matrix, which we subsequently converted to a dissimilarity matrix such that a value of 1 represented complete allopatry (no overlap in geographic distributions), and a value of 0 represented perfect sympatry (complete overlap in geographic distributions). We used the raster package to match each species’ point locations to climatic values using WorldClim bioclimatic data 63 . These data describe the annual and seasonal climatic conditions around the globe. After querying species’ climatic data, we bound the resulting files together and ran a single large correlation matrix PCA across all climate variables except bio7, which is simply the difference between bio5 and bio6. We retained species’ scores along the different PC axes and used scores along the first two PC axes to calculate species-level hypervolumes in climate space. These first two axes explained 85% of the variance in the climates occupied by woodpeckers. The first axis described a gradient from places that are generally warm throughout the year, to areas that show seasonal variation in temperature and large diurnal shifts in temperature. The second axis described a gradient from areas that receive precipitation in seasonal pulses, have some hot months and have large swings in temperature over the course of a day, to areas that always receive lots of rain. 
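The Sørenson-based conversion from range overlap to dissimilarity described above can be illustrated with a toy example. For simplicity this sketch uses boolean occupancy grids rather than the hypervolumes actually used in the study; the grids are hypothetical.

```python
import numpy as np

def sorenson_dissimilarity(occ_a, occ_b):
    """Range dissimilarity sketch: 1 minus the Sørenson similarity
    2|A ∩ B| / (|A| + |B|) of two boolean occupancy grids.
    1 = complete allopatry, 0 = perfect sympatry, matching the
    orientation of the matrix used in the text."""
    inter = np.logical_and(occ_a, occ_b).sum()
    return 1.0 - 2.0 * inter / (occ_a.sum() + occ_b.sum())

# Three hypothetical ranges on a 10 x 10 grid
a = np.zeros((10, 10), bool); a[:5, :] = True    # northern half
b = np.zeros((10, 10), bool); b[5:, :] = True    # southern half
c = a.copy()                                     # identical to a
```

The same index applies unchanged to the climate and habitat hypervolumes: overlap in the relevant two-dimensional PC space replaces overlap in geographic space.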
Again, for each dyadic comparison, we calculated a Sørenson similarity index, and then converted the resulting values to a dissimilarity matrix.

Querying habitat data

We used ebirdr , which harnesses GDAL ( ), to bind species' point locations into ~50 MB-sized tables, then converted the resulting tables into KML (Keyhole Markup Language) files, which we uploaded and converted into Google Fusion Tables ( ). This particular file size was chosen after we employed a trial-and-error process to determine the most efficient query size for Google Earth Engine (see below). Once accessible as a Fusion Table, we fed the tables into custom Google Earth Engine scripts. For every eBird observation, these scripts identified the MODIS satellite reflectance values 64 from the observation location within a 16-day window of the observation. We queried data specifically from the MODIS MCD43A4 Version 6 Nadir Bidirectional reflectance distribution function Adjusted Reflectance (NBAR) data set, a daily 16-day product which “provides the 500 m reflectance data of the MODIS ‘land’ bands 1–7 adjusted using the bidirectional reflectance distribution function to model the values as if they were collected from a nadir view” ( ). At the time of query, this dataset was available for the time period 18 February 2000 to 14 March 2017, which corresponded to the time period in which most of our eBird records were recorded. The year of all other records was adjusted up or down to fall within the available satellite data, e.g., observations from 10 November 2017 became 10 November 2016. This method is appealing in that it incorporates species' spatiotemporal variation in habitat availability and use, although for most woodpecker species such variation is minimal. After querying species' habitat data, we downloaded and combined the resulting files from Google Earth Engine, dropping any records that were matched to incomplete MODIS data.
ebirdr contains functions to automatically combine and process these files from Google Earth Engine (although the functions currently employ Google Fusion Tables, which will be discontinued in December 2019). We then ran a single large correlation matrix PCA across all 7 MODIS bands. Before doing so, we natural log-transformed bands 1, 3, and 4, as a few extreme values along these bands hampered our initial efforts to ordinate this dataset. We retained the first two PC axes, which explained 81% of the variance in the habitats occupied by woodpeckers. The first described a gradient from closed forests to open, reflective habitats. The second described a gradient from regions with high visible and low infrared reflectance to those with low visible and high infrared reflectance. This dichotomy is used to identify snow in MODIS snow products ( ). Thus, at the species-average level, the second habitat PC axis functionally described a gradient between seasonally snow-covered (temperate) forests and tropical woodland. Again, for each dyadic comparison, we calculated a Sørenson similarity index, and then converted the resulting matrix to a dissimilarity matrix.

Multiple distance matrix regression

After the steps described above, we had data from four variables hypothesized to explain plumage variation across woodpeckers in the form of four pairwise distance matrices: genetic distances, climate dissimilarity, habitat dissimilarity, and geographic range dissimilarity. We combined the plumage colour and the plumage pattern dissimilarity matrices into a single matrix by independently standardizing each using z-scores, then calculating the element-wise sum of each dyadic comparison. We then related the four explanatory matrices to the single dependent plumage dissimilarity matrix using multiple distance matrix regression, with 999 permutations 65 . To account for phylogenetic uncertainty, we iterated this process over each of the 1000 complete phylogenies.
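A minimal sketch of multiple distance matrix regression follows: the upper triangle of the response matrix is regressed on the upper triangles of the predictors, with row/column permutations of the response used for significance. This is an illustrative Python version, not the R implementation used in the study, and the toy matrices are hypothetical.

```python
import numpy as np

def mrm(y, xs, n_perm=999, seed=3):
    """Multiple distance matrix regression sketch: OLS of the upper
    triangle of `y` on those of the predictor matrices in `xs`;
    overall significance from permuting the rows/columns of `y`
    and recomputing R^2."""
    iu = np.triu_indices_from(y, k=1)
    X = np.column_stack([np.ones(iu[0].size)] + [x[iu] for x in xs])
    def fit(yv):
        beta, *_ = np.linalg.lstsq(X, yv, rcond=None)
        return 1.0 - np.var(yv - X @ beta) / np.var(yv), beta
    r2_obs, beta = fit(y[iu])
    rng = np.random.default_rng(seed)
    n, hits = y.shape[0], 0
    for _ in range(n_perm):
        p = rng.permutation(n)
        if fit(y[p][:, p][iu])[0] >= r2_obs:
            hits += 1
    return r2_obs, beta, (hits + 1) / (n_perm + 1)

# Toy data: 'plumage' dissimilarity driven by climate and habitat
rng = np.random.default_rng(8)
n = 25
clim = np.abs(rng.normal(size=n)[:, None] - rng.normal(size=n)[None, :])
hab = np.abs(rng.normal(size=n)[:, None] - rng.normal(size=n)[None, :])
noise = rng.normal(0, 0.3, (n, n))
plum = clim + 0.5 * hab + (noise + noise.T) / 2
np.fill_diagonal(plum, 0.0)
r2, beta, pval = mrm(plum, [clim, hab])
```

Permuting rows and columns of the response jointly preserves its internal structure while breaking its alignment with the predictors, which is why the permutation distribution provides a valid null for matrix data.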
The resulting model was highly significant (multiple distance matrix regression, median p across all complete phylogenies = 0.030), but fairly low in explanatory power (median r = 0.170), reflecting the massive variation incorporated into these five 230 × 230 matrices. It bears emphasizing that this correlation coefficient represents not, e.g., the degree to which two variables are correlated, but rather the degree to which the dissimilarity between various clouds of points can explain the dissimilarity in other clouds of points; low explanatory power is to be expected. Three of the four dissimilarity matrices were significantly and positively associated with plumage dissimilarity: increasing genetic distance (multiple distance matrix regression, median p = 0.020), climate dissimilarity (median p = 0.007), and habitat dissimilarity (median p = 0.006) all lead to increasing plumage dissimilarity. The distributions of correlation coefficients across the cloud of credible trees for these explanatory variables are shown in Supplementary Fig. 1 . In this analysis, geographic range dissimilarity was not significantly associated with plumage dissimilarity.

Modified Mantel correlogram

The likelihood that sympatric species evolve plumage mimicry is not thought to be monotonically related to range overlap. Instead, hypotheses to explain plumage mimicry propose that only certain species pairs that are both closely sympatric and ecologically similar will converge dramatically in appearance 14 . Thus, we did not expect a continuous relationship between geographic range dissimilarity and plumage dissimilarity; rather, we expect a threshold relationship wherein plumage convergence occurs in dyads with high geographic overlap. We therefore implemented a modified Mantel correlogram approach to test whether such a threshold existed 66 .
For this, we manually created a series of matrices where we converted all elements in the plumage dissimilarity matrix to values of 1, except for dyads with plumage dissimilarity scores in a certain range. Specifically, the dissimilarity scores within a given range for a given analysis (ranges tested: 0–0.1, 0.1–0.2, 0.2–0.3, 0.3–0.4, 0.4–0.5, 0.5–0.6, 0.6–0.7, 0.7–0.8, 0.8–0.9, and 0.9–1) were set to a value of 0, and all other dissimilarity scores were set to a value of 1. We then sequentially input these matrices as the dependent variable into the same multiple distance matrix regression described above, repeatedly calculating the significance and partial correlation coefficient of the geographic range dissimilarity matrix with that of plumage dissimilarity. This approach allowed us to examine how the correlation between plumage and range dissimilarities varied across a range of plumage dissimilarities, while simultaneously incorporating the influences of evolutionary relationships, and climate and habitat dissimilarities. We found that geographic range dissimilarity was significantly associated with plumage dissimilarity for the most similar looking species pairs (Fig. 4 ). Dyadic comparisons with plumage dissimilarities of 0–0.2 include such pairs as Dryobates pubescens and Dryobates villosus (purported plumage mimics), Gecinulus grantia and Blythipicus pyrrhotis (which are quite similar looking), and Picus awokera and Melanerpes striatus (not closely similar, but do share colour and pattern elements). Put differently, geographic range overlap per se is statistically significantly associated with increasing plumage phenotype matching between already similar looking species pairs. Notably, the relationship was reversed at intermediate levels of plumage dissimilarity; geographic range overlap is statistically significantly associated with decreasing plumage phenotype matching between somewhat similar looking species pairs (plumage dissimilarities of 20–30%). 
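The matrix-thresholding step that underlies this correlogram can be sketched as follows; this is an illustrative Python version on a hypothetical plumage dissimilarity matrix, not the code used in the study.

```python
import numpy as np

def threshold_dyads(plumage_d, lo, hi):
    """Modified Mantel correlogram sketch: dyads whose plumage
    dissimilarity falls in [lo, hi) are coded 0 ('similar within
    this band'); all other dyads are coded 1, as in the text."""
    out = np.where((plumage_d >= lo) & (plumage_d < hi), 0.0, 1.0)
    np.fill_diagonal(out, 0.0)
    return out

# Toy plumage dissimilarity matrix among 8 hypothetical species,
# and one thresholded matrix per 10%-wide band, as in the text
rng = np.random.default_rng(5)
d = rng.uniform(size=(8, 8))
d = (d + d.T) / 2
np.fill_diagonal(d, 0.0)
bands = [threshold_dyads(d, lo, lo + 0.1) for lo in np.arange(0.0, 1.0, 0.1)]
```

Each thresholded matrix is then fed in turn into the multiple distance matrix regression as the dependent variable, isolating how range overlap relates to plumage matching within that band of similarity.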
Dyadic comparisons with dissimilarities in this range include Campethera abingoni vs. C. maculosa , and Veniliornis spilogaster vs. Celeus obrieni . Although it is true that this signal could be interpreted as evidence that allopatry in and of itself drives plumage divergence between somewhat similar looking species pairs, this seems biologically implausible. A more likely reason for this signal is substantial plumage differentiation between pairs of birds at intermediate levels of sympatry 67 . The fact that some degree of sympatry is associated with rapid plumage divergence is expected by theory 68 , and is likely due to strong selection to avoid unsuccessful hybridization (i.e., reinforcement), or to avoid accidentally targeting heterospecifics for aggression 69 . Whether the relaxation of plumage divergence in closer sympatry could be attributed to shared habitats or climates, or to some other selective pressure, was heretofore unknown 67 . We show that in woodpeckers, after accounting for other likely selective forces, either one or both of the species in pairs that have attained close sympatry may evolve towards the phenotype of the species with which they co-occur. To further assess the strength of this striking result, we devised a simulation to determine whether such a pattern might result from chance alone. To do so, we repeatedly derived five matrices with the same intercorrelations among them as our observed dependent (plumage dissimilarity) and four independent matrices (genetic, habitat, climate, and range dissimilarity). When input into the multiple distance matrix regression described above, the resulting matrix-specific coefficients and overall power of the simulated independent matrices to explain variation in the simulated woodpecker plumage dissimilarity matrix was identical to that in the observed matrices.
By using these same matrices in the modified Mantel correlogram approach described above, we were able to test whether the pattern observed in Fig. 4 (red line) could result by chance alone. After 200 iterations of the simulation, we calculated the standardized effect size of the correlation coefficient of each thresholded plumage dissimilarity matrix with range dissimilarity as the difference between the observed value and the mean of the simulations, divided by the standard deviation of the simulated correlation coefficients. Standardized effect sizes outside ±1.96 reflect observed correlation coefficients that deviated beyond 95% of simulated values. These simulations strongly support our finding that close sympatry—above and beyond evolutionary relatedness, shared climate, and shared habitat preferences—drives otherwise unexpectedly high levels of plumage convergence in woodpeckers. In short, close sympatry appears to be associated with occasional plumage mimicry in woodpeckers. We recognize that Mantel tests, and presumably by extension variations such as that described here, can suffer from inflated type I error rates 70 , 71 . Future work should seek to further establish the relevance of sympatry to driving plumage mimicry in birds with alternative approaches.

Identification of putative plumage mimics

We developed a method to identify high-leverage dyadic comparisons in Mantel tests and multiple distance matrix regressions. We used this to identify species pairs that have converged above and beyond that expected by shared climates and habitats. The process works as follows. In the first step, the observed correlation statistic is calculated. In our case, that was the correlation coefficient of a thresholded plumage dissimilarity matrix (values from 0–0.2 set to 0, all others set to 1) with the geographic range dissimilarity matrix.
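The standardized effect size used in the simulation above is simple to state in code; the sketch below is illustrative, and the null correlations in the example are hypothetical.

```python
import numpy as np

def standardized_effect_size(observed, simulated):
    """SES as described in the text: (observed statistic minus the
    mean of the simulated statistics) divided by the SD of the
    simulated statistics. |SES| > 1.96 marks an observed value
    beyond roughly 95% of the simulations under the null."""
    simulated = np.asarray(simulated, dtype=float)
    return (observed - simulated.mean()) / simulated.std()

# e.g. an observed correlation of 0.40 against 200 null simulations
null_r = np.random.default_rng(9).normal(0.10, 0.05, 200)
ses = standardized_effect_size(0.40, null_r)
```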
The statistic can also be the correlation coefficient from a regular or partial Mantel test; we confirmed that the method yielded similar results when we employed it with a partial Mantel test between the continuous plumage dissimilarity matrix, geographic range dissimilarity, and genetic distance. In the second step, each element (dyad) in the relevant matrix is modified in turn, and the relevant correlation statistic calculated and retained after each modification. We tested three methods of modifying dyads, i.e., three different approaches to this second step. All yielded similar results. (A) The value can be randomly sampled from the off-diagonal elements in the matrix. (B) The value can be set to NA and the correlation statistic calculated using all complete observations. (C) For the thresholded matrix, the test element can be swapped for the other value; zeros become ones, and ones become zeros. In the third step, only necessary for approach A, the process is iterated multiple times, and the modified correlation coefficient for every element, at each iteration, is stored as a list of matrices. In the fourth step, again only necessary for approach A, these matrices are summarized by taking the element-wise average. In the fifth step, the leverage of each dyad is calculated by subtracting the observed correlation statistic from each element in the averaged matrix. Finally, the matrix can be decomposed into a pairwise table and sorted by the leverage of each dyad. In our case, dyads that have high leverage, and are large contributors to the positive correlation between close plumage similarity and geographic range overlap, have the largest negative values (i.e., modifying their observed plumage dissimilarity score most diminished the observed positive correlation between range and plumage). We used this method to identify the most notable plumage mimics across woodpeckers, after accounting for shared evolutionary history, climate, and habitat use. 
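Approach C, the simplest of the three dyad-modification schemes, can be sketched as follows. This is an illustrative Python reconstruction on a hypothetical six-species example, not the authors' code; the geographic values and the two "similar" dyads are invented for the demonstration.

```python
import numpy as np

def dyad_leverage_swap(plum01, geo):
    """Leverage of each dyad on the plumage-range correlation via
    'approach C' in the text: flip the dyad's thresholded plumage
    value (0 <-> 1), recompute the correlation of the upper
    triangles with geographic range dissimilarity, and record the
    change from the observed correlation. Large negative values mark
    dyads (candidate mimics) that most strengthen the observed
    positive correlation."""
    iu = np.triu_indices_from(plum01, k=1)
    def corr(m):
        return np.corrcoef(m[iu], geo[iu])[0, 1]
    r_obs = corr(plum01)
    lev = np.zeros_like(plum01, dtype=float)
    for i, j in zip(*iu):
        m = plum01.copy()
        m[i, j] = m[j, i] = 1.0 - m[i, j]    # swap 0 <-> 1
        lev[i, j] = lev[j, i] = corr(m) - r_obs
    return lev

# Toy example: 6 species; dyad (0, 1) is closely sympatric (low geo
# dissimilarity) and thresholded as very similar in plumage (0)
n = 6
iu = np.triu_indices(n, k=1)
geo = np.zeros((n, n))
geo[iu] = np.linspace(0.6, 1.0, iu[0].size)  # mostly allopatric dyads
geo[0, 1] = 0.05                             # near-perfect sympatry
geo[2, 3] = 0.50                             # intermediate overlap
geo = geo + geo.T
plum01 = np.ones((n, n)); np.fill_diagonal(plum01, 0.0)
plum01[0, 1] = plum01[1, 0] = 0.0            # the putative mimic pair
plum01[2, 3] = plum01[3, 2] = 0.0
lev = dyad_leverage_swap(plum01, geo)
```

In this toy setup the sympatric, similar-looking pair (0, 1) carries the most negative leverage: removing its contribution most diminishes the positive plumage-range correlation, which is exactly the signature used to flag putative mimics.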
Many purported mimicry complexes were responsible, including the Downy-Hairy system 22 , and repeated convergences between members of Veniliornis and Piculus 23 , Dinopium and Chrysocolaptes , and Dryocopus and Campephilus 24 . Convergence between the Helmeted Woodpecker ( Dryocopus = Celeus galeatus ) and Campephilus robustus was also detected 15 , as was convergence between members of Thripias and Campethera , Meiglyptes and Blythipicus , and Hemicircus and Meiglyptes .

Phylogenetic least squares regression

We derived species' average scores along the first two axes of a plumage colour PCA (Fig. 2 ), a plumage pattern PCA (Fig. 3 ), the climate PCA described above, the habitat PCA described above, and species' average latitude (absolute value) and longitude of distribution. Additionally, we mined body mass data from Dunning 72 . For those species for which mass was listed separately for males and females, we calculated sexual size dimorphism sensu Miles et al. 73 . These authors additionally reported dimorphism measures from a number of species not available in Dunning 72 . We then combined these datasets, resulting in sexual size dimorphism measures for 94 of 230 species. Sexual size dimorphism in woodpeckers is generally small compared to other avian groups such as the Icteridae, and they have not traditionally been considered a clade characterized by strong sexual selection pressures. During the process of combining datasets, we noticed that one of the most well-known of sexually size-dimorphic species, Melanerpes striatus , was characterized in both databases as having larger females than males. This is incorrect—males are notably larger than females—and we replaced the values with the midpoint of ranges given in ref. 17 . We used Rphylopars 74 to impute missing body mass and size dimorphism data, which we did using a Brownian motion model and the observed variance-covariance matrix between all traits except for plumage colour and pattern.
Treating climate, habitat, latitude, longitude, natural log body mass, and sexual size dimorphism as explanatory variables, we used multi-model inference to identify PGLS regression models that explained each of the four PCA plumage axes of interest. We also visualized pairwise correlations and distributions of these traits using corrplotter 75 (Supplementary Fig. 2 ). We used a model averaging approach to determine which explanatory variables strongly influenced plumage (Fig. 5 ). To test the robustness of our conclusions to phylogenetic uncertainty, for each dependent variable (colorPC1, colorPC2, patternPC1, and patternPC2), we identified all explanatory variables with model-averaged coefficients that did not overlap zero. We then fit a series of 1000 PGLS regressions per dependent variable to the identified variables where, for each regression, we used a different one of the complete phylogenies. Variation in the coefficient estimations was small, as shown in Supplementary Figs. 3 – 6 . In the main text, when reporting pseudo- r 2 and p values for the PGLS regressions, we report the median values from these 1000 models.

Reporting summary

Further information on experimental design is available in the Nature Research Reporting Summary linked to this article.

Data availability

All data supporting the findings of this study are available within the paper and its supplementary information files.

Code availability

All computer code necessary to run these analyses is available in the purpose-built R package ebirdr , available at .
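The model-averaging step can be illustrated with a small sketch. This uses ordinary least squares as a stand-in for PGLS (which would additionally model phylogenetic covariance among species), Gaussian AIC, and hypothetical predictor names; it is not the authors' R workflow:

```python
import itertools
import numpy as np

def fit_aic(y, X):
    """OLS fit plus Gaussian AIC; a simplified stand-in for the paper's
    PGLS fits, which also account for phylogenetic covariance."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    n, k = X.shape
    sigma2 = resid @ resid / n
    loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
    return 2 * (k + 1) - 2 * loglik, beta

def model_average(y, predictors, names):
    """Fit every subset of predictors, convert AIC differences to Akaike
    weights, and return weight-averaged coefficients (terms absent from a
    model contribute zero, i.e. full-model averaging)."""
    fits = []
    for r in range(len(names) + 1):
        for subset in itertools.combinations(range(len(names)), r):
            X = np.column_stack([np.ones(len(y))] + [predictors[:, j] for j in subset])
            a, beta = fit_aic(y, X)
            fits.append((a, dict(zip([names[j] for j in subset], beta[1:]))))
    aics = np.array([a for a, _ in fits])
    w = np.exp(-0.5 * (aics - aics.min()))
    w /= w.sum()
    return {nm: float(sum(wi * c.get(nm, 0.0) for wi, (_, c) in zip(w, fits)))
            for nm in names}

# Hypothetical data: only 'climate' truly influences the plumage axis.
rng = np.random.default_rng(1)
X = rng.normal(size=(60, 3))
y = 2.0 * X[:, 0] + rng.normal(scale=0.5, size=60)
avg = model_average(y, X, ["climate", "habitat", "latitude"])
print(avg)  # 'climate' coefficient close to the true value; others near zero
```

In the paper this averaging was repeated per dependent variable, and robustness was then checked by refitting across 1000 alternative phylogenies.
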
In the first global test of the idea, scientists have found evidence that some woodpeckers can evolve to look like another species of woodpecker in the same neighborhood. The researchers say that this "plumage mimicry" isn't a fluke—it happens among pairs of distantly related woodpeckers all over the world. The study, published in the journal Nature Communications, was conducted by researchers at the Cornell Lab of Ornithology, SUNY Buffalo State, the University of British Columbia, and Manchester University. "Habitat, climate, and genetics play a huge role in the way feather color and pattern develop," explains lead author Eliot Miller at the Cornell Lab. "Species in similar environments can look similar to one another. But in some cases, there's another factor influencing the remarkable resemblance between two woodpecker species and that's mimicry. It's the same phenomenon found in some butterflies which have evolved markings that make them look like a different bad-tasting or toxic species in order to ward off predators." Study authors combined data on feather color, DNA sequences, eBird reports, and NASA satellite measures of vegetation for all 230 of the world's woodpecker species. It became clear, Miller says, that there have been repeated cases of distantly-related woodpeckers coming to closely resemble each other when they live in the same region of the globe. "In North America, the classic lookalike pairing is Downy Woodpecker and the larger Hairy Woodpecker," Miller says. "Our study suggests that these two species have evolved to look nearly identical above and beyond what would be expected based on their environment. Yet, these two species evolved millions of years apart." Other North American lookalikes are Black-backed and Three-toed Woodpeckers. In Europe, Greater and Lesser Spotted Woodpeckers bear a striking resemblance, as do the Lineated, Robust, and Helmeted Woodpeckers of South America.
Though not part of the study, Miller's take on the reason for woodpecker doppelgangers is that downies that look like the larger, more aggressive Hairy Woodpeckers might make other birds, such as nuthatches and titmice, think twice about competing with the downy for food. Some evidence supporting this idea has been found in observational studies, but field experiments would be needed to more conclusively test this hypothesis. The data turned up some other interesting connections between woodpecker appearance and habitat. Many of the woodpeckers the scientists looked at in tropical regions have darker feathers. This adds to a growing body of evidence in support of "Gloger's Rule," which states that organisms tend to be darker colored in more humid areas. They also found that:

- red-headed woodpecker species tend to live in forested habitats
- black, white, and gray colored species tend to live in open habitats
- woodpeckers with red on their bellies are most often found in forests
- woodpeckers with large patches of color on their bellies were most often found in open habitats

Additional studies would be needed to try to ferret out why some plumage patterns seem to be linked to habitat types. "It's really fascinating," says Miller. "And it's pretty likely this is happening in other bird families, too. I first got interested in this question a decade ago from looking through bird books. I wondered how the heck some distantly related species could look so much alike—what are the odds that it could happen just by chance?"
10.1038/s41467-019-09721-w
Medicine
New light shone on inflammatory cell death regulator
Nature Communications (2020). DOI: 10.1038/s41467-020-16819-z Journal information: Nature Communications
http://dx.doi.org/10.1038/s41467-020-16819-z
https://medicalxpress.com/news/2020-06-shone-inflammatory-cell-death.html
Abstract

MLKL is the essential effector of necroptosis, a form of programmed lytic cell death. We have isolated a mouse strain with a single missense mutation, Mlkl D139V , that alters the two-helix ‘brace’ that connects the killer four-helix bundle and regulatory pseudokinase domains. This confers constitutive, RIPK3 independent killing activity to MLKL. Homozygous mutant mice develop lethal postnatal inflammation of the salivary glands and mediastinum. The normal embryonic development of Mlkl D139V homozygotes until birth, and the absence of any overt phenotype in heterozygotes provides important in vivo precedent for the capacity of cells to clear activated MLKL. These observations offer an important insight into the potential disease-modulating roles of three common human MLKL polymorphisms that encode amino acid substitutions within or adjacent to the brace region. Compound heterozygosity of these variants is found at up to 12-fold the expected frequency in patients who suffer from a pediatric autoinflammatory disease, chronic recurrent multifocal osteomyelitis (CRMO).

Introduction

Necroptosis is a lytic form of programmed cell death associated with the production of pro-inflammatory cytokines, the destruction of biological membranes and the release of intracellular damage associated molecular patterns (DAMPs) 1 . Necroptosis depends on the activation of the mixed lineage kinase domain-like (MLKL) pseudokinase by receptor interacting protein kinase 3 (RIPK3) 2 , 3 , 4 . RIPK3-mediated phosphorylation of MLKL triggers a conformational change 4 , 5 that facilitates the translocation to, and eventual irreversible disruption of, cellular membranes. While the precise biophysical mechanism of membrane disruption is still a matter of debate, common features of contemporary models are the formation of an MLKL oligomer and the direct association of the executioner four-helix bundle domain (4HB) of MLKL with biological membranes 6 , 7 , 8 , 9 , 10 .
In mouse cells, the expression of the murine MLKL 4HB domain alone (residues 1–125), 4HB plus brace helices (1–180), or the expression of phosphomimetic or other single site pseudokinase domain (PsKD) mutants is sufficient to induce membrane translocation, oligomerization and membrane destruction 4 , 9 . While capable of disrupting synthetic liposomes when produced recombinantly, similarly truncated and equivalent single site (PsKD) mutant forms of human MLKL do not robustly induce membrane-associated oligomerization and cell death without forced dimerization 11 , 12 , 13 . Furthermore, both mouse and human MLKL mutants have been reported that have the capacity to form membrane-associated oligomers, but fail to cause irreversible membrane disruption and cell death 9 , 13 . Recent studies have revealed that necroptosis downstream of MLKL phosphorylation and membrane association can be modulated by processes that engage the endosomal sorting complex required for transport (ESCRT) family of proteins. One model proposes a role for ESCRT in limiting necroptosis via plasma membrane excision and repair 14 while other models limit plasma membrane disruption by ESCRT-mediated release of phosphorylated MLKL in extracellular vesicles 15 , 16 , 17 and/or the internalization of phosphorylated MLKL for lysosomal degradation 17 . In mice, the absence of MLKL does not appear to have obvious deleterious developmental or homeostatic effects 4 , 18 . However, genetic deletion of Fadd , Casp8 or Ripk1 leads to inappropriate activation of MLKL and ensuing necroptosis during embryogenesis that is incompatible with life beyond embryonic day (E)10.5, E10.5 and 1–3 days post-natally, respectively 19 , 20 , 21 , 22 , 23 , 24 , 25 .
Exploring the precise physiological consequences of inappropriate MLKL activation in these scenarios is complicated by the fact that FADD, Caspase-8 and RIPK1 also play important roles in cellular processes other than modulation of MLKL-induced necroptotic cell death 23 , 26 , 27 , 28 , 29 , 30 . Aberrant levels of MLKL-dependent cell death contribute to disease in several genetic and experimental mouse models 23 , 31 , 32 , 33 , 34 , 35 . In humans, MLKL mRNA and protein levels are positively correlated with survival of patients with pancreatic adenocarcinoma, cervical-, gastric-, ovarian- and colon- cancers (reviewed by ref. 36 ). Interestingly, high levels of phosphorylated MLKL are associated with reduced survival in esophageal and colon cancer patients 37 . Two missense MLKL somatic mutations identified in human cancer tissue have been found to confer a reduction in necroptotic function in cell-based assays 4 , 13 . Very recently, siblings suffering from a novel neurodegenerative disorder were reported as homozygous for a rare haplotype involving a frameshift variant in MLKL , as well as an in-frame deletion of one amino acid in the adjacent fatty acid 2-hydroxylase ( FA2H ) gene 38 . The significant enrichment of an ultra-rare MLKL stop-gain gene variant p.Q48X has been reported in Hong Kong Chinese patients suffering from a form of Alzheimer’s disease 39 , however more common germline MLKL gene variants are only weakly associated with human disease in GWAS databases. We have identified a single base pair germline mutation of mouse Mlkl that encodes a missense substitution within the MLKL brace region and confers constitutive activation independent of upstream necroptotic stimuli. Given this mutant Mlkl allele is subject to the same developmental and environmental controls on gene expression as wild-type Mlkl , the postnatal lethality in these mice provides insight into the physiological and pathological consequences of dysregulated necroptosis. 
In parallel, these findings inform the potential functional significance of three common human MLKL polymorphisms that encode non-conservative amino acid substitutions within, or in close proximity to, the brace helix that is mutated in the Mlkl D139V mouse.

Results

Generation of a constitutively active form of MLKL

Mpl −/− mice, owing to genetic deletion of the major receptor for thrombopoietin, have only 10% the wild-type number of peripheral platelets. An ENU mutagenesis screen was performed to identify mutations that ameliorate thrombocytopenia in Mpl −/− mice via thrombopoietin independent platelet production 40 . A G 1 founder, designated Plt15 , had a modestly elevated platelet count of 189 × 10 6 per mL compared with the mean for Mpl −/− animals (113 ± 57 × 10 6 per mL) and yielded 19 Mpl −/− progeny. Ten of these mice had platelet counts over 200 × 10 6 per mL, consistent with segregation of a dominantly acting mutation (Fig. 1a ). Linkage analysis and sequencing identified an A to T transversion in Mlkl that was heterozygous in all mice with an elevated platelet count (Fig. 1b ). The Mlkl Plt15 mutation results in a non-conservative aspartic acid-to-valine substitution at position 139 within the first brace helix. In the full-length mMLKL structure, D139 forms a salt bridge with an arginine residue at position 30 (α2 helix) of the MLKL four-helix bundle (4HB) domain 4 (Fig. 1c ). This salt bridge represents one of a series of electrostatic interactions between residues in helix α2 of the MLKL 4HB domain and the two-helix ‘brace’ region. D139 of mouse MLKL is conserved in all MLKL orthologues in vertebrata reported to date (Fig. 1d ). We have shown that the exogenous expression of the 4HB domain of murine MLKL alone is sufficient to kill mouse fibroblasts whereas exogenous expression of full-length MLKL does not, suggesting an important role for this ‘electrostatic zipper’ in suppressing the killing activity of the MLKL 4HB 9 .
To determine if MLKL D139V exhibited altered ability to induce necroptotic cell death relative to MLKL WT , we stably expressed these full-length proteins under the control of a doxycycline-inducible promoter in immortalized mouse dermal fibroblasts (MDF) isolated from Wt, Mlkl −/− , Ripk3 −/− or Ripk3 −/− ;Casp8 −/− mice. While expressed at comparable levels, MLKL D139V induced markedly more death than MLKL Wt , on each of the genetic backgrounds tested (Fig. 1e–f , Supplementary Fig. 1a ), and formed a high molecular weight complex observable by BN-PAGE in the absence of exogenous necroptotic stimuli (Supplementary Fig. 1b ). This indicates that MLKL D139V is a constitutively active form of MLKL, capable of inducing necroptotic cell death independent of upstream signaling and phosphorylation by its activator RIPK3. Consistent with this interpretation, exogenous expression of MLKL D139V in Ripk3 −/− ;Casp8 −/− MDFs was sufficient to induce the organelle swelling and plasma membrane rupture characteristic of TNF-induced necroptosis when examined by Transmission Electron Microscopy (Fig. 1g ). Fig. 1: Murine MLKL D139V is a constitutively active form of MLKL. a Platelet counts from Mpl −/− mice (open circles, n = 80, 60) and offspring from matings between Plt15 mice and Mpl −/− mice (closed orange circles, n = 19, 113) on a C57BL/6 or mixed C57BL/6:129/Sv background used for linkage analysis (Mixed N 2 ). b A missense mutation (D139V) in the second exon of Mlkl was identified in Plt15 mutant mice. DNA sequence shown for wild type (top), a heterozygous mutant (middle), and a homozygous mutant (bottom). c Aspartate 139 contributes to an ‘electrostatic zipper’ joining brace helix 1 and the 4HB α2 helix of mouse MLKL (PDB code 4BTF) 4 . d Sequence logo of MLKL brace domain generated from multiple sequence alignment of all Vertebrata MLKL sequences (257) available on OrthoDB. 
e Mouse dermal fibroblasts (MDFs) of indicated genotypes were stably transduced with Mlkl Wt and Mlkl D139V and expression induced with doxycycline (dox, white bars) or not induced (black bars) for 21 h. PI-positive cells were quantified by flow cytometry. Means ± SEM are plotted for n = 4–8 experiments (a combination of biological repeats and independent experiments) for each genotype with the exception of R3 −/− C8 −/− + Mlkl Wt ( n = 2, ±range). f Western blot analysis of whole cell lysates taken 6 h post doxycycline induction. g Transmission electron micrographs of MDFs stimulated as indicated. Images selected for ( f ) and ( g ) are representative of 2–3 independent analyses with similar results. TBZ; TNF + Birinapant + Z-VAD-fmk. Full size image Mlkl D139V causes a lethal perinatal inflammatory syndrome To define the phenotypic consequences of constitutively active MLKL in the absence of any confounding effects resulting from Mpl -deficiency, all subsequent studies were performed on a Mpl +/+ background. Homozygous Mlkl D139V/D139V pups were born at expected Mendelian frequencies (Supplementary Table 1 ) and were ostensibly normal macroscopically and histologically at E19.5 (Supplementary Fig. 2a–d ). However, by 3 days of age, although outwardly indistinguishable from littermates (Fig. 2a ), they exhibited reduced body weight (Supplementary Fig. 2b ) and failed to thrive, with a maximum observed lifespan of 6 days under conventional clean housing conditions. Like Mlkl Wt/D139V mice, Mlkl null/D139V compound heterozygotes were present at the expected frequency at P21 and developed normally to adulthood (Supplementary Table 2 ). Thus, the constitutive activity of MLKL D139V was not affected by the presence of normal MLKL protein suggesting it is the absolute allelic dose of Mlkl D139V that determines perinatal lethality. 
To confirm that the phenotype of the ENU derived Mlkl D139V mice was due to the Mlkl D139V missense mutation, we independently generated Mlkl D139V mice using CRISPR-Cas9 genomic editing. Homozygote CRISPR- Mlkl D139V/D139V mice also died soon after birth (Supplementary Table 3 ).

Fig. 2: Homozygous Mlkl D139V neonates exhibit dispersed upper body inflammation. a Macroscopic appearance of Mlkl Wt/Wt , Mlkl Wt/D139V and Mlkl D139V/D139V mice at postnatal day 3. b Coronal section of mouth and neck region of postnatal day 2 litter mates stained with hematoxylin and eosin (H&E). Dilated blood vessels and edema are indicated by arrows. c Serial mandible sections from postnatal day 3 litter mates stained with H&E and anti-CD45. Inset black boxes are magnified in right panel. SL, sublingual gland. SM, submandibular gland. Images representative of n = 3–4 P3 pups per genotype. d H&E stained sections from mediastinum of postnatal day 2 litter mates. Thymic cortical thinning and pericardial infiltration are indicated by arrows. For full anatomical annotations for ( b ) and ( d ) see Supplementary Fig. 2h . ( b ) and ( d ) representative of n = 5–6 P2 pups examined with similar characteristics. Scale bars for ( b – d ) range from 50 to 1000 μm as indicated. Multiplex measurement of plasma cytokine levels at E19.5 ( e ) and postnatal day 3 ( f ). Each symbol represents one independent pup sampled; Mlkl Wt/Wt – blue circles, Mlkl Wt/D139V - red squares, Mlkl D139V/D139V - green triangles, with bar height and error bars representing mean ± SD respectively for n = 3 to 19 pups as indicated. * p ≤ 0.05, ** p ≤ 0.01, *** p ≤ 0.005 calculated using an unpaired, two-tailed t -test. Full size image

Hematoxylin-eosin stained sections from both P2 and P3 Mlkl D139V/D139V pups revealed multifocal acute inflammation characterized by neutrophilic infiltration, dilated blood vessels and edema (Fig. 2b ) in the dermis and subcutis of the head and neck.
These inflammatory features were not observed in Mlkl Wt/Wt or Mlkl Wt/D139V littermates, nor in Mlkl −/− mice of the same age (Supplementary Fig. 2i ). Cells of hematopoietic origin, revealed by immunohistochemical staining for CD45, were sparsely distributed throughout the lower head and neck and confined predominantly to a clearly delineated developing lymph node in Mlkl Wt/Wt and Mlkl Wt/D139V littermates (Fig. 2c ). In contrast, CD45 + cells were more numerous and distributed throughout the cutis, subcutis and salivary glands of Mlkl D139V/D139V pups (Fig. 2c ). A mixture of diffuse and focal inflammatory infiltration was also observed within the mediastinum and pericardial space of all P2/P3 Mlkl D139V/D139V pups examined, as was a paucity of thymic cortical lymphocytes (Fig. 2d , Supplementary Fig. 2e ), phenotypes not evident in E19.5 embryos (Supplementary Fig. 2d ). No other consistent lesions were observed by histopathology. Consistent with this inflammatory phenotype, significantly elevated levels of several pro-inflammatory cytokines and chemokines were evident in the plasma of both E19.5 and P3 Mlkl D139V/D139V pups (Fig. 2e, f ). Blood glucose levels were normal (Supplementary Fig. 2f, g ). Hematopoietic defects in Mlkl D139V mice Although blood cell numbers were unchanged in Mlkl D139V/D139V pups at E19.5 relative to Mlkl Wt/Wt and Mlkl Wt/D139V littermates, by P3 significant deficits were evident in total white blood cell count (due predominantly to reductions in lymphocyte numbers) and platelet numbers (Fig. 3a–c , Supplementary Fig. 3a ). Similarly, the numbers of hematopoietic stem and progenitor cells were present at normal proportions in fetal livers of E18.5 Mlkl D139V/D139V pups, although increased levels of intracellular ROS were uniformly evident in live cells, (Fig. 3d, e , Supplementary Fig. 3b,c ). By P2, deficits in CD150 + CD48 + and CD150 + CD48 − populations were present (Fig. 
3f ), accompanied by increased AnnexinV binding in live cells (which indicates phosphatidyl serine exposure) of all lineages (Fig. 3g ). In adult Mlkl Wt/D139V mice, numbers of hematopoietic stem and progenitor cells were unaffected (Fig. 3h ); however, upon myelosuppressive irradiation, recovery of hematopoietic cell numbers was delayed and characterized by increased expression of ROS and Annexin V (Supplementary Fig. 3d, e ). When challenged with the cytotoxic drug 5-fluorouracil (5-FU), blood cell recovery in Mlkl Wt/D139V mice was similarly delayed (Fig. 3i ). In competitive transplants in which test Mlkl Wt/D139V or Mlkl Wt/Wt marrow was co-injected with wild-type competitor marrow in 10:1 excess, as expected, Mlkl Wt/Wt marrow contributed to 90% of recipient blood cells 8 weeks after transplantation and maintained that level of contribution for 6 months (Fig. 3j ). In contrast, Mlkl Wt/D139V marrow performed poorly, contributing to 25% and 51% of recipient blood cells at these times (Fig. 3j ). Similarly, while wild-type fetal liver cells contributed to the vast majority of blood cells in irradiated recipients up to 6 months after transplantation, cells from Mlkl D139V/D139V embryos failed to compete effectively during this period (Fig. 3k ). Heterozygote Mlkl Wt/D139V fetal liver cells contributed poorly in the first month following the graft but recovered to contribute more after six months (Fig. 3k ). Thus, while tolerated under steady-state conditions, heterozygosity of Mlkl D139V is deleterious under conditions of hematopoietic stress. Bone marrow- derived HSCs from Mlkl Wt/D139V adults and fetal liver- derived HSCs from Mlkl Wt/D139V and Mlkl D139V/D139V pups also formed fewer and smaller colonies in the spleens of lethally irradiated recipient mice after 8 days (Supplementary Fig. 3f ). Fig. 3: Alterations in hematopoietic cells and defective emergency hematopoiesis in Mlkl D139V mice. 
a – c Absolute white blood cell (WBC), lymphocyte and platelet numbers in peripheral blood of E19.5 and P3 pups, n = 6, 27, 44, 41, 10, and 11 as indicated. d Proportions of HSC (Lineage - Sca-1 + c-kit + (LSK) CD150 + CD48 − ), MPP (LSK CD150 − CD48 − ), HPC-1 (LSK CD150 − CD48 + ) and HPC-2 (LSK CD150 + CD48 + ) 82 , n = 5 per genotype and ( e ) relative levels of ROS ( n = 4, 9, 5) ( f ) P2 bone marrow LSK populations ( n = 9, 18, and 11) and ( g ) relative AnnexinV binding ( n = 2, 11, 7). ( h ) HSC subtypes in adult bone marrow, n = 9 per genotype. a – h Each symbol represents one independent animal; Mlkl Wt/Wt – blue circles, Mlkl Wt/D139V - red squares, Mlkl D139V/D139V - green triangles, with bar height and error bars representing mean ± SD respectively, or range when n = 2. i Red and white blood cells and platelets in Mlkl Wt/Wt (blue circles) and Mlkl Wt/D139V (red squares) mice after treatment with 150 mg per kg 5FU or saline. Means ± SEM from one experiment in which three mice were sampled at each time point for each treatment group, similar results were obtained in an independent cohort. j Bone marrow from Mlkl Wt/Wt or Mlkl Wt/D139V mice on CD45 Ly5.2 background was mixed with wild-type CD45 Ly5.1 competitor bone marrow and transplanted into irradiated CD45 Ly5.1/Ly5.2 recipients. Peripheral blood mononuclear cells (PBMCs) quantified after 56 and 180 days. Mean ± SEM are shown (3 donors per genotype, 3–5 recipients per donor). k Fetal liver cells (CD45 Ly5.2 ; Mlkl Wt/Wt , Mlkl Wt/D139V or Mlkl D139V/D139V ) were transplanted into lethally irradiated recipients (CD45 Ly5.1/Ly5.2 ) together with competitor bone marrow (CD45 Ly5.1 ). Contribution to PBMCs 28 days and 180 days after transplantation. Mean ± SEM are shown (2–10 donors per genotype, 2–6 recipients per donor). Host contribution (CD45 Ly5.1/Ly5.2 ) is depicted in gray, competitor (CD45 Ly5.1 ) in white, and test (CD45 Ly5.2 ) in black for ( j ) and ( k ).
* p ≤ 0.05, ** p ≤ 0.01, *** p ≤ 0.005 calculated using an unpaired, two-tailed t -test. Full size image

Mlkl D139V fibroblasts are less sensitive to necroptosis

To examine if the constitutive activity of exogenously expressed MLKL D139V results in an enhanced propensity for necroptosis in cells that express MLKL D139V under the control of its endogenous promoter, we immortalized MDFs from Mlkl Wt/Wt , Mlkl Wt/D139V and Mlkl D139V/D139V littermates and from Mlkl −/− E19.5 pups. We observed no significant differences in basal cell death levels, nor any differences in the sensitivity of these cells to an apoptotic stimulus such as TNF plus Smac mimetic (Fig. 4a , Supplementary Fig. 4a ). Surprisingly, and in apparent contradiction to our initial observations using exogenous expression systems, endogenous expression of this Mlkl mutant revealed a significant and consistent decrease in sensitivity to TNF-induced necroptosis using three different pan-caspase inhibitors Q-VD-OPh, zVAD-fmk and IDN-6556/emricasan in a Mlkl D139V dose-dependent manner (Fig. 4a , Supplementary Fig. 4a ). MDFs isolated from Mlkl D139V/D139V homozygotes were up to 60% less sensitive to TNF-induced necroptosis compared with Mlkl Wt/Wt MDFs, but were not as resistant as Mlkl −/− MDFs (Fig. 4a ).

Fig. 4: MLKL D139V undergoes constitutive post-translational turnover. MDFs were isolated from Mlkl Wt/Wt , Mlkl Wt/D139V , Mlkl D139V/D139V or Mlkl −/− pups, immortalized and stimulated as indicated for 21 h for quantification of PI-positive cells using flow cytometry ( n = 4, 4, 4, and 6) ( a ), or for 4 h for western blot analysis ( b ). Mlkl −/− MDFs were stably transduced with doxycycline-inducible FLAG-MLKL WT and FLAG-MLKL D139V constructs to examine MLKL protein stability after doxycycline withdrawal ( c ) and in the presence of indicated compounds (FLAG-MLKL D139V ) ( d ).
e Immortalized MDFs from ( a ) were stimulated as indicated for 21 h for quantification of PI-positive cells using flow cytometry ( n = 2–3, 3–4, 4, 2–3). f E14.5 fetal liver cells from Mlkl Wt/Wt , Mlkl D139V/D139V or Mlkl −/− embryos were plated in the presence of indicated dose of IFN-β and colonies enumerated after 7 days ( n = 4–6). ( a , e and f ) represent mean ± SEM ( a , e ) or ± SD ( f ). b – e Representative images of at least three similar experiments. * p ≤ 0.05, ** p ≤ 0.01, *** p ≤ 0.005 calculated using an unpaired, two-tailed t -test. Full size image

While there were no obvious differences in the levels of MLKL Wt and MLKL D139V protein following doxycycline induced exogenous expression (Fig. 1f ), MLKL was virtually undetectable by Western blot in Mlkl D139V/D139V pup-derived fibroblasts immortalized and cultured ex vivo (Fig. 4b ). There was, however, no significant reduction in Mlkl gene transcript levels in these cells (Supplementary Fig. 4b ), suggesting that this reduction was post-transcriptional. A reduction in MLKL D139V protein levels was also evident in whole E14.5 embryo protein lysates and in single cell clones derived from HOXA9 factor dependent myeloid cell lines derived from Mlkl D139V/D139V E14.5 embryos (Supplementary Figs. 4c, j ). Lysates from E14.5 embryos also clearly show that Mlkl Wt/D139V heterozygotes have intermediate levels of MLKL, reflecting the intermediate sensitivity of Mlkl Wt/D139V MDFs to necroptotic stimuli (Supplementary Fig. 4c and Fig. 4a ).

MLKL D139V protein turnover requires proteasome activity

Measuring the half-life of endogenously expressed MLKL D139V is not possible using conventional ‘pulse chase’ methods because this mutant protein induces necroptotic cell death, so we capitalized on our previous observation that an N-terminally FLAG-tagged MLKL 4HB forms a high molecular weight membrane-associated complex just like the untagged form, but, unlike the untagged version, does not kill cells 9 .
Consistent with this observation, N-FLAG full-length mouse MLKL D139V did not induce cell death when inducibly expressed in Mlkl −/− MDFs (Supplementary Fig. 4f ). Using this system, we were able to measure the cellular turnover of MLKL by inducing N-FLAG-MLKL WT or N-FLAG-MLKL D139V expression in Mlkl −/− MDFs for 15 h using doxycycline then washing and culturing them in the absence of doxycycline for a further 2–24 h. In the absence of a stimulus (UT), the levels of N-FLAG-MLKL WT remained consistent over the 24-h period (Fig. 4c ), indicating that non-activated wild-type MLKL is a stable protein in MDFs. However, when these cells were treated with a necroptotic stimulus (TSI) to activate MLKL, the levels of wild-type MLKL rapidly declined even though these cells were unable to undergo a necroptotic cell death. Consistent with the fact that untagged MLKL D139V behaves as an auto-activated form of MLKL (Fig. 1e ), the half-life of N-FLAG-MLKL D139V (4–6 h) was similar to the WT version stimulated with TSI (Fig. 4c ). Thus, the absence of endogenously expressed MLKL D139V in E14.5 embryo lysates and cultured fibroblasts can be attributed to the reduced post-translational stability of this mutant auto-activated form of the protein. To determine which cellular mechanism(s) are required for the clearance of activated MLKL D139V , we included a series of proteasome, lysosome and specific protease inhibitors during the ‘chase’ period after doxycycline was withdrawn (schematic in Fig. 4d ). The doses of all inhibitors were carefully titrated and combined with pan-caspase inhibitor IDN6556 to minimize any toxicity-associated apoptotic cell loss during the chase period. To exclude any confounding RIPK3-mediated activation of the necroptotic pathway by proteasome inhibitors 41 (Supplementary Fig. 4f ), the same experiment was also performed in Mlkl −/− , RIPK3 −/− MDFs (Supplementary Fig. 4d ).
Even at the very low doses used, addition of the proteasome inhibitor PS341 was accompanied by reduced clearance of N-FLAG-MLKL D139V and the stabilization of higher molecular weight species that resemble mono- and poly-ubiquitinated MLKL (Fig. 4d , Supplementary Fig. 4d, i ). This PS341-mediated protection of activated MLKL was also evident when the same assay was performed for phospho(p)S345-N-FLAG-MLKL WT (Supplementary Fig. 4e ). The less potent proteasome inhibitor MG132 did not stabilize MLKL D139V to levels that could be resolved by Western blotting of total MLKL in this assay, but did facilitate some stabilization of (p)-N-FLAG-MLKL WT . Chloroquine, Bafilomycin and NH 4 Cl also partially protected against (p)-N-FLAG-MLKL WT clearance, supporting a potential role for lysosome-mediated degradation of natively phosphorylated MLKL WT 15 , 17 ; however, this was not observed for constitutively activated N-FLAG-MLKL D139V using this approach (Fig. 4d , Supplementary Fig. 4d ). Based on these findings, we hypothesized that this MLKL-clearance mechanism limits the capacity of MLKL D139V to kill Mlkl Wt/D139V and Mlkl D139V/D139V cells in culture and in vivo by maintaining protein levels below a critical threshold. To test whether this protective mechanism could be overwhelmed, we incubated MDFs with agents that have been shown to induce Mlkl expression (TNF, interferons (IFN) β and γ) 42 , 43 , 44 , or to inhibit its turnover (proteasome and lysosome inhibitors). MLKL D139V protein in untreated Mlkl D139V/D139V MDFs was undetectable by Western blot but became faintly detectable following addition of these stimuli (Fig. 4b and Supplementary Fig. 4g ). This correlated with moderate but statistically significant increases in cell death upon exposure to IFN-β alone and in combination with proteasome or lysosome inhibitors (Fig. 4e ), particularly when compared with the lack of sensitivity to conventional necroptotic stimuli (Fig. 4a ).
A similar allele-dose-dependent sensitivity is also evident in primary MDFs (Supplementary Fig. 4h ). To examine whether this mechanism may explain the reduced capacity of Mlkl D139V/D139V fetal liver cells to reconstitute an irradiated host (Fig. 2k ), ex vivo colony-forming assays were performed on fetal liver cells derived from Mlkl Wt/Wt and Mlkl D139V/D139V E14.5 littermates, alongside E14.5 livers taken from Mlkl −/− mice. Mlkl D139V/D139V cells showed significantly increased sensitivity to the inhibitory effects of IFN-β, with reduced colony formation at low doses of cytokine that affected Mlkl Wt/Wt and Mlkl −/− colony formation only marginally (Fig. 4f ). Factor-dependent myeloid cells generated through HOXA9 immortalization of E14.5 liver HSCs also demonstrated high rates of cell death under conventional FDM culture conditions when derived from Mlkl Wt/D139V or Mlkl D139V/D139V embryos (Supplementary Fig. 4k ). Together, these experiments provide further evidence for the existence of steady-state MLKL surveillance and turnover mechanisms that suppress cell death by lowering the abundance of activated MLKL below a killer threshold at the cellular level 6 , 14 , 15 , 16 , and provide an in vivo precedent for both the existence of this phenomenon and the lethal consequences of its dysregulation in the form of the Mlkl D139V mouse. To test whether the lethal inflammation in Mlkl D139V/D139V neonates was mediated by direct or indirect activation of the inflammasome by active MLKL, we crossed this line with the Caspase 1/11 null mouse strain 45 , 46 , 47 . This did not enhance the lifespan of Mlkl D139V/D139V pups (Table 1 ). The combined genetic deletion of Casp8 and Ripk3 also did not rescue or extend the life of Mlkl D139V/D139V mice, indicating that postnatal lethality is not mediated by bystander extrinsic apoptotic cell death that may occur secondary to initial waves of MLKL D139V -mediated necroptosis (Table 1 ).
The genetic deletion of Tnfr1, Myd88 or Ifnar individually did not provide any extension to the lifespan of Mlkl D139V homozygote pups (Table 1 ). These data indicate that the removal of any one of these routes to NF-κB- and interferon-mediated gene upregulation, inflammation or apoptotic cell death is not sufficient to protect mouse pups against a double allelic dose of Mlkl D139V . Table 1 Postnatal lethality in Mlkl D139V/D139V homozygotes is independent of Tnfr1, Myd88, Ripk3 , Casp8, Casp1, and Casp11 . Common human missense MLKL variants map to the brace region Given the severe inflammatory phenotype of murine Mlkl D139V/D139V neonates and the significant defects in stress hematopoiesis observed in murine Mlkl Wt/D139V adults, we explored the prevalence of brace region variation in human MLKL . Examination of the gnomAD database 48 , which contains human MLKL exome or genome sequence data from over 140,000 individuals, revealed that the second and third highest frequency human MLKL missense coding variants, rs34515646 (R146Q) and rs35589326 (S132P), alter the same brace helix (Table 2 , Fig. 5a ). The fourth most common human MLKL polymorphism, rs144526386 (G202*V), is a missense polymorphism identified exclusively in the context of a shorter splice isoform of MLKL (*) named ‘MLKL2’ 49 (Table 2 , Fig. 5b ). The full-length canonical transcript of MLKL encodes a 471 amino acid protein, while MLKL2 is an alternatively spliced isoform of MLKL that is 263 amino acids long. MLKL2 lacks a large portion of the pseudokinase domain, which functions to repress the killing potential of the 4HB domain 6 , 7 , 8 , 9 and to recruit co-effectors like RIPK3 and HSP90 13 , 50 , 51 , 52 . Glycine202* is encoded by an extension to exon 9 that is unique to the MLKL2 splice isoform (Fig. 5a, b ). Table 2 Human MLKL brace helix polymorphism frequencies. Fig.
5: Three of the four highest frequency missense human MLKL SNPs encode non-conservative amino acid substitutions within or adjacent to the brace helix region. a S132 and R146 (magenta) are located on either side of D140 (yellow; equivalent to mouse D139) in the first human MLKL brace helix. Alternate amino acids encoded by human polymorphisms are indicated in parentheses. b G202 is predicted to lie on an α helix unique to the MLKL2 isoform and to form an interface along with S132 and R146. The mouse equivalent of human rs35589326 (hMLKL S132P ), mMLKL S131P , spontaneously forms membrane-associated high molecular weight complexes following Blue Native (BN) PAGE ( c ) and kills MDFs ( d ) in the absence of extrinsic necroptotic stimuli when expressed in mouse dermal fibroblasts for 6 h ( c ) and 21 h ( d ), respectively. C; cytoplasmic fraction, M; crude membrane fraction, TSI; TNF, Smac-mimetic and IDN6556, Chlor; Chloroquine. c Representative of two independent experiments with similar results. Error bars in ( d ) indicate the mean ± SEM of 4–5 independent experiments. e Schematic showing brace helix variant combinations identified as alleles in trans in three CRMO patients. f Missense tolerance ratios (MTRs) are mapped onto the structure of MLKL to show regions that have low tolerance to missense variation in the human population (red) and regions that have increased tolerance to missense variation (blue), normalized to the gene’s MTR distribution. g Multiple sequence alignment (MSA) conservation scores are mapped onto the structure of MLKL to show regions that are highly conserved through evolution (red) and regions that are less conserved through evolution (blue).
While the amino acid substitution MLKL R146Q is classified as ‘tolerated’ and ‘benign’ by the SIFT/PolyPhen-2 algorithms 53 , 54 (Supplementary Table 1 ), R146 of human MLKL shows NMR chemical shift perturbations in the presence of the negatively charged IP3 and IP6 phospholipid head groups, indicating a possible role in membrane association and disruption 11 , 55 . Ser-132 lies before the first structured residue of the first brace helix in human MLKL (Fig. 5a ) 13 , 56 , 57 . A Serine-to-Proline substitution at this position is predicted to significantly impact the conformation of the immediately adjacent W133 (brace helix) and, in turn, the proximal W109 within the 4HB domain (Supplementary Fig. 5a ). When mapped to a model of MLKL splice-isoform 2 49 , Glycine 202* is predicted to lie on an isoform 2-specific helix and to form an interface along with S132 and R146 of brace helix 1. While the precise structural consequence of these three brace polymorphisms is unknown, modeling of human MLKL predicts that disruption in the brace region favors adoption of an activated conformation 13 . Consistent with this prediction, the murine equivalent of the human S132P variant, mMLKL S131P , formed high molecular weight membrane-associated complexes and killed MDFs in the absence of a necroptotic stimulus (Fig. 5c, d ) when expressed at close to endogenous levels (Supplementary Fig. 5b ). As with mMLKL D139V , unstimulated mouse dermal fibroblasts generated from first-generation heterozygous and homozygous mutant pups of a recently generated CRISPR-modified mMlkl S131P mouse line demonstrated a clear reduction in MLKL protein levels relative to those prepared from wild-type littermates (Supplementary Fig. 5c ), though cellular clearance was not as complete as observed for mMLKL D139V .
Together, these data indicate that constitutive activation and reduced protein stability are not unique, idiosyncratic features of mMLKL D139V , but are also features of a closely situated MLKL brace mutant, mMLKL S131P . MLKL brace variants occur in trans more frequently in CRMO To investigate whether human MLKL brace region polymorphisms play a role in human autoinflammatory disease, we examined their frequency in cohorts suffering from ankylosing spondylitis (AS), chronic recurrent multifocal osteomyelitis (CRMO), Guillain Barré Syndrome (GBS) and Synovitis, Acne, Pustulosis, Hyperostosis and Osteitis (SAPHO) Syndrome. The individual minor allele frequencies of R146Q, S132P, and G202*V are not enriched in these disease cohorts relative to healthy controls when population distribution is accounted for (Supplementary Tables 4 and 5 ). However, these alleles occur in trans (making ‘compound heterozygotes’; schematic in Fig. 5e ) in 3 out of 128 CRMO patients. This is 29 times the frequency at which these combinations are observed in healthy NIH 1000 Genomes samples (where there are only two compound heterozygotes for these polymorphisms among 2504 healthy individuals sequenced), and 10–12 times the frequency when only European CRMO patients and two separate healthy European control populations are compared (Table 3 ). Table 3 Human MLKL brace helix compound heterozygotes in CRMO vs healthy controls. Discussion In contrast to apoptosis, necroptosis is widely considered to be an inflammatory form of cell death. However, definitive evidence for this proposition has yet to emerge. Because MLKL is activated by inflammatory stimuli such as TNF, it is very difficult to separate cause from effect. The serendipitous identification of an auto-activating mutant of MLKL ( Mlkl D139V ) in mice has allowed us to explore the consequences of inappropriate necroptosis in the absence of such confounding factors.
Furthermore, it has led to significant insights into the critical adult hematopoietic and perinatal developmental processes that are most sensitive to excessive MLKL activation, and into the physiological mechanisms that have evolved to neutralize activated MLKL. In the absence of a robust immunohistochemical marker for RIPK3-independent necroptosis, it is not possible to pinpoint exactly which cell type(s) undergo necroptosis in Mlkl D139V mice. Nevertheless, the presence of high levels of circulating pro-inflammatory cytokines in Mlkl D139V/D139V pups at E19.5, relative to Mlkl Wt/Wt and Mlkl Wt/D139V littermates, suggests that necroptosis and the ensuing inflammation begin in the sterile in utero environment. This is not enough to overtly retard prenatal development or affect hematopoietic cell populations. However, upon birth and/or exposure to the outside environment, the capacity of homozygous Mlkl D139V/D139V pups to suppress MLKL D139V activity is overwhelmed and they die within days of birth. This is clearly a dose-dependent effect, because both Mlkl D139V/Wt and Mlkl D139V/null heterozygous mice are viable. Postnatal death cannot be prevented by combined deficiencies in Ripk3 and Casp8 , nor by deficiency of other important inflammatory genes including Tnfr1, Myd88 or Ifnar . In light of the elevated levels of circulating G-CSF, IL-6 and IL-5 observed, the role of these key mediators in the initiation or potentiation of pre- and perinatal inflammation in Mlkl D139V/D139V pups will be the subject of future investigations. The Mlkl D139V mutation was initially identified for its capacity to moderately increase platelet production independent of the thrombopoietin receptor Mpl. While the mechanism underlying this observation remains unclear, it follows observations by others that another member of the necroptotic pathway, RIPK3, plays a role in platelet activation 58 .
The reduced platelet levels observed in Mlkl D139V/D139V pups are unlikely to be the sole cause of death, given that much more severe thrombocytopenia is not lethal in Mpl −/− mice 40 . Difficulty with suckling, due to inflammatory infiltration of the head and neck and resulting failure to thrive, is one possible explanation for the lethality in Mlkl D139V/D139V pups. However, the narrow window of mortality for these pups and the marked pericardial immune infiltration make heart failure another potential cause of sudden neonatal death. The Mlkl D139V mouse reveals that maintaining MLKL levels below a threshold can prevent necroptotic activation. This strain is a potential tool for the mechanistic and physiological examination of MLKL-mediated extracellular vesicle generation, or of other cell death-independent roles related to inflammation, unconfounded by RIPK3 activation. While others have recently shown that ESCRT-dependent repair or extracellular vesicle extrusion can help protect membranes from limited MLKL damage 14 , 15 , 16 , and that p-MLKL can be internalized and degraded by the lysosome 17 , our data also suggest a role for the proteasome in the disposal of activated MLKL, be it directly, or in its capacity to generate free ubiquitin. This creates the possibility that these mechanisms and the previously described ESCRT mechanisms intersect in some way. Finally, the ability of these mechanisms to hold single gene-dose levels of active MLKL in check without deleterious consequences in vivo supports the idea that direct inhibition of activated MLKL may be an effective means to therapeutically prevent unwanted necroptotic cell death.
Similarly, the Mlkl D139V mouse and assorted relevant crosses may prove to be a useful tool for the further examination of whether ROS production is coincident with, causative of, or consequential to necroptotic plasma membrane disruption in varied tissue types and under highly physiologically relevant contexts (recently reviewed in ref. 59 ). While any mouse MLKL–human MLKL comparisons must be made cautiously in light of species-specific structural and mechanistic differences 5 , 12 , 13 , it is notable that out of over 140,000 individuals surveyed, there is only one recorded case in the gnomAD database of a human carrying a substitution equivalent to that of the mMlkl D139V mouse ( hMLKL D140V ; rs747627247), and this individual is heterozygous for the variant. To our surprise, 3,841 individuals in gnomAD (55 of whom are homozygous) carry a very closely situated MLKL brace region variant, MLKL S132P . Our CRISPR-generated Mlkl S131P mouse equivalent supports the connection between constitutive MLKL activation and decreased MLKL protein stability. Preliminary observations show that this variant manifests in a much milder and more context-specific phenotype in mice than mMlkl D139V , consistent with its high-frequency presence in the human population. When overlaid with structural, biochemical, cell- and animal-based evidence of function, these observations make it tempting to speculate that these human MLKL brace region variants lead to altered MLKL function and/or regulation in what is most likely a highly tissue-, context- or even pathogen-specific way 60 , 61 , 62 . While larger cohorts and examination of independent patient groups will be required to confirm the statistical enrichment of human MLKL brace variants occurring in trans in the autoinflammatory disease CRMO, this patient cohort offers a tantalizing clue to their potential as modifiers of complex, polygenic inflammatory disease in present-day humans.
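The compound-heterozygote enrichment discussed above (3 of 128 CRMO patients versus 2 of 2504 healthy 1000 Genomes individuals) can be sanity-checked with a short, stdlib-only calculation. This is an illustrative one-sided hypergeometric (Fisher-style) tail computation, not the authors' published analysis; only the counts are taken from the text.

```python
from math import comb

def fisher_upper_tail(k, n_case, n_ctrl, k_total):
    """P(X >= k) for X ~ Hypergeometric: the chance of drawing at least k
    carriers into a cohort of n_case when k_total carriers exist in the
    pooled population of n_case + n_ctrl individuals."""
    n_total = n_case + n_ctrl
    denom = comb(n_total, k_total)
    return sum(
        comb(n_case, i) * comb(n_ctrl, k_total - i)
        for i in range(k, min(n_case, k_total) + 1)
    ) / denom

# Counts quoted in the text: compound heterozygotes / cohort size
crmo_het, crmo_n = 3, 128      # CRMO patients
ctrl_het, ctrl_n = 2, 2504     # healthy 1000 Genomes individuals

freq_ratio = (crmo_het / crmo_n) / (ctrl_het / ctrl_n)
p = fisher_upper_tail(crmo_het, crmo_n, ctrl_n, crmo_het + ctrl_het)
print(f"frequency ratio ~{freq_ratio:.0f}x, one-sided p = {p:.2e}")
```

Running this reproduces the roughly 29-fold frequency difference quoted above; the hypergeometric tail probability is one simple way to gauge how unlikely three carriers in 128 would be if carriers were distributed evenly across patients and controls.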
Methods Mice All mice were backcrossed to C57BL/6 mice for >10 generations or generated on a C57BL/6J background. Mlkl −/− , Tnfr1 −/− , Myd88 −/− , IFNAR1 −/− , Ripk3 −/− , Casp8 −/− , and Casp1/Casp11 −/− mice were generated as described 4 , 45 , 46 , 63 , 64 , 65 , 66 , 67 . Mice designated as E19.5 were obtained by Caesarean section from mothers that received progesterone injections at E17.5 and E18.5. Independent mouse strains that carry the D139V or S131P mutation in the Mlkl gene (MLKL D139V CRISPR) were generated using CRISPR/Cas9 as previously described 68 . For D139V, one sgRNA of the sequence GGAAGATCGACAGGATGCAG (10 ng per μL), an oligo donor of the sequence ATTGGAATACCGTTTCAGATGTCAGCCAGCCAGCATCCTGGCAGCAGGAAGATCGACAGGTTGCAGAAGAAGACGGgtgagtctcccaaagactgggaaagagtaggccagggttgggggtagggtgg (10 ng per μL) and Cas9 mRNA (5 ng per μL) were injected into the cytosol of C57BL/6J zygotes. Mice were sequenced across the mutated region to confirm incorporation of the altered codon and analysis was performed after at least 2 back-crosses to C57BL/6. The same procedure was followed for the generation of MLKL S131P CRISPR mice, using sgRNA (CTGTCGATCTTCCTGCTGCC) and oligo donor (CTGTTGCTGCTGCTTCAGGTTTATCATTGGAATACCGTTTCAGATGTCAGCCAGCCAGCACCATGGCAGCAGGAAGATCGACAGGATGCAGAGGAAGACGGgtgagtctcccaaagactggga). Sex was not recorded for mice that were sampled at E19.5, P2 and P3. Experiments using adult mice were performed with a combination of both males and females between 8 and 12 weeks of age. Mice were housed in a temperature and humidity controlled specific pathogen free facility with a 12 h:12 h day night cycle. The WEHI Animal Ethics Committee approved all experiments in accordance with the NHMRC Australian code for the care and use of animals for scientific purposes. Linkage analysis We mapped the chromosomal location of the Plt15 mutation by mating affected mice to 129/Sv Mpl −/− mice to produce N 2 (backcross) and F 2 (intercross) generations. 
A genome-wide scan using 20 N 2 mice with the highest platelet counts (287 ± 74 × 10 6 per mL, compared with 133 ± 75 × 10 6 per mL for the overall population; Fig. 1a ) localized the mutation to a region of chromosome 8 between D8Mit242 and D8Mit139 , and linkage to this region was then refined. Analysis of the F 2 population revealed a significant reduction in the frequency of mice homozygous for C57BL/6 alleles in this interval (e.g., at D8Mit200 , 3/81 F 2 mice were homozygous C57BL/6 ; p = 2.2 × 10 −5 , χ 2 -test), suggesting the Plt15 mutation results in recessive lethality. The refined 2.01 Mb interval contained 31 annotated genes, only five of which appeared to be expressed both in the hematopoietic system and during embryogenesis: DEAD-box proteins 19a and 19b ( Ddx19a and Ddx19b ), Ring finger and WD repeat domain 3 ( Rfwd3 ), Mixed lineage kinase domain-like ( Mlkl ), and WD40 repeat domain 59 ( Wdr59 ). Sequencing identified a single mutation, an A to T transversion in Mlkl , that was heterozygous in all mice with an elevated platelet count. Reagents Antibodies: rat anti-mRIPK3, rat anti-mMLKL 8F6 (selected for affinity to residues 1–30 of mouse MLKL) and rat anti-MLKL 3H1 4 (MLKL brace region) were produced in-house. Anti-Pro-Caspase 8 (#4927) and anti-GAPDH (#2113) antibodies were purchased from Cell Signaling Technology. Anti-mouse MLKL pS345 (ab196436) and anti-Actin (ab5694) were purchased from Abcam. Anti-VDAC (AB10527) was purchased from Millipore. Fc-hTNF was produced in-house and used at a final concentration of 100 ng per mL. Recombinant mouse IFN-γ and IFN-β were purchased from R&D Systems (Minneapolis, MN, USA). Q-VD-OPh and zVAD-fmk were purchased from MP Biomedicals (Seven Hills, NSW, Australia). Smac mimetic, also known as Compound A, and the caspase inhibitor IDN-6556 were gifts from TetraLogic (Malvern, PA, USA). Propidium iodide, doxycycline, and bafilomycin were purchased from Sigma-Aldrich (Castle Hill, NSW, Australia).
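The segregation distortion reported under the linkage analysis above (3 of 81 F2 mice homozygous C57BL/6, where a Mendelian cross predicts roughly one quarter) can be sketched as a χ² goodness-of-fit test using only the standard library. The 1:2:1 expectation and the 1-degree-of-freedom tail approximation are our assumptions; the authors' exact test setup may differ, so the p-value here is of the same order as, but not identical to, the published 2.2 × 10⁻⁵.

```python
from math import erfc, sqrt

def chi2_gof(observed, expected):
    """Pearson chi-square goodness-of-fit statistic."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

def chi2_p_1df(stat):
    """Upper-tail p-value for a chi-square statistic with 1 degree of
    freedom, via the complementary error function."""
    return erfc(sqrt(stat / 2))

# 81 F2 mice; a 1:2:1 Mendelian cross predicts 1/4 homozygous C57BL/6
n, observed_hom = 81, 3
expected_hom = n / 4
stat = chi2_gof([observed_hom, n - observed_hom],
                [expected_hom, n - expected_hom])
p = chi2_p_1df(stat)
print(f"chi2 = {stat:.1f}, p = {p:.1e}")
```

Collapsing the genotypes into homozygous-versus-other gives a single degree of freedom; testing all three genotype classes separately would use two degrees of freedom and requires the heterozygote count, which is not given in the text.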
Cell line generation and culture Primary mouse dermal fibroblasts were prepared from skin taken from the head and body of E19.5 pups delivered by C-section or from the tails of adult mice 69 . Primary MDFs were immortalized by stable lentiviral transduction with SV40 large T antigen. Immortalized MDFs were stably transduced with exogenous mouse MLKL cloned into the pFTRE3G vector, which was generated by Toru Okamoto and allows doxycycline-inducible expression as described 4 . The following oligonucleotides were used for the assembly of constructs: mMlkl fwd, 5′-CGCGGATCCGCGCCACCatggataaattgggacagatcatcaag-3′; mMlkl rev, 5′-CGGAATTCttacaccttcttgtccgtggattc-3′; N-FLAG mMlkl fwd, 5′-CGCGGATCCAAgccaccatggcgcgccaggac-3′; N-FLAG mMlkl rev, 5′-CGCGGATCCttacaccttcttgtccgtggattc-3′; mMlkl D139V fwd, 5′-gaagatcgacaggTtgcagaggaagac-3′; mMlkl D139V rev, 5′-gtcttcctctgcaAcctgtcgatcttc-3′; mMlkl S131P fwd, 5′-gccagcctgcaCcctggcagcag-3′; mMlkl S131P rev, 5′-ctgctgccaggGtgcaggctggc-3′. Cells were maintained in culture as previously described 44 . 4-hydroxytamoxifen-regulated HOXA9-immortalized factor-dependent myeloid (FDM) cells were generated from mouse E14.5 fetal liver cells and cultured as described previously 70 . Cell death assays Flow cytometry-based cell death assays were performed using 5 × 10 4 MDFs per well in 24- or 48-well tissue culture plates 4 . Doxycycline (20 ng per mL) was added together with death stimuli. Fc-hTNF was produced in house and used at 100 ng per mL; Compound A Smac mimetic and IDN6556 were used at 500 nM and 5 μM, respectively. zVAD-fmk and QVD-OPh were used at 25 and 10 μM, respectively. Mouse and human interferons γ and β were used at 30 ng per mL, PS341 and MG132 at 2 and 200 nM, respectively, and Bafilomycin at 300 nM. For Incucyte automated imaging, MDFs were plated at a density of 8 × 10 3 cells per well of a 96-well plate and permitted to attach for 3 h. FDMs were plated at a density of 5–10 × 10 3 cells per well of a 48-well plate.
0.2 μg per mL propidium iodide was included in the media alongside stimuli as indicated. Images were recorded at intervals of 1 and 2 h using an IncuCyte S3, and the numbers of PI-positive cells per mm 2 at each time point were quantified and plotted using IncuCyte S3 software. MLKL turnover assays 5 × 10 4 MDFs per well were plated in 24-well tissue culture plates and allowed to settle. Doxycycline (20 ng per mL) ± TNF, Smac mimetic and IDN6556 was added. After 15 h, ‘no dox’ and ‘0’ wells were harvested. Media was removed from remaining wells, cells were washed with PBS, and fresh media containing IDN6556 was re-added. Wells were then harvested 2, 4, 6, 8, and 24 h from this point. Cells were harvested by direct lysis in reducing SDS-PAGE loading buffer. MLKL protection assays 5 × 10 4 MDFs per well were plated in 24-well tissue culture plates and allowed to settle. Doxycycline (20 ng per mL) was added. After 18 h, ‘no dox’ and ‘T 0 ’ samples were harvested. Media was removed and cells were washed before addition of fresh media containing TSI or IDN6556 alone for 3 h. Cells were washed again and media restored with IDN6556 alone (UT), or IDN6556 + inhibitor (MG132 (200 nM), PS341 (10–40 nM), Chloroquine (50 μM), Bafilomycin (300 nM), Ca-074 Me (20 μM), TLCK (100 μM) and AEBSF (100 μM)) for a further 21 h. Cells were harvested by direct lysis in reducing SDS-PAGE loading buffer. UBA pull downs 2 × 10 6 MDFs stably transduced with doxycycline-inducible N-FLAG-mMLKL WT or N-FLAG-mMLKL D139V expression constructs were seeded and allowed to settle overnight before stimulation with 1 μg per mL doxycycline ± TSI for 5 h. Cells were lysed in Urea-based UBA pull-down buffer, and ubiquitylated proteins were enriched and Usp21-treated as described previously 71 . Transmission electron microscopy Murine dermal fibroblasts prepared from mice of the indicated genotypes were left untreated or were stimulated with the indicated agents for the indicated times.
Then, cells were fixed with 2% glutaraldehyde in 0.1 M phosphate buffer, pH 7.4, postfixed with 2% OsO 4 , dehydrated in ethanol, and embedded in Epok 812 (Okenshoji Co.). Ultrathin sections were cut with an ultramicrotome (Ultracut N or UC6; Leica), stained with uranyl acetate and lead citrate, and examined with a JEOL JEM-1400 electron microscope. The viability of a portion of these cells was determined by measuring LDH release as described previously 72 . Mouse histopathology Caesarean-sectioned E19.5 and day P2/P3 pups were euthanized by decapitation and fixed in 10% buffered formalin. Five-micrometer coronal sections were taken at 200-μm intervals for the full thickness of the head, and 5-μm sagittal sections were taken at 300-μm intervals for the full thickness of the body. A thorough examination of these sections was performed by histopathologists Aira Nuguid and Tina Cardamome at the Australian Phenomics Network, Melbourne. Findings were confirmed by Veterinary Pathologist Prof. John W. Finney, SA Pathology, Adelaide, and Clinical Pathologist Prof. Catriona McLean, Alfred Hospital, Melbourne. Measurement of relative thymic cortical thickness Representative images of thymus sections were analysed to determine relative cortical thickness using ImageJ. Briefly, medullary areas were identified on the basis of H&E staining and removed from the larger thymus structure using the ImageJ Image Calculator function to isolate the cortical region. The thickness of the cortical region, defined by the radius of the largest disk that can fit at a pixel position, was determined using the Local Thickness plugin in ImageJ. Immunohistochemistry Following terminal blood collection, P0 and P3 pups were fixed for at least 24 h in 10% buffered formalin and paraffin-embedded before microtomy. Immunohistochemical detection of cleaved caspase 3 (Cell Signaling Technology #9661) and CD45 (BD) was performed as described previously 23 .
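The largest-inscribed-disk definition used in the thymic cortical thickness measurement above can be illustrated with a brute-force sketch. This is a conceptual toy on a tiny binary mask, not the ImageJ Local Thickness plugin itself (which uses an optimized distance-ridge algorithm); following the text, each pixel is assigned the radius of the largest disk that covers it while fitting inside the foreground.

```python
import math

def distance_to_background(mask):
    """Brute-force Euclidean distance from each foreground pixel to the
    nearest background pixel (adequate for tiny illustrative masks)."""
    h, w = len(mask), len(mask[0])
    background = [(r, c) for r in range(h) for c in range(w) if not mask[r][c]]
    dist = [[0.0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            if mask[r][c]:
                dist[r][c] = min(math.hypot(r - br, c - bc)
                                 for br, bc in background)
    return dist

def local_thickness(mask):
    """Assign each foreground pixel the radius of the largest disk that
    covers it while fitting entirely inside the foreground region."""
    h, w = len(mask), len(mask[0])
    dist = distance_to_background(mask)
    centres = [(r, c) for r in range(h) for c in range(w) if mask[r][c]]
    thick = [[0.0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            if mask[r][c]:
                thick[r][c] = max(dist[cr][cc] for cr, cc in centres
                                  if math.hypot(r - cr, c - cc) < dist[cr][cc])
    return thick

# A toy 'cortex': a uniform band five pixels thick between background rows
mask = [[0] * 8] + [[1] * 8 for _ in range(5)] + [[0] * 8]
thickness = local_thickness(mask)
```

On this uniform band, every foreground pixel inherits the radius of the band's central inscribed disk, so the map is constant across the band, which is the behavior that makes local thickness a useful per-pixel width measure for an irregular cortical region.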
Cytokine quantification All plasma was stored at −80 °C prior to cytokine analyses. Cytokines were measured by Bioplex Pro mouse cytokine 23-plex assay (Bio-Rad #M60009RDPD) according to the manufacturer’s instructions. When samples were designated ‘<OOR’ (below reference range) for a particular cytokine, they were assigned the lowest value recorded for that cohort (as opposed to complete exclusion or inclusion as ‘zero’, which would artificially inflate or deflate group averages, respectively). Values are plotted as fold change relative to the mean value for the Wt/Wt samples, and p values were calculated in Microsoft Excel using a two-tailed t -test, assuming unequal variance. Data are only shown for cytokines that displayed statistically significant differences between genotypes at E19.5, P3, or both. Hematological analysis Blood was collected from P0 and P3 pups into EDTA-coated tubes using heparinized glass capillary tubes from the neck cavity immediately after decapitation. After centrifugation at 500 g for 5 min, 5–15 μL of plasma was carefully removed and this volume was replaced with PBS. Blood cells were resuspended and diluted 8–20-fold in DPBS for automated blood cell quantification using an ADVIA 2120 hematological analyzer within 6 h of harvest. Blood was collected from adult mice retro-orbitally into tubes containing EDTA and analyzed using an ADVIA 120 automated hematological analyzer (Bayer). Transplantation studies Donor bone marrow or fetal liver cells were injected intravenously into recipient C57BL/6-CD45 Ly5.1/Ly5.2 mice following 11 Gy of gamma-irradiation split over two equal doses. Recipient mice received neomycin (2 mg per mL) in the drinking water for 4 weeks. Long-term capacity of stem cells was assessed by flow cytometric analysis of donor contribution to recipient mouse peripheral blood and/or hematological organs up to 6 months following engraftment.
Recovery from cytotoxic insult was assessed by automated peripheral blood analysis at regular intervals following treatment of mice with 150 mg per kg 5-fluorouracil (5-FU). Flow cytometry To analyze the contribution of donor and competitor cells in transplanted recipients, blood cells were incubated with a combination of the following antibodies: Ly5.1-PE, Ly5.2-FITC, Ly5.2-biotin or Ly5.2-PerCP-Cy5.5 (antibodies from Becton Dickinson, CA). If necessary, cells were incubated with streptavidin-PE-Cy5.5 (BD), mixed with propidium iodide (Sigma) and analysed on an LSRI (BD Biosciences) flow cytometer. To analyse the stem- and progenitor-cell compartment, bone marrow cells were incubated with biotinylated or Alexa700-conjugated antibodies against the lineage markers CD2, CD3, CD4, CD8, CD34, B220, CD19, Gr-1, and Ter-119. For stem and progenitor cell detection, antibodies against cKit, Sca-1, CD48, AnnexinV, CD105, FcγRII/III or CD135 were used in different combinations (see antibody list for details). Finally, FluoroGold (AAT Bioquest Cat#17514) was added for dead cell detection. Cells were then analysed on LSRII or Fortessa1 (BD Biosciences) flow cytometers. Reactive oxygen species (ROS) detection ROS was detected using Chloromethyl-H 2 DCFDA dye according to the manufacturer’s instructions (Invitrogen Cat#C6827). In brief, bone marrow cells were loaded with 1 μM Chloromethyl-H 2 DCFDA for 30 min at 37 °C. Loading buffer was then removed, and cells were placed into 37 °C StemPro-34 serum-free medium (ThermoFisher Cat#10639011) for a 15-min chase period. After incubation, cells were placed on ice and stained with surface antibodies suitable for FACS analysis. Cells were analysed using an LSRII flow cytometer (Becton Dickinson). E14.5 fetal liver colony-forming assays 1 × 10 4 fetal liver cells were plated as 1 mL cultures in 35 mm Petri dishes in DMEM containing 10% FCS, 0.3% agar and 10 4 U per mL GM-CSF.
IFN-β was added to the cultures in increasing concentrations from 0 to 30 ng per mL. Colony formation was scored after 7 days of incubation at 37 °C in a fully humidified atmosphere with 10% CO 2 . Quantitative PCR RNA was prepared using Trizol (Invitrogen) according to the manufacturer’s instructions and 10 μg was used for first-strand cDNA synthesis using SuperScript II (Life Technologies). 0.5 μg of cDNA was then used in a TaqMan PCR reaction with Universal PCR mastermix and murine Mlkl (Mm1244222_n1) and GAPDH (Mm99999915_m1) TaqMan probes (ThermoFisher) on an ABI 7900 Fast Real-Time PCR instrument (Applied Biosystems). Mlkl expression relative to the GAPDH control was determined using the SDS version 2.3 program (Applied Biosystems) and expressed as ΔCT values. Statistics (mouse and cell-based assays) Please consult figure legends for a description of the error bars used. All data points signify independent experimental repeats and/or biologically independent repeats. All p values were calculated in Microsoft Excel or Prism using an unpaired, two-tailed t -test, assuming unequal variance and not adjusted for multiple comparisons. Asterisks signify that p ≤ 0.05 (*), p ≤ 0.01 (**) or p ≤ 0.005 (***). All comparisons were made between Mlkl Wt/Wt and Mlkl D139V/D139V groups only (with the exception of data derived from adult mice, where comparisons were between Mlkl Wt/Wt and Mlkl Wt/D139V groups only). Whole-exome sequencing DNA from CRMO probands and their family members (when available) was purified from saliva or blood and prepared for whole-exome sequencing (WES). The samples underwent WES at several different times and were enriched using the Agilent SureSelect Human All Exon V4, V5 or V6 + UTR kits (Agilent Technologies) before sequencing at either Otogenetics, Inc (Atlanta, GA), Beckman Coulter Genomics (Danvers, MA), or the University of Iowa Genomics Core (Iowa City, IA). The fastq files were quality-checked and processed to vcf format as described 73 .
Variants for all samples were called together using GATK's HaplotypeCaller 74 and were recalibrated and hard-filtered in GATK as described 73 . Variants were annotated with minor allele frequencies (MAFs) from 1000 Genomes 75 , ExAC and gnomAD 48 , and with information regarding the effect of each variant using SnpSift/SnpEff 76 . The databases used for annotation were dbNSFP2.9 77 (for MAFs) and GRCh37.75 (for protein effect prediction).

Ancestry determination

Ancestry was determined for each CRMO proband using the LASER software package 78 . A vcf file including ten probands at a time was uploaded to the LASER server and the TRACE analysis was selected using the Worldwide panel. For probands with indeterminate ancestry using the Worldwide panel, the European and Asian panels were used. Principal component values for each proband were plotted using R Statistical Software and the code provided in the LASER package.

MLKL variant quantification

1000 Genomes: vcf files from 1000 Genomes were annotated and filtered as described previously 79 . Values for the MLKL variants rs35589326 (S132P), rs34515646 (R146Q) and rs144526386 (G202V), as well as all MLKL coding variants, were queried and tabulated for allele and genotype counts for participants of all ancestries (n = 2504) and for those of European ancestry (n = 503). Compound heterozygous variants were evident due to the phasing of all variants in the 1000 Genomes dataset. CRMO: Allele and genotype counts for all MLKL coding variants were tabulated in probands of European ancestry (n = 101) and for all probands (n = 128). Compound heterozygous variants were identified using parental sequence data. AS: DNA from all subjects in the AS cohort was genotyped using the Illumina CoreExome chip following standard protocols at the Australian Translational Genomics Centre, Princess Alexandra Hospital, Brisbane.
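The allele- and genotype-count tabulation and the phasing-based compound-heterozygote check described above can be sketched as follows; the sample genotypes are toy values, not calls from 1000 Genomes or the CRMO cohort:

```python
from collections import Counter

# Toy phased genotypes (haplotype1|haplotype2) for one biallelic variant
genotypes = {"S1": "0|0", "S2": "0|1", "S3": "1|0", "S4": "1|1", "S5": "0|0"}

def tabulate(genotypes):
    """Return the alternate-allele count, total allele count, and
    genotype counts (phase-normalised so 0|1 and 1|0 are pooled)."""
    alt = sum(gt.count("1") for gt in genotypes.values())
    total = 2 * len(genotypes)
    geno = Counter("/".join(sorted(gt.split("|"))) for gt in genotypes.values())
    return alt, total, geno

def compound_het(gt_a, gt_b):
    """With phased biallelic calls, a compound heterozygote carries one
    alternate allele of each of two variants on opposite haplotypes."""
    a, b = gt_a.split("|"), gt_b.split("|")
    is_het = lambda g: sorted(g) == ["0", "1"]
    return is_het(a) and is_het(b) and a != b

alt, total, geno = tabulate(genotypes)
```

With unphased data (as in the CRMO cohort) the same call cannot be made from genotypes alone, which is why the text describes using parental sequence to resolve compound heterozygotes there.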
Bead intensity data were processed and normalised for each sample, and genotypes were called using the Illumina GenomeStudio software. All the samples listed in the table passed the quality control process 80 . GB: Genotyping was performed in an ISO15189-accredited clinical genomics facility, the Australian Translational Genomics Centre (ATGC), Queensland University of Technology. All samples were genotyped on the Illumina HumanOmniExpress (OmniExpress) BeadChip 81 . QUT controls: A collection of healthy control data of verified European ancestry from various cohort studies, compiled by the Translational Genomics Group, QUT, and typed on an Illumina CoreExome microarray. This includes data from The UK Household Longitudinal Study, led by the Institute for Social and Economic Research at the University of Essex and funded by the Economic and Social Research Council. The survey was conducted by NatCen and the genome-wide scan data were analysed and deposited by the Wellcome Trust Sanger Institute. University of Essex, Institute for Social and Economic Research, NatCen Social Research, Kantar Public. Understanding Society: Waves 1–8, 2009–2017 and Harmonised BHPS: Waves 1–18, 1991–2009 [data collection]. 11th Edition. UK Data Service (2018).
Patient recruitment

All genomic data were derived from patients recruited with consent as described previously 48 , 80 , 81 , and with the approval of the human ethics review boards of all institutes that participated in the human genetics studies: University of Iowa Carver College of Medicine, Queensland University of Technology, Australian National University, Shanghai Renji Hospital, JiaoTong University of Shanghai, The Hospital for Sick Children and the University of Toronto, University of Sydney, Australian Institute of Sport, University of Freiburg, Princess Alexandra Hospital, Memorial Hermann Texas Medical Centre, The University of Queensland, Oregon Health and Science University, Groupe Français d'Etude Génétique des Spondylarthrites (GFEGS) and the University of Oxford.

Statistical analysis (human data)

Statistical comparisons were performed at the level of allele frequency or at the level of compound heterozygote sample frequency, using either a Fisher's exact test or a chi-squared test with Yates correction as specified under each table. Compound heterozygous variants were quantified and compared at the individual rather than the allelic level, where individuals with and without qualifying variants were compared.

Web resources

gnomAD; OrthoDB; CADD; Clustal Omega; WEBLOGO; Missense Tolerance Ratio (MTR) Gene Viewer; UK Biobank; Understanding Society.

Reporting summary

Further information on research design is available in the Nature Research Reporting Summary linked to this article.

Data availability

The biological tools generated for MLKL during this study are available from the corresponding authors on reasonable request. MLKL gene variants in CRMO can be accessed from Harvard Dataverse, V1 [ ]. The source data underlying Figs. 1a, d, e–g, 2a–f, 3a–k, 4a–f, 5c, d and Supplementary Figs. 1a, b, 2a–h, 3a, c, d–f, 4b, d–j are provided as a Source Data file. Source data are provided with this paper.
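The 2 × 2 allele-frequency comparison described under "Statistical analysis (human data)" can be sketched with a hand-rolled Yates-corrected chi-squared statistic (one of the two tests named in the text). The counts below are invented for illustration, not the study's data:

```python
def chi2_yates(table):
    """Chi-squared statistic with Yates continuity correction for a
    2x2 table [[a, b], [c, d]], e.g. minor/major allele counts in
    cases (top row) versus controls (bottom row)."""
    (a, b), (c, d) = table
    n = a + b + c + d
    numerator = n * (abs(a * d - b * c) - n / 2) ** 2
    denominator = (a + b) * (c + d) * (a + c) * (b + d)
    return numerator / denominator

# Hypothetical allele counts: 30/170 in cases, 60/940 in controls
chi2 = chi2_yates([[30, 170], [60, 940]])
# A statistic above 3.84 (the 5% critical value at 1 d.f.) would
# indicate significantly different allele frequencies in this
# invented example.
```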
Walter and Eliza Hall Institute researchers have made significant advances in understanding the inflammatory cell death regulatory protein MLKL and its role in disease. In a trio of studies published today in the journal Nature Communications, the team used advanced imaging to visualize key steps in the activation of MLKL, revealing previously unseen details about how this protein drives an inflammatory form of cell death called necroptosis. They also showed for the first time that inherited variants of MLKL are connected to a human inflammatory disease. By examining sequence variations in human MLKL and comparing the structure of different animals' MLKL proteins, the team also provided evidence for MLKL having been subject to evolutionary pressures, potentially through its role in protecting against infections. The multidisciplinary research was led by Dr. Andre Samson, Dr. Joanne Hildebrand, Dr. Maria Kauppi, Ms Katherine Davies, Associate Professor Edwin Hawkins, Associate Professor Peter Czabotar, Professor Warren Alexander, Professor John Silke and Associate Professor James Murphy.

Understanding inflammatory cell death

Cell death is a way that the body protects itself from diseases by removing unwanted or dangerous cells. In some situations—such as viral or bacterial infections—dying cells trigger inflammation to protect neighboring cells from the infection. This form of cell death is called 'necroptosis', and is tightly controlled by specific proteins within cells. Associate Professor James Murphy said the protein MLKL was an important regulator of necroptosis. "While MLKL and necroptosis protect our bodies from infections, excessive necroptosis has been linked with inflammatory conditions such as inflammatory bowel diseases," he said. "Our research team has taken several complementary approaches to better understand how MLKL functions—which could improve the understanding and treatment of diseases involving excessive necroptosis." One study, led by Dr.
Andre Samson, used advanced imaging technologies to watch the MLKL protein in cells as they underwent necroptosis. Dr. Samson said this identified two important 'checkpoints' in necroptosis. "We could see how MLKL changed its location as necroptosis occurred, clumping and migrating to different parts of the cell as the cell progressed towards death," he said. "Intriguingly, we could see activated MLKL gather at the junctions between neighboring cells—potentially suggesting a way for one dying cell to trigger necroptosis in surrounding cells, which could be a form of protection against infections."

Walter and Eliza Hall Institute researchers have used lattice light sheet microscopy to visualise cells dying by necroptosis, a form of inflammatory cell death. Credit: Walter and Eliza Hall Institute (adapted from video published in Samson et al, Nature Communications)

Role of MLKL in inflammatory diseases

Dr. Joanne Hildebrand and Dr. Maria Kauppi examined links between alterations in the MLKL protein and inflammatory conditions. Dr. Hildebrand said Institute researchers isolated a variant of MLKL that caused a lethal inflammatory condition in laboratory models. "We discovered this form of MLKL contained a single mutation in a particular region of the protein that made MLKL hyperactive, triggering necroptosis and inflammation," she said. "By searching genome databases, we discovered similar variants in the human MLKL gene are surprisingly common—around ten per cent of human genomes from around the world carry altered forms of the MLKL gene that result in a more-easily activated, more inflammatory version of the protein." The team speculated that the pro-inflammatory variant of MLKL might be associated with inflammatory diseases. "We looked more closely at databases of genomes of people with inflammatory diseases to understand the prevalence of MLKL variants.
Indeed, people with an autoinflammatory condition, chronic recurrent multifocal osteomyelitis (CRMO), were much more likely to carry two copies of a pro-inflammatory variant of the MLKL gene than people without an inflammatory disease. This is the first time changes in MLKL have been associated with a human inflammatory disease," Dr. Hildebrand said.

Evolutionary pressure on MLKL

Dr. Hildebrand said the high frequency of MLKL variants in humans around the world suggested that the more inflammatory variants of the protein might have offered an evolutionary benefit at some point in human history. "Perhaps having a more inflammatory form of MLKL meant some people could survive infectious diseases better than those people who only had the less-easily activated form of the protein," she said. In a separate paper, Ms Katherine Davies led research investigating the three-dimensional structure of MLKL in different vertebrate species, using the Australian Synchrotron and CSIRO Collaborative Crystallisation Centre. Dr. Davies said usually when one protein is found in different vertebrate species, the proteins in the different species have a similar structure that has been conserved during evolution. "To our surprise, the structures of MLKL were quite different between different vertebrate species—even between closely related species such as rats and mice. In fact, rat MLKL is so different from mouse MLKL that the rat protein cannot function in mouse cells—which is surprising as many proteins are interchangeable between these two species," Dr. Davies said. "We think that evolutionary pressures such as infections may have driven substantial changes in MLKL as vertebrates evolved. Animals with variant forms of MLKL may have been able to survive some pressures better than other animals, driving changes in MLKL to accumulate much faster than for many other proteins.
"Together with the data for human variations in MLKL, this suggests MLKL is critical for cells to balance beneficial inflammation, which protects against infections, with harmful inflammation that causes inflammatory diseases," Dr. Davies said.

Long-term research yields rewards

Associate Professor James Murphy said the team's research started through studying the inflammatory variant of MLKL more than 13 years ago—at a time when MLKL's role in necroptosis was not known. "Our most recent discoveries, made by a multidisciplinary research team, have provided a massive advance to the field of necroptosis, adding substantial detail to our understanding of MLKL. This will provide an enormous boost to a range of research into inflammatory diseases. Our team and others are already working to develop new medicines that could temper MLKL-driven inflammation, which we hope could be a new approach to treating a range of inflammatory diseases," Associate Professor Murphy said.
10.1038/s41467-020-16819-z
Earth
New research shows protective value of mangroves for coastlines
Cheryl L. Doughty et al. Impacts of mangrove encroachment and mosquito impoundment management on coastal protection services, Hydrobiologia (2017). DOI: 10.1007/s10750-017-3225-0 Journal information: Hydrobiologia
http://dx.doi.org/10.1007/s10750-017-3225-0
https://phys.org/news/2017-06-mangroves-coastlines.html
Abstract

The ecosystem services afforded by coastal wetlands are threatened by climate change and other anthropogenic stressors. The Kennedy Space Center and Merritt Island National Wildlife Refuge in east central Florida offer a representative site for investigating how changes to vegetation distribution interact with management to impact coastal protection. Here, salt marshes are converting to mangroves, and mosquito impoundment structures are being modified. The resulting changes to vegetation composition and topography influence coastal protection services in wetlands. We used a model-based assessment of wave attenuation and erosion to compare vegetation (mangrove, salt marsh) and impoundment state (intact, graded). Our findings suggest that the habitat needed to attenuate 90% of wave height is significantly larger for salt marshes than mangroves. Erosion prevention was significantly higher (470%) in scenarios with mangroves than in salt marshes. Intact berms attenuated waves over shorter distances, but did not significantly reduce erosion. Differences in coastal protection were driven more by vegetation than by impoundment state. Overall, our findings reveal that mangroves provide more coastal protection services, and therefore more coastal protection value, than salt marshes in east central Florida. Other coastal regions undergoing similar habitat conversion may also benefit from increased coastal protection in the future.

Introduction

Many of the world's coastal systems are threatened by climate change and anthropogenic development. Climate change impacts are prevalent in coastal systems where increasing temperatures, rising seas, and coastal storm events alter ecosystem structure, function, and resiliency (Day et al., 2008 ; Wong et al., 2014 ).
In addition, climate change induces secondary impacts, for example, in the form of shifts in species’ ranges (Parmesan & Yohe, 2003 ; Walther, 2004 ). For coastal ecosystems, this phenomenon is exemplified by the poleward expansion of mangroves into predominantly salt marsh ecosystems (Saintilan et al., 2014 ). Although the mechanisms which drive or limit mangrove expansion have been well studied globally (Saintilan et al., 2014 ; Osland et al., 2016a ), less is known about the ecosystem service impacts that may arise from such dramatic habitat conversion. Climate change impacts may be amplified in coastal areas where humans both impact and depend on ecosystem services a great deal. In general, coastal wetlands are regarded as one of the most economically valuable ecosystems (de Groot et al., 2012 ). As of late, there is increasing interest in the potential of natural ecosystems, or “green infrastructure,” to provide protective services (Gómez-Baggethun & Barton, 2013 ; Lovell & Taylor, 2013 ; Saleh & Weinstein, 2016 ). Natural coastal ecosystems prevent damage and shield coastal communities from waves and storm surge associated with storm events (Costanza et al., 2008 ; Gedan et al., 2011 ; Arkema et al., 2013 ; Guannel et al., 2016 ). However, habitat conversion and coastal development is changing the structure and function of coastal ecosystems, which will alter their capacity to deliver these services. The protective services provided by coastal systems are determined by several scale-dependent factors. At the local scale (meters to 100s of meters), wetland vegetation protects coastlines by attenuating waves, reducing erosion, and promoting sediment deposition (Gedan et al., 2011 ). These benefits are provided by physical plant structures that induce wave-breaking and dampen wave energy through flow separation and friction (Koch et al., 2009 ; Gedan et al., 2011 ; Duarte et al., 2013 ). 
The coastal protection capabilities of salt marsh and mangrove vegetation can vary greatly due to individual plant characteristics such as morphology, biomass, stem rigidity, and structural complexity, which differentially affect wave attenuation, sediment accretion, and erosion prevention (Koch et al., 2009 ; Duarte et al., 2013 ). Vegetation characteristics at the landscape level also play a role in coastal protection. For example, greater stem densities and stand size have been shown to increase wave attenuation and shoreline stabilization in both mangroves (Alongi, 2008 ) and salt marshes (Shepard et al., 2011 ). However, the ability of vegetation to stabilize the coast can vary non-linearly with habitat size and is also dependent on physical factors like coastal geomorphology and topography (Alongi, 2008 ; Koch et al., 2009 ; Shepard et al., 2011 ; Duarte et al., 2013 ). Due to the high spatial variability in the factors influencing coastal protection services, additional site-specific investigations and comparative studies are needed to fill existing data gaps and to improve how coastal protection services are evaluated (Woodward & Wui, 2001 ; Koch et al., 2009 ). Evidence that coastal habitats provide protective services has been shown through field assessments, laboratory investigations, and modeling approaches for a number of vegetation types and geomorphic settings (Wamsley et al., 2010 ; Gedan et al., 2011 ; Shepard et al., 2011 ; Duarte et al., 2013 ; Barbier, 2016 ). However, there have not been any studies that have compared coastal protection capabilities of mangroves and salt marsh in the same location and geomorphic setting. As a result, these studies may have limited application to ecotonal regions where mangrove and salt marsh habitats converge. 
Here, mangrove morphologies diverge from tropical norms (Morrisey et al., 2010 ), habitat distributions are dynamic (Saintilan & Rogers, 2015 ; Osland et al., 2016b ), and anthropogenic influence is often high (Halpern et al., 2008 ; Crain et al., 2009 ). In recent decades, mangrove abundance in coastal wetlands along the eastern coast of Florida has been increasing and mangrove range limits have been shifting northward, in concert with global trends (Osland et al., 2013 ; Cavanaugh et al., 2014 , 2015 ; Williams et al., 2014 ; Rodriguez et al., 2016 ). In addition to salt marsh-to-mangrove conversion, the wetlands of this region are impacted by changing management regimes regarding mosquito control and hydrology (Rey & Kain, 1989 ). The large-scale implementation of mosquito impoundment structures in the coastal counties of eastern Florida dates back to the 1950s (Brockmeyer et al., 1996 ). A mosquito impoundment is a wetland that has been diked to allow for the management of water levels, which control salt marsh mosquito populations (Rey & Kain, 1989 ). Many of these impoundments have since been modified or restored following negative impacts to vegetation and fish communities (Brockmeyer et al., 1996 ; Rey et al., 2012 ). More recent restoration efforts aim to completely remove impoundments by grading and backfilling berm material into perimeter ditches created during impoundment construction (Rey et al., 2012 ). In addition to active restoration efforts, mosquito impoundment berms that are not maintained will erode naturally over time due to rain and wave action, which may be hastened by rising coastal water levels and oceanic storms associated with ongoing global climate change. Both active removal and natural degradation of impoundment berms return the modified topography to a state similar to that of a natural wetland (Rey et al., 2012 ). 
Wave and storm surge attenuation are dependent on local bathymetry and topography, which are altered by features such as raised elevation berms (Resio & Westerink, 2008 ; Wamsley et al., 2010 ; Saleh & Weinstein, 2016 ). The energy and height of incoming waves is determined by near-shore slope and water levels (Dean & Bender, 2006 ; Resio & Westerink, 2008 ). Once onshore, waves can break at greater heights on steeper slopes and at smaller heights on milder slopes (Dean & Bender, 2006 ). Therefore, intact berms may intercept waves of greater heights with increased energy. Furthermore, incoming wave energy is positively correlated with erosion rates (Wamsley et al., 2009 ). The underlying topography of coastal ecosystems therefore influences the ability of vegetation at the shoreline to attenuate waves and prevent erosion. Florida’s Merritt Island National Wildlife Refuge (MINWR) and Kennedy Space Center (KSC) provide an opportunity to conduct a site-specific ecosystem service assessment in a salt marsh–mangrove ecotone. At this location, we conducted a comparative study of the coastal protection services of two dominant vegetation types (mangroves and marshes) in the same geomorphic setting. We assessed coastal protection for both mangrove and salt marsh vegetation by modeling wave attenuation and avoided erosion for two scenarios of mosquito impoundment berm state (intact berms and graded berms). In addition, we parameterized our models with local, field-based measurements of vegetation structure we collected for both mangroves and salt marsh. Modeling scenarios (vegetation type + berm state) were used to test our hypotheses that (1) mangroves will provide more coastal protection services than salt marsh, and (2) low-grade topographies characteristic of natural wetlands will increase the ability of either vegetation type to attenuate waves and prevent erosion. 
Materials and methods

Site description

KSC and the overlying MINWR provide an ideal study site for the investigation of the impacts of mangrove expansion and mosquito impoundment on coastal protection (Fig. 1). The wildlife refuge, located on the Cape Canaveral Barrier Island Complex, is surrounded by three estuarine water bodies: the Mosquito Lagoon, the Banana River Lagoon, and the Indian River Lagoon (IRL). The lagoon system near the study site is micro-tidal due to the physical separation from the ocean and the small size and distance of the two encompassing inlets: Ponce de Leon Inlet (North) and Sebastian Inlet (South) (Smith, 1983, 1993). The region is an ecologically and economically valuable area—it contains a number of national assets and exists in one of the most diverse estuaries in North America (Mikkelsen & Cracraft, 2001 ; Hall et al., 2014) with more than 2,200 different species of animals and 2,100 species of plants (St. Johns River Water Management District (SJRWMD)). This area also has the highest number of threatened and endangered species on federal property in the contiguous US (Breininger et al., 1994 ; NASA, 2010). In addition to KSC and MINWR, there are two other federal facilities in the area: the Cape Canaveral Air Force Station and the Canaveral National Seashore (CNS). The region has roughly 11 billion USD of national assets for access to space (Hall et al., 2014). Tourism in the area, associated with KSC, MINWR, and CNS, real estate, and the natural resources of the IRL and its watershed, produced 3.7 billion USD annually in benefits to the regional economy (Hazen and Sawyer, 2008).

Fig. 1 The Merritt Island National Wildlife Refuge (MINWR), FL, USA. Colored areas represent 2010 distributions of salt marsh (green) and mangrove (orange). The coastal field sites are shown as black squares and the modeling sites are shown as black circles.
Impoundment berms (dark gray) shown here represent secondary NASA infrastructure adjacent to wetland areas. Major waterways within MINWR are labeled. NOAA/USGS stations providing long-term verified water level data are also shown (yellow).

Average water levels within the IRL range from −0.3 to 0.0 m with seasonal and annual variation (Fig. 1 in Appendix—Electronic Supplementary Material). Regional sea level and lagoon level have steadily increased since 1996, and since 2009 water level rise appears to be accelerating in accordance with other areas of the US east coast (Yi et al., 2015), though additional years of monitoring are needed to validate this trend (Lewandowsky et al., 2016). Differences between sea level and lagoon levels can be attributed to direct and indirect inputs of rainfall through deposition, runoff, and groundwater seepage (Hall et al., 2001). National Oceanic and Atmospheric Administration (NOAA) reports of historical hurricane tracks indicate that MINWR and the surrounding area have experienced 94 tropical storms from 1852 to 2012 (Table 1).

Table 1 Datasets used in this study

MINWR is currently contained within the salt marsh–mangrove ecotone of Florida (28.3311°N–30.2333°N), which is undergoing rapid climate-driven vegetation changes (Cavanaugh et al., 2014 ; Doughty et al., 2016). Here, the wetland vegetation exists largely in a dynamic mosaic of monospecific patches across the landscape (e.g., Fig. 2a). Focal wetland species found in MINWR include three mangrove species (Rhizophora mangle, Avicennia germinans, and Laguncularia racemosa) and four salt marsh species (Spartina bakeri, Distichlis spicata, Batis maritima, and Sarcocornia ambigua) (Poppleton et al., 1977 ; Schmalzer, 1995). Wetland soils within MINWR consist of organic debris and/or silty clays over sand and irregularly stratified mixed sand and shell (Huckle et al., 1974 ; Schmalzer et al., 2001). Fig.
2 Diagram of modeling scenarios and elevational cross-shore profiles. Top panel provides a representation of three model scenarios for a modeling site location in MINWR (black circle). Habitat distributions are shown for mangroves (orange) and salt marsh (green). The cross-shore profile (solid line) shown in the top panel corresponds to the elevational profile (gray line) in the bottom panel. Impoundment berm (black) scenarios are shown in the profile for intact (gray) and graded (red) berms.

Field-based assessment of vegetation characteristics

Vegetation characteristics (described below) were measured for the focal plant species at three field sites located within MINWR and a fourth field site to the south in the Pine Island Conservation Area (Fig. 1 ; Table 1 in Appendix—Electronic Supplementary Material). These measurements were used to parameterize the coastal protection models for this specific study area. At each field site, we established a set of three 1 m^2 plots within each of four focal salt marsh species stands; overall, we established 10 sets of plots for salt marsh species (n sets = 10; n plots = 30). For mangroves, we established a set of three 9 m^2 plots within each of three mangrove species stands; overall, we established nine sets of plots for mangrove species (n sets = 9; n plots = 27). Due to the mosaic distribution of species at the field sites, sampling plots were spatially random and not all focal species were available at each site. Varying plot sizes were used to measure vegetation characteristics at an appropriate scale for each vegetation class. In salt marsh plots, we quantified the following vegetation characteristics: stem density (m^-2), average stem height (m), and average stem diameter (mm). In mangrove plots, we quantified the density (m^-2), average height (m), and average diameter (cm) of mangrove stems (trunks) and roots.
The quantification of root system characteristics included prop roots for R. mangle and pneumatophores for A. germinans. Vegetation characteristics measured in situ captured the broad range of species and morphologies present within each vegetation class. To address the variation of vegetation characteristics, we used the MINWR site-wide average of each characteristic for each vegetation class in the coastal protection models.

Coastal protection modeling

Site selection

Twenty points within the study area were selected as modeling sites (Fig. 1). The existing vegetation at these sites spans both marsh- and mangrove-dominated habitats and many sites include both habitats across the cross-shore profile (e.g., Fig. 2a). Additional selection criteria included position at the land–water border and proximity to large waterways. We excluded sites near small canals or tidal creeks from the analysis because water depths were insufficient for modeling wave evolution (described below). Points were dispersed latitudinally to ensure a randomized sampling of wetlands throughout MINWR.

Model scenarios

We parameterized our model using data on wetland habitat distribution and topography that were developed specifically for MINWR. Habitat distribution and elevation (NAVD88 m) data were provided in the form of a 2010 land cover map (1 m) and a seamless DEM topobathy map (3 m) with a vertical accuracy of ±0.2 m (Table 1). The land cover map is the product of 25 years of aerial photo interpretation and ground-truthing. The DEM was derived from three datasets. The Florida Department of Emergency Management and NOAA collected LiDAR data for Brevard County and Volusia County between 2006 and 2008 for estimation of terrestrial elevations.
The NOAA coastal relief models DEM (90 m) was utilized for offshore areas in the Atlantic Ocean and lagoon bathymetry was provided by the SJRWMD as an interpolated layer generated from depth soundings taken at 15.2-m intervals spaced 150–300 m apart for >23,000 measurements (Steward et al., 2005 ). All processing and analysis of land cover and elevation data were performed using ArcGIS v10.3 (ESRI, Redlands, CA). The land cover and elevation datasets provided the most up-to-date habitat distributions and topography in MINWR. To address our hypotheses, we modified these datasets to simulate changes to habitat distributions and impoundment state. For the vegetation type scenarios, we first selected all wetland classes (mangrove, salt marsh, wetland scrub–shrub) in the 2010 land cover map and simulated the conversion of all wetlands to a single class, i.e., all wetlands were regarded as either salt marsh or mangrove (Fig. 2 ). We modeled 100% habitat conversion rather than a series of step-wise proportions to provide a bookended comparison of coastal protection in two habitat types. Note that the model used in this study does not aim to model predicted vegetation distributions. Rather, we are comparing wetland classes (mangroves and salt marsh) in a common geomorphic setting. Impoundment state scenarios were developed by modifying the elevation data provided by the topobathy DEM. For each of the modeling sites, we created a cross-shore profile perpendicular to the shoreline which spanned from the inland extent of the wetland habitat to 100 m offshore. Elevation data were extracted along the cross-shore profile at an interval of 1 m using ArcGIS v10.3. In addition, each point in the elevation profile was attributed with impoundment presence (Y or N) and habitat type (mangrove or salt marsh; Fig. 2 ). To simulate the “graded impoundment” scenario, we manually edited the section of the elevational profile where impoundment berms were detected. 
“Graded impoundments” were set to an elevation of 0.09 m to be consistent with target elevations used in recent restoration efforts conducted by the St. John’s Water Management District (e.g., United States Fish and Wildlife Service, 1999). We modeled wave attenuation and erosion reduction at each of the modeling sites (n = 20) for a total of four experimental modeling scenarios: (1) Mangroves + Intact Berm, (2) Mangroves + Graded Berm, (3) Salt Marsh + Intact Berm, and (4) Salt Marsh + Graded Berm (Fig. 2b, c). To provide a baseline for control, we also modeled wave attenuation and erosion reduction for the current, mixed salt marsh and mangrove distributions over unmodified elevational and graded profiles (Fig. 2a). Model parameters which varied among modeling sites include near-shore bathymetry, onshore topography, berm height, and vegetation distribution. In all, we ran six model scenarios at each modeling site (n = 20) for a total of 120 model runs.

Wave attenuation

The modeling framework used in this study was largely based on the Natural Capital Project’s Integrated Valuation of Ecosystem Services and Tradeoffs (InVEST) model for “Wave Attenuation & Erosion Reduction: Coastal Protection” (Sharp et al., 2016). Guannel et al. (2015) provide a more detailed description of the development of the InVEST tool, including the selection of wave models and the definition of model parameters. The pre-packaged InVEST model was not used in order to better customize coastal protection modeling for our specific study site. Wave attenuation for each of the model runs was determined using a wave evolution model which incorporates dissipation due to wave breaking and vegetation (Guannel et al., 2015).
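The experimental design above (four single-vegetation × berm-state scenarios plus a mixed-vegetation control over both berm states, at each of 20 sites) can be enumerated as a sketch:

```python
from itertools import product

sites = range(1, 21)                     # 20 modeling sites
vegetation = ["mangrove", "salt_marsh"]  # simulated single-class conversions
berm_states = ["intact", "graded"]

# The four experimental scenarios at every site
experimental = [{"site": s, "veg": v, "berm": b}
                for s, v, b in product(sites, vegetation, berm_states)]

# Baseline control: the current mixed distribution over both berm states
control = [{"site": s, "veg": "mixed", "berm": b}
           for s, b in product(sites, berm_states)]

runs = experimental + control  # six scenarios per site, 120 runs in total
```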
First, wave energy evolution is calculated along the cross-shore profile as $$\frac{\partial}{\partial x}\left[\frac{1}{8}\rho g C_{\text{g}} H^{2}\right] = -D,$$ (1) where ρ is the water density, g is the gravitational acceleration, H is the wave height, C g is the wave group velocity, and x is the cross-shore distance. Equation 1 states that the cross-shore gradient of the wave energy flux is balanced by wave dissipation (D). Water level data for the enclosed lagoon and open ocean surrounding MINWR were obtained from the nearest available NOAA and USGS tide stations (Fig. 1; Fig. 1 in Appendix—Electronic Supplementary Material). NOAA-verified water level data were used to calculate average changes in water levels within MINWR that occurred during historical major storm events (Table 1). Based on these data, we selected a conservative wave height of 0.5 m, which is expected to be lower in a protected, enclosed lagoon than along high-energy coastal environments. D represents the dissipation of wave energy, which is the summation of dissipation caused by wave breaking and vegetation: $$D = D_{\text{break}} + D_{\text{veg}}.$$ (2) For the purpose of this comparative modeling effort, we excluded dissipation due to bottom friction by assuming low turbidity over smooth, muddy substrates void of corals (Guannel et al., 2015). Dissipation due to wave breaking (D break) was determined using the wave transformation models developed by Alsina & Baldock (2007): $$D_{\text{break}} = A\frac{H^{3}}{h}\left[\left(\left(\frac{H_{\text{b}}}{H}\right)^{3} + \frac{3H_{\text{b}}}{2H}\right)\exp\left(-\left(\frac{H_{\text{b}}}{H}\right)^{2}\right) + \frac{3\sqrt{\pi}}{4}\left(1 - \text{erf}\left(\frac{H_{\text{b}}}{H}\right)\right)\right],$$ (3) where A is the sediment scale factor (Sharp et al., 2016), h is the local water depth, and erf is the Gauss error function (Alsina & Baldock, 2007).
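The breaking-dissipation term in Eq. 3 translates almost directly into code. The sketch below is a minimal Python rendering of that formula; the sediment scale factor value used here is an illustrative assumption, not a value from the paper.

```python
import math

def d_break(H, h, Hb, A=0.1):
    """Wave-breaking dissipation, Eq. 3 (Alsina & Baldock, 2007).

    H  : local wave height (m)
    h  : local water depth (m)
    Hb : maximum wave height before breaking (m), from Eq. 4
    A  : sediment scale factor -- the default here is an assumed value
    """
    r = Hb / H
    bracket = ((r**3 + 1.5 * r) * math.exp(-(r**2))
               + 0.75 * math.sqrt(math.pi) * (1.0 - math.erf(r)))
    return A * (H**3 / h) * bracket
```

As expected from the exponential and error-function terms, dissipation shrinks rapidly once the local wave height is well below the breaking height Hb.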
H b is the maximum wave height at which breaking occurs and is determined by $$H_{\text{b}} = \frac{0.88}{k}\tanh \left( {\gamma \frac{kh}{0.88}} \right),$$ (4) where k is the wavenumber determined by wavelength ( L ): $$k = \frac{2\pi }{L},$$ (5) and the breaking index value ( γ ) is calculated as follows (Battjes & Stive, 1985 ): $$\gamma = 0.5 + 0.4\tanh \left( {33\frac{{H_{\text{o}} }}{{L_{\text{o}} }}} \right),$$ (6) where H o and L o are the wave height and wavelength at the deepest point of the cross-shore profile. Dissipation due to the presence of vegetation was estimated using the empirical model developed by Mendez & Losada ( 2004 ): $$D_{\text{veg}} = \frac{1}{2\sqrt \pi }\rho NdC_{\text{d}} \left( {\frac{kg}{2\sigma }} \right)^{3} \frac{{\sinh^{3} k\alpha h + 3\sinh k\alpha h}}{{3k\cosh^{3} kh}}H^{3} ,$$ (7) where N is the vegetation stem density and d is the vegetation stem diameter. The model parameter α represents the fraction of the water column occupied by vegetation; because the vegetation in question is emergent, α was set to 1. The drag coefficient ( C d ) used here is depth averaged and taxa specific (Mendez & Losada, 2004 ; Sharp et al., 2016 ). Default values for drag coefficients were used for salt marshes ( C d = 0.01) and mangroves ( C d = 1) (see Pinsky et al., 2013 ; Guannel et al., 2015 ; Sharp et al., 2016 ). For mangroves, D veg represents the sum of wave dissipation from trunk and root components ( D veg(mangrove) = D trunk + D roots ). Model inputs of wave conditions and sediment characteristics were held constant for all modeling locations in order to compare coastal protection services between vegetation types and differing impoundment elevation profiles. Wave attenuation was calculated as the reduction in wave height resulting from wave breaking and the presence of vegetation for the onshore portion of the elevational profile for each model run. 
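Equations 4–7 chain together: the wavelength fixes the wavenumber, deep-water steepness fixes the breaking index, and the vegetation term scales linearly with stem density, diameter, and drag. A minimal sketch, with assumed wave parameters in the docstring example (the numeric inputs below are illustrative, not the paper's):

```python
import math

RHO, G = 1025.0, 9.81  # seawater density (kg m^-3), gravity (m s^-2)

def breaking_height(h, L, Ho, Lo):
    """Maximum wave height before breaking, Eqs. 4-6."""
    k = 2 * math.pi / L                           # wavenumber (Eq. 5)
    gamma = 0.5 + 0.4 * math.tanh(33 * Ho / Lo)   # breaking index (Eq. 6)
    return (0.88 / k) * math.tanh(gamma * k * h / 0.88)

def d_veg(H, h, L, sigma, N, d, Cd, alpha=1.0):
    """Vegetation-induced dissipation, Eq. 7 (Mendez & Losada, 2004).

    N, d  : stem density (m^-2) and stem diameter (m)
    Cd    : depth-averaged drag coefficient (0.01 marsh, 1 mangrove in the text)
    alpha : fraction of the water column occupied (1 for emergent vegetation)
    """
    k = 2 * math.pi / L
    shape = (math.sinh(k * alpha * h) ** 3 + 3 * math.sinh(k * alpha * h)) / \
            (3 * k * math.cosh(k * h) ** 3)
    return (RHO * N * d * Cd / (2 * math.sqrt(math.pi))) * \
           (k * G / (2 * sigma)) ** 3 * shape * H ** 3
```

For mangroves, the text sums separate trunk and root contributions, which in this sketch amounts to calling `d_veg` twice with the respective density and diameter values and adding the results.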
Model outputs for all scenarios were compared using a two-way analysis of variance (ANOVA) in the R statistical software v3.2.3 (R Foundation for Statistical Computing, Vienna, Austria). Comparisons of vegetation type and berm state were tested using two-way paired t tests. Data were transformed when assumptions of normality and homogeneity were not met. Non-parametric statistical analyses were used as needed. Erosion prevention Output from the wave attenuation model was used to calculate the amount of erosion caused by waves on the onshore portion of the elevational profile. First, we calculated wave run-up (R 2) as determined by the empirical model developed by Stockdon et al. (2006): $$R_{2} = 1.1\left(0.35m\sqrt{H_{\text{p}} L_{\text{o}}} + 0.5\sqrt{0.563m^{2} H_{\text{p}} L_{\text{o}} + 0.004H_{\text{p}} L_{\text{o}}}\right),$$ (8) which provides an estimate of the maximum onshore distance that waves can achieve over inundated lands (Sharp et al., 2016). Here, m is the foreshore slope and H p is the offshore wave height (0.5 m). Next, erosion was calculated as the hourly rate of scour (E m; cm h −1) following Whitehouse et al. (2000): $$E_{\text{m}} = \left\{ \begin{array}{ll} \dfrac{36(\tau_{\text{o}} - \tau_{\text{e}})m_{\text{e}}}{C_{\text{M}}}, & \text{if}\ \tau_{\text{o}} - \tau_{\text{e}} > 0 \\ 0, & \text{if}\ \tau_{\text{o}} - \tau_{\text{e}} \le 0, \end{array} \right.$$ (9) where m e is an erosion constant (0.0001 m s −1) and C M is the dry density of the substrate (70 kg m −3); default values were used following Sharp et al. (2016). The erosion shear stress constant (τ e) is calculated as $$\tau_{\text{e}} = E_{1} C_{\text{M}}^{E_{2}},$$ (10) where E 1 = 5.42e −6 and E 2 = 2.28 are the coefficients determined by Sharp et al. (2016) following Whitehouse et al. (2000).
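The run-up expression in Eq. 8 is simple enough to sketch directly; the slope and wavelength values in the example call are assumptions for illustration, not site values from the study.

```python
import math

def wave_runup(m, Hp, Lo):
    """Wave run-up R2, Eq. 8 (Stockdon et al., 2006).

    m  : foreshore slope (dimensionless)
    Hp : offshore wave height (m); the study uses 0.5 m
    Lo : deep-water wavelength (m) -- assumed here for illustration
    """
    HL = Hp * Lo
    return 1.1 * (0.35 * m * math.sqrt(HL)
                  + 0.5 * math.sqrt(0.563 * m**2 * HL + 0.004 * HL))

# Example (assumed inputs): steeper foreshores produce larger run-up.
r_gentle = wave_runup(0.01, 0.5, 20.0)
r_steep = wave_runup(0.10, 0.5, 20.0)
```

Both terms of Eq. 8 grow with the foreshore slope m, so run-up increases monotonically with slope for fixed wave conditions.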
The shear stress induced by waves ( τ o ) is computed as $$\tau_{\text{o}} = \frac{1}{2}\rho f_{\text{w}} U_{\text{bed}}^{2} ,$$ (11) where the wave-induced bottom velocity ( U bed ) at a given water depth ( h ) is $$U_{\text{bed}} = 0.5H\sqrt {\frac{g}{h}} ,$$ (12) and the wave-induced friction coefficient ( f w ) is determined by the kinematic viscosity of seawater ( v ; 1.17e −6 m 2 s −1 ) and wave frequency ( σ ): $$f_{\text{w}} = 0.0521\left( {\frac{{\sigma U_{\text{bed}}^{2} }}{v}} \right)^{ - 0.187} .$$ (13) Hourly erosion rate ( E m ; cm h −1 ) was calculated for the vegetation distribution for a given scenario, as well as for a baseline of no vegetation present for each model run. Hourly erosion estimates were then converted to the amount of erosion ( R ; m 2 ) estimated to occur within 1 day over a specified distance ( L ; m). The no-vegetation baseline allowed for the estimation of the amount of erosion prevented (RA) by the presence of wetland habitat: $${\text{RA}} = R_{{{\text{No}}\;{\text{veg}}}} - R_{\text{Veg}} .$$ (14) Erosion prevention modeling was conducted for all scenarios; statistical significance between scenarios was tested using a two-way ANOVA. For comparisons within vegetation type and berm state, two-way paired t tests were used. Results Vegetation characteristics Field-based assessments of MINWR vegetation captured a broad range of species and morphologies within each vegetation class (Table 2 in Appendix—Electronic Supplementary Material). Salt marsh vegetation in MINWR occurred at an average density of 2,215 ± 256 stems m −2 (±s.e.m.) with an average canopy height of 0.43 ± 0.02 m and an average stem diameter of 4.3 ± 0.2 mm. The mangroves within MINWR ranged in height from 1.0 to 7.0 m with an average height of 2.4 ± 0.3 m. Mangrove densities occurred at an average of 8 ± 1.5 stems m −2 , but ranged from 2 to 37 stems m −2 depending on tree architecture. Average trunk diameter was found to be 2.9 ± 0.2 cm. 
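Equations 9–13 form a short computational chain from wave height to an hourly scour rate. A minimal sketch using the default constants quoted in the text; the wave height, depth, and frequency in the example calls are assumed values, not model inputs from the study.

```python
import math

G, RHO = 9.81, 1025.0   # gravity (m s^-2), seawater density (kg m^-3)
NU = 1.17e-6            # kinematic viscosity of seawater (m^2 s^-1)
E1, E2 = 5.42e-6, 2.28  # coefficients for tau_e (Eq. 10)
ME = 0.0001             # erosion constant (m s^-1)
CM = 70.0               # dry density of the substrate (kg m^-3)

def scour_rate(H, h, sigma):
    """Hourly scour rate E_m in cm h^-1, Eqs. 9-13.

    H     : local wave height (m)
    h     : local water depth (m)
    sigma : wave frequency (rad s^-1)
    """
    u_bed = 0.5 * H * math.sqrt(G / h)                 # Eq. 12
    f_w = 0.0521 * (sigma * u_bed**2 / NU) ** -0.187   # Eq. 13
    tau_o = 0.5 * RHO * f_w * u_bed**2                 # Eq. 11
    tau_e = E1 * CM ** E2                              # Eq. 10
    if tau_o - tau_e <= 0:                             # Eq. 9 (no scour)
        return 0.0
    return 36 * (tau_o - tau_e) * ME / CM              # Eq. 9
```

The threshold behavior of Eq. 9 is visible here: small waves generate bed shear stress below the critical value τ e and produce no scour at all, while larger waves erode at a rate proportional to the excess stress.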
Mangrove root structures, i.e., R. mangle prop roots and A. germinans pneumatophores, had an average density of 143.5 ± 21.2 m −2 , and average root height and diameter were 44.0 ± 5.6 and 1.4 ± 0.2 cm, respectively. Coastal protection: wave attenuation and erosion prevention We modeled wave attenuation and erosion for vegetation type and impoundment state scenarios to provide a comparison of coastal protection value. Results presented here reflect a conservative wave height of 0.5 m selected for an enclosed, shallow, and perched lagoon (see Fig. 1 in Appendix—Electronic Supplementary Material for water level data). In addition, time-dependent outputs were calculated for the duration of a single day. The model results are presented for each of the six scenarios (vegetation type + berm state). Wave attenuation was estimated as the reduction in wave height caused by the presence of vegetation compared to a no-vegetation baseline. Salt marsh and mangrove habitats were found to attenuate waves over significantly different distances ( P < 0.001; Fig. 3 ). The best fit for these data was achieved with an exponential decay regression ( \(r_{\text{mangrove}}^{2} = 0.93;\;r_{{{\text{salt}}\;{\text{marsh}}}}^{2} = 0.83\) ). From the exponential decay model, we infer that salt marsh habitats attenuate 90% of wave height over an average onshore distance, or buffering distance, of 17.5 ± 1.6 m (±s.d.). The distance over which mangrove habitats attenuated 90% of waves, the buffering distance, was significantly less at 1.7 ± 0.3 m. The habitat widths resulting in 90% reductions in wave height indicate that mangrove habitat width needs to be an order of magnitude less than that of salt marshes to reduce the same percentage of wave height. Graded impoundment berms attenuated 100% of waves at a buffering distance of 8.0 ± 8.6 m, whereas intact berms provided the same wave attenuation at a significantly shorter distance of 2.8 ± 2.8 m (Wilcoxon signed rank test, P < 0.001). Fig. 
3 Percent wave height attenuation in mangrove (orange) and salt marsh (green) vegetation types over onshore distances. Exponential decay models for each vegetation type are shown with 95% confidence intervals (gray). Data represented are for graded impoundment berm scenarios only. Erosion prevention, or total erosion avoided, was estimated as the total wetland area (m 2) preserved over the time period of 1 day compared to a no-vegetation baseline. Erosion prevention was significantly higher (470%) in modeling scenarios with mangroves than in those with salt marsh habitats (Wilcoxon signed rank test, P < 0.001), with prevented losses of 0.044 ± 0.036 and 0.007 ± 0.007 m 2 of land, respectively, compared to a baseline of no habitat present (Fig. 4). The average total erosion was significantly higher in salt marshes (0.07 ± 0.05 m 2) than in mangroves (0.04 ± 0.03 m 2) (Wilcoxon signed rank test, P < 0.001). Similarly, daily erosion rates (cm day −1) were significantly higher in salt marshes (0.09 ± 0.07 cm day −1) compared to mangroves (0.04 ± 0.03 cm day −1) (Wilcoxon signed rank test, P < 0.001). Differences in the amount of total erosion prevented (m 2) were not significantly impacted by berm state. Daily erosion rates (cm day −1) for intact versus graded berms were found to be 0.06 ± 0.05 and 0.07 ± 0.06 cm day −1, respectively (Wilcoxon signed rank test, P < 0.001). Fig. 4 Total daily erosion avoided for all modeling scenarios. Colors represent vegetation distributions for mangrove (orange), salt marsh (green), and mixed (gray) habitats. Error bars represent standard deviation. Different letters above bars represent significantly different erosion (Kruskal–Wallis one-way ANOVA). Sensitivity analysis of model parameters We conducted a sensitivity analysis to better understand how coastal protection may vary under a range of storm conditions.
To do so, we varied the wave height model parameter by ±50% to determine how this affects model outputs of wave attenuation and erosion. Increasing wave height by 50% caused an average increase of 198% in the distance needed to attenuate 90% of wave height. The amount of total erosion avoided decreased by 77% and the daily erosion rate increased by 40% when the wave height parameter was increased. When the wave height parameter was decreased by 50%, the distance needed to attenuate 90% of waves decreased by 73%, total avoided erosion decreased by 27%, and daily erosion rates also decreased by 10%. Overall, wave attenuation distances were more sensitive to variations in wave height than total erosion or erosion rate. Discussion Habitat conversion alters wave attenuation and erosion Overall, in support of our first hypothesis, mangroves provided more wave attenuation and erosion prevention than salt marshes in our modeling scenarios. Mangroves were found to attenuate waves over significantly shorter distances than salt marsh. This was the case for all the wave heights used in the sensitivity analysis. Because topography and wave conditions were held constant across the scenarios, differences in wave attenuation can be largely attributed to vegetation characteristics such as height, stem density, and stem diameter. Wave dissipation due to vegetation was amplified in mangrove habitats, which were characterized by greater canopy heights and stem diameters, but lower stem density, than salt marsh. Root structures also contributed to greater wave dissipation in mangroves. In addition to differences in vegetation characteristics, coefficients of drag for each vegetation type likely influenced wave attenuation (Pinsky et al., 2013; Guannel et al., 2015). Ultimately, we found that wave attenuation did not vary linearly with onshore distance for each habitat type (Fig. 3), which supports the findings of Koch et al.
(2009) and indicates that after a certain distance no additional wave attenuation is provided by an increase in habitat size. Specifically, mangrove habitat reduces 90% of wave height over an average distance of 1.7 ± 0.3 m, a distance 10 times smaller than that needed by salt marsh habitat (17.5 ± 1.6 m) to reduce the same amount of wave height. Our buffering distances are substantially smaller than the “green belts” reported at ~100 m for other mangroves (Danielsen et al., 2005; Alongi, 2008; Spalding et al., 2014a) and ~300 m for salt marshes (Möller & Spencer, 2002), which is likely due to the relatively small wave heights that were modeled for our protected, enclosed study site. Larger waves modeled in the sensitivity analysis indicate that buffering distance could increase by 198% with a 50% increase in wave height. However, the relatively small buffering distances determined in our analysis support the paradigms that even small wetlands afford wave protection and that the presence of any vegetation is better than none (Gedan et al., 2011). Mangroves prevented 470% more erosion than salt marsh habitats in our model. Belowground characteristics of wetland vegetation likely play a role in reducing erosion in mangrove habitats. For example, shear strength of wetland soils increases with belowground root biomass (Gedan et al., 2011), which was found to be higher for mangroves in MINWR in a previous study (Doughty et al., 2016). Evidence from the salt marsh–mangrove ecotone of Texas also suggests that rooting volume and structure play a significant role in sediment accretion, which increases soil strength and may prevent erosion (Comeaux et al., 2012). Furthermore, long-term rates of erosion in salt marshes are driven not by extreme episodic events, but rather by variations in mean wave conditions (Leonardi et al., 2015).
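The 90% buffering distances discussed above follow directly from an exponential-decay description of wave height, H(x) = H0·exp(−kx): the distance at which a fraction p of the wave height has been lost is −ln(1 − p)/k. The decay constants below are assumed values chosen only so the outputs land near the reported 1.7 m and 17.5 m; they are not fitted parameters from the paper.

```python
import math

def buffering_distance(decay_k, fraction=0.90):
    """Onshore distance at which `fraction` of wave height is attenuated,
    assuming exponential decay H(x) = H0 * exp(-decay_k * x)."""
    return -math.log(1.0 - fraction) / decay_k

# Illustrative decay constants (assumed, not fitted values from the paper):
k_mangrove, k_marsh = 1.35, 0.13
d_mangrove = buffering_distance(k_mangrove)  # ~1.7 m
d_marsh = buffering_distance(k_marsh)        # ~17.7 m
```

One consequence of the exponential form is that the order-of-magnitude gap in buffering distance reflects an order-of-magnitude gap in the per-metre decay constant, not a difference in the decay law itself.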
The conservative wave height used in this study to represent lagoonal storm conditions provides additional insights into how coastal protection services may change with gradually rising water levels. The coastal protection services provided by wetland vegetation, which ameliorate wave attack, erosion, and storm surge, are highly context dependent (Gedan et al., 2011 ). A global meta-analysis conducted by Gedan et al. ( 2011 ) detected no significant differences in wave attenuation between mangrove and salt marsh habitats. However, our findings suggest that ecologically significant differences in coastal protection services can be found when examining these two vegetation types in comparable coastal settings. Stem density, as well as the species-specific presence of pneumatophores, varies greatly at local scales and has been shown to impact wave attenuation in mangroves (Mazda et al., 1997 , 2006 ; Gedan et al., 2011 ; Horstman et al., 2014 ). In salt marshes, spatial variability in wave energy dissipation is also dependent on species-specific morphological and mechanical characteristics (Shepard et al., 2011 ; Tempest et al., 2015 ). Thus, it is important to note that wave attenuation within vegetation types at our site is likely to be more variable than the modeling results provided here for MINWR-averaged vegetation characteristics. As a result, there is a need to validate this modeling effort with additional quantitative field assessments of wave attenuation. Impoundment removal has limited impacts on coastal protection Impoundment berm state impacted coastal protection services, but these effects were dampened when vegetation type was considered. When testing the effects of berm state alone, our findings suggest that intact berms provided slightly more coastal protection as evidenced by shorter buffering distances, although erosion rates were not significantly lower than graded berm scenarios. 
We hypothesized that although berms would stop waves at small distances, the energy of these waves would be higher and would result in increased erosion, similar to how seawalls lead to increased erosion (Cooper & Pethick, 2005 ). Among berm state scenarios, higher total erosion was found for graded berm scenarios due to the longer distances of onshore wave transmission over which erosion was summed. When combined with the effects of vegetation, however, impoundment berm state had little overall impact on coastal protection. This suggests that the grading of berms associated with mosquito impoundment restoration efforts, which improve overall ecosystem health and function, will not negatively impact the coastal protection services provided by wetlands. Insights to coastal protection value From our comparative modeling effort, we determined how the coastal protection services provided by each vegetation type and berm state differ. To provide insights into the potential economic value of these ecosystem services, we can estimate the value of the land area that was saved from erosion over a given transect during the course of a year using the property value of the MINWR/KSC. We used an average of 6 days of major storm events per year for MINWR based on NOAA historical hurricane data from 2000 to 2012. The total property value of MINWR and the underlying KSC was estimated at 11 billion USD based on the cost of NASA infrastructure and facilities (Hall et al., 2014 ). For the purpose of this exercise, we assumed a uniform land value of approximately 20 USD m −2 , which was determined by dividing the total property value by the total land area of MINWR. Despite oversimplifying the value of several land use classes for this unique study area, we consider our value estimate conservative because it reflects only economic assets with a defined monetary value. 
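The valuation described here reduces to a few multiplications. The sketch below reproduces that arithmetic chain with the figures quoted in the text; the erosion value plugged in is the mangrove-scenario mean from the Results, and treating it as a per-transect quantity is our simplifying assumption. Note this sketch does not reproduce the reported 10.6 USD m−2 year−1, whose per-area normalization is not fully specified here.

```python
# Back-of-envelope valuation following the text's simplifying assumptions.
total_value_usd = 11e9               # MINWR/KSC property value (Hall et al., 2014)
land_value_per_m2 = 20.0             # uniform land value (total value / total area)
storm_days_per_year = 6              # NOAA historical average, 2000-2012
erosion_avoided_m2_per_day = 0.044   # mangrove-scenario mean from the Results

annual_value_usd = (erosion_avoided_m2_per_day
                    * storm_days_per_year
                    * land_value_per_m2)
print(round(annual_value_usd, 2))  # → 5.28
```

Swapping in the salt-marsh mean (0.007 m2 per day) in place of the mangrove value yields the proportionally smaller annual figure, which is the comparison the next paragraph reports.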
When comparing vegetation types alone, mangroves could provide an average of 10.6 ± 8.6 USD m −2 year −1 in coastal protection value compared to 1.6 ± 1.7 USD m −2 year −1 in salt marshes (Wilcoxon signed rank test, P < 0.001). Berm state did not significantly influence coastal protection value. Comparisons across modeling scenarios (vegetation type + berm state) reveal that mangrove habitats with graded berms provide significantly higher coastal protection value (11.0 ± 8.7 USD m −2 year −1 ), followed by mangrove habitats with intact berms (10.1 ± 8.8 USD m −2 year −1 ), salt marsh habitats with graded berms (1.7 ± 1.8 USD m −2 year −1 ), and lastly salt marsh habitats with intact berms (1.6 ± 1.7 USD m −2 year −1 ) (ANOVA, square root-transformed data, P < 0.001). Because of the simplified approach in estimating these values, we emphasize that coastal protection value is similar to wave attenuation and erosion prevention in that it is likely context dependent (Gedan et al., 2011 ) and non-linear (Koch et al., 2009 ). Across all modeling scenarios, the overall coastal protection value for 1 year is 562% higher when mangroves are the dominant vegetation type as compared to salt marsh. In recent decades, coastal wetlands have demonstrated great economic importance in protection against severe storm events (Woodward & Wui, 2001 ; Scavia et al., 2002 ; Costanza et al., 2008 ). In the US alone, coastal protection services provide an estimated $23.2 billion per year (Costanza et al., 2008 ). Overall, coastal wetlands have a significant impact on reducing economic losses and deaths associated with major storm events (Badola & Hussain, 2005 ; Barbier, 2007 ; Costanza et al., 2008 ; Das & Vincent, 2009 ). While our findings are specific to MINWR/KSC, they provide insights for other salt marsh–mangrove ecotones across the globe. 
In all, coastal systems are being increasingly recognized for the services they provide, but ecosystem service valuation methods need to be standardized and currently cannot capture the total value of ecosystem services in terms of economic, intellectual, social, and natural capital (Barbier & Heal, 2006 ). Coastal protection in a changing world Climate change and anthropogenic impacts render the fate of coastal wetlands, and the coastal protection services they afford, uncertain (Barbier et al., 2011 ; Spalding et al., 2014b ). Continued mangrove encroachment may alter the ability of ecotonal wetlands to endure the increased inundation and storm events predicted with climate change. Continued sea level rise in this region and around the world is extremely likely, whereas the frequency and intensity of coastal storm events are still debated (Bender et al., 2010 ; Rosenzweig et al., 2014 ; Wong et al., 2014 ). However, any increases in storm frequency will directly result in the increased likelihood of severe flooding and erosion events (Duarte et al., 2013 ). Increases in wave height arising from either sea level rise or storm events will likely lead to higher erosion rates and ultimately lower the total amount of erosion protection provided by wetlands as evidenced by our sensitivity analysis. This will necessitate additional buffering distances in both habitat types. Ongoing climate-driven mangrove expansion may also influence sediment accretion and elevation maintenance, with implications for coastal protection, and these biogeomorphic changes will vary with coastal setting and rates of sea level rise (Krauss et al., 2011 ; Comeaux et al., 2012 ; Rogers et al., 2014 ; Woodroffe et al., 2016 ). 
In addition to large-scale habitat conversion, chronic warming may also differentially impact growth and carbon allocation in salt marshes and mangroves (Coldren et al., 2016 ), leading to further divergence in plant structures, which may also impact coastal protection services. Conclusions In this study, we provide some important insights into the potentially positive impacts of mangrove expansion to coastal protection value. However, it is important to note that the full range of ecological consequences of mangrove range expansion has yet to be explored. It is well documented that both marsh and mangrove coastal ecosystems provide many ecosystem services (Barbier et al., 2011 ), and the conversion of these habitats may cause unforeseen trade-offs in the benefits that we derive from our coasts. Therefore, these caveats should be considered when discussing the implications that our results pose for coastal management. Furthermore, our investigation provides insights into only a few of the anthropogenic and climate-related factors facing coastal wetland ecosystems. However, climate change-induced habitat conversion and shifting management regimes are pressures facing coastal systems across the world. Luckily, coastal wetlands are dynamic and adaptable (Duarte et al., 2013 ), an advantage for maintaining coastal protection and preserving ecosystem function under changing climate regimes. Mangrove encroachment into marsh ecosystems around the world may signify the dynamism of coastal wetlands and could, according to our study, improve coastal protection in some regions. Despite future uncertainty in climate change impacts, integrating ecosystem-based approaches and ecological engineering may offer a way to mitigate and adapt to the effects of rising seas and intensifying storms (Jones et al., 2012 ; Cheong et al., 2013 ; Sutton-Grier et al., 2015 ).
The threat to coastal regions posed by climate change, overdevelopment and other human-caused stressors is well-established. Shorelines, among the most prized and valuable land throughout the world, are everywhere imperiled by sea level rise, beach erosion and flooding. But a recently published NASA-funded research study in which Villanova University Biology Professor Samantha Chapman played a key role has discovered a natural phenomenon that could offer an economic and ecological solution to coastal wetland protection—the spread of mangrove trees. Mangroves are tropical trees that grow in coastal intertidal zones, notable for their dense tangles of prop roots which serve as highly effective shields for coastlines by reducing the force of breaking waves, decreasing erosion and increasing sediment deposition. These trees are rapidly moving northward in Florida due to the lack of hard freezes. Once there, they change habitats previously dominated by salt marshes into mangrove swamps. The new study, titled "Impacts of mangrove encroachment and mosquito impoundment on coastal protection services," compares the coastal protection value of salt marshes with mangroves along Florida's east central coast and the overlying area of the Merritt Island National Wildlife Refuge (MINWR), in which NASA's Kennedy Space Center (KSC) is located. The study, published in Hydrobiologia, turned up some remarkable results in comparing the protective value of salt marshes to mangroves. Mangrove expansion was the clear winner, providing superior coastal protection over salt marshes. Mangrove habitats provide a staggering 800 per cent more coastal protection than salt marshes. In all, mangrove habitats could provide $4.9 million more in coastal protection than manmade barriers. In the U.S.
alone, the study points out, wetland coastal protection services provide an estimated $23.2 billion per year of protection against economic losses as well as deaths associated with major storm events. Although the impact of impending climate change is uncertain, the study concludes that "Integrating ecosystem-based approaches and ecological engineering may offer a way to mitigate and adapt to the effects of rising seas and intensifying storms."
10.1007/s10750-017-3225-0
Medicine
Human behaviour follows probabilistic inference patterns
Philipp Schustek et al, Human confidence judgments reflect reliability-based hierarchical integration of contextual information, Nature Communications (2019). DOI: 10.1038/s41467-019-13472-z Journal information: Nature Communications
http://dx.doi.org/10.1038/s41467-019-13472-z
https://medicalxpress.com/news/2019-12-human-behaviour-probabilistic-inference-patterns.html
Abstract Our immediate observations must be supplemented with contextual information to resolve ambiguities. However, the context is often ambiguous too, and thus it should be inferred itself to guide behavior. Here, we introduce a novel hierarchical task (airplane task) in which participants should infer a higher-level, contextual variable to inform probabilistic inference about a hidden dependent variable at a lower level. By controlling the reliability of past sensory evidence through varying the sample size of the observations, we find that humans estimate the reliability of the context and combine it with current sensory uncertainty to inform their confidence reports. Behavior closely follows inference by probabilistic message passing between latent variables across hierarchical state representations. Commonly reported inferential fallacies, such as sample size insensitivity, are not present, and neither did participants appear to rely on simple heuristics. Our results reveal uncertainty-sensitive integration of information at different hierarchical levels and temporal scales. Introduction As sensory evidence is inherently ambiguous, it needs to be integrated with contextual information to minimize the uncertainty of our perception of the world and thus allow for successful behavior. Suppose that we observe just a few passengers exiting an airplane at an airport whose city hosts a soccer final. If we find that four of them are supporters of the red team and two support the blue team, we might conclude that there were more supporters of the red team in the airplane. This inference, based on incomplete sensory evidence, can be improved by contextual information. For instance, there might be many more blue than red supporters in the world. Then, despite our initial observation, we might want to revise our inference and rather conclude, based on the context, that the airplane carried more blue than red supporters. 
While in the previous example context was certain and by itself able to resolve observational ambiguity, contextual information is very often ambiguous. For instance, we might just know that there is an event in the city that attracts more of a certain type of people, but we do not know which type. Extending our example, we would first need to infer the context (whether the event attracts more people of the red or blue type) by observing samples of passengers leaving several airplanes. By using the inferred context, we can better estimate whether the next plane carries more of one type of people based only on a small sample of its passengers. Thus, in real life, both observations and context commonly provide incomplete information about a behaviorally relevant latent variable. In these cases, inference should be based on probabilistic representations of both observational and contextual information 1, 2, 3, 4, 5, 6. Indeed, recent work has shown that humans can track a contextual binary variable embedded in noise that partially informs about what specific actions need to be performed to obtain reward 7. Additionally, humans can infer the transition probability between two stimuli where the transition probability itself undergoes unexpected changes, defining a partially observable context 8. These results and other studies suggest that a refined form of uncertainty representation is held at several hierarchical levels by the brain 9, 10, 11, 12, 13, 14. However, in this previous research, the reliability of the context has rarely been manipulated directly and independently 15 from the reliability of the current observation 1, 7, 8. Therefore, it is unclear to what degree contextual inference reflects its uncertainty and interacts with the inferred reliability of the current observation, as would be expected from representing observations and context with a joint probability distribution.
While some effects predicted by hierarchical probabilistic inference have been previously reported in isolation, no study has—to our knowledge—thoroughly assessed a body of behavioral predictions of hierarchical probabilistic inference and tested them against alternative heuristic models. To address the above question, we developed a reliability-based hierarchical integration task that allows us to directly control reliability in order to evidence characteristic patterns of probabilistic inference. Our task was intuitively framed to our participants using the analogy of flight arrivals to an airport whose city hosts an event, rather than relying on an abstract or mathematical description of the dependencies between the latent variables. The goal was to decide whether the flight just landed carried more passengers of the red or blue type based on the observation of only a small sample of passengers leaving the airplane, and to report the confidence in that decision. However, as the event is known to tend to attract more of either of the two types of passengers, knowledge of this context, if inferred correctly, would be useful to solve the task. The crucial ingredient of our task is that inference of the context is based on the observation of small samples of passengers exiting previously arrived planes, making the context partially, but not fully, observable. By manipulating both the tendency and the sample size, we can control the reliability of previous observations upon which inference about the context should be based. Overall, this task structure creates hierarchical dependencies among latent variables that should be resolved by bottom-up (inferring the context from previous observations) and top-down message passing (inferring the current state by combining current observations with the inferred context) 6.
We find that participants can track and use the inferred reliability of previous observations, suggesting that they build a probabilistic representation of the context. The inferred context is integrated with the current observations to guide decisions and confidence judgments about the value of a latent variable at a lower hierarchical level. Decision confidence is found to closely correspond to the actual accuracy of making correct decisions. As a clear signature of probabilistic inference over the context, we find that our participants use the sample size of previous observations to infer the reliability of the context. This in turn has a strong effect on decision confidence about a lower-level variable that depends on the context. The observed behavior of our participants avoids previously reported biases in judgments and decision making 16 , such as sample size insensitivity 17 , 18 , 19 , and also resists explanations based on simpler heuristics 20 , 21 . Overall, all the reported effects in both tasks are consistent, quantitatively and qualitatively, with the optimal inference model. Thus, our results support the view that humans may form mental representations akin to hierarchical graphs 22 that support reliability-based inference to guide confidence estimates of our decisions.

Results

The airplane task probes inference of latent variable

We designed two experiments to test whether humans can use the reliability of contextual information to guide decisions and confidence judgments about a latent variable at a lower hierarchical level. While instructions to participants in some previous studies were quite abstract and often appealed to mathematical terms 21 , here we attempted to facilitate understanding of the complex relationships between the task variables by instructing participants in intuitive and naturalistic terms.
Thus, we described the task to our participants using the analogy of airplanes, with unknown passenger proportions to be estimated, arriving at an airport. In the first experiment (Experiment 1), the context is neutral and stable across all the trials of the session, while in the second experiment (Experiment 2) the context varies across blocks of a few trials but remains constant within each block. We instructed our participants that the context consists of a tendency of the encountered airplanes to carry more passengers of either of the two types. Formally, Experiment 1 corresponds to the classical urn problem with unknown fractions of red and blue balls, and Experiment 2 corresponds to a hierarchical extension in which the urns are themselves correlated and partially observable (see Methods). As no feedback was given that instructed our participants how they ought to make their confidence reports, the experiments probe their internal capacity to estimate uncertainties.

The effects of sample size on confidence reports

In Experiment 1, participants were told that the airplanes arriving at an airport carry both blue- and red-type passengers, in an unknown proportion, and that these proportions would be uncorrelated from one plane to the next. Thus, in this case, no context was assumed that would make our participants believe that the passenger proportions across consecutive planes were interdependent. After observing a small sample of passengers randomly exiting the plane, displayed as red and blue filled circles on the screen (Fig. 1a , first frame), participants were asked to report both whether the airplane carried more blue or red passengers, i.e., its passenger majority, and their confidence in this decision by moving a line along a horizontal bar (second frame).
Importantly, there was no direct feedback about normative confidence reports: participants received a binary feedback after each response, i.e., whether they correctly identified the latent passenger majority. In addition, indirect feedback was provided at regular pauses every five trials through an aggregated performance score based on the ideal observer, which was solely intended to keep our participants engaged in the task. While such feedback could in principle be marginally used to adapt one's responses, participants did not seem to modify their responses accordingly: first, the feedback was hardly indicative of the optimal policy (see Methods); second, participants performed the task well from trial one and did not improve over time (see Supplementary Fig. 6a ).

Fig. 1 Posterior-based confidence features sample size effects.
a Task: The colored dots (sample) represent two kinds of passengers (blue and red) that disembarked from a very large airplane. The participants are subsequently asked to report the confidence in their decision that the airplane carried more blue or red passengers (blue majority) by horizontally moving the cursor line (orange). In this case, because the sample suggests a blue majority, the response cursor should be on the right.
b Sample size increases posterior-based confidence in a blue majority suggested by the blue majority of the sample. Confidence (right) is computed as expected accuracy from the area under the curve for the inferred proportion (middle) from the observed sample (left). Although the proportion of blue passengers (green line, middle) is the same for all three samples (rows), the inferred distribution depends on sample size. The larger the confidence, the closer the response line on the previous panel should be to the rightmost border.
c Confidence in a blue majority should increase with the proportion (%) of blue samples for all sample sizes, but it does so with a higher slope for larger sample sizes (color coded).
d Consequently, the slope parameter of fitted sigmoidal functions increases with sample size.

An ideal observer (Fig. 1b ) should infer a distribution over an airplane's proportion of blue (or, equivalently, red) passengers based on the observed proportion of blue passengers and the sample size. The proportion of blue samples (passengers), called the “sample proportion”, is computed as N B / N , where N B ( N R ) is the number of observed blue (red) passengers, respectively, and N = N B + N R is the sample size. The inferred distribution over passenger proportions concentrates around the passenger proportions suggested by the sample (Fig. 1b , green vertical line) 17 , and its width decreases as the sample size increases. The decision whether the majority is blue or red is based uniquely on the proportion of blue samples, but the confidence report should be based on both the sample proportion and the sample size. Specifically, in this example, the decision confidence of the ideal observer is the belief that the majority is blue, which equals the area under the distribution summing up the probability of all possible blue passenger proportions that are larger than one half 23 , 24 (Fig. 1c, d ). The result is that confidence in a blue majority increases with sample size because the distribution is more concentrated around the observed proportion of blue passengers. More generally, a central feature of probabilistic inference is sample size dependence, which here magnifies the confidence in the airplane majority that is suggested by the sample proportion. We tested whether human participants ( n = 24) obeyed this critical pattern or whether they neglected sample size 17 , 19 . Confidence in a blue majority was found to increase with the proportion of blue samples. As predicted, this increase was larger for larger sample sizes (Pearson correlation, pooled across participants, ρ = 0.31, p = 4.08 × 10 −6 ) (Fig. 2a, b ).
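The ideal-observer confidence described above can be sketched numerically. This minimal sketch assumes a uniform prior over the airplane's blue-passenger proportion (so the posterior is a Beta distribution) and integrates it on a grid; the function name and grid size are our own illustrative choices, not taken from the paper.

```python
def confidence_blue_majority(n_blue, n_red, grid=100_000):
    """Posterior mass P(blue proportion > 1/2) under a uniform prior.

    The unnormalised posterior over the blue proportion p is
    p**n_blue * (1 - p)**n_red, a Beta(n_blue + 1, n_red + 1) density,
    integrated here with a simple midpoint rule.
    """
    total = above_half = 0.0
    for i in range(grid):
        p = (i + 0.5) / grid                      # midpoint of bin i
        w = p ** n_blue * (1.0 - p) ** n_red
        total += w
        if p > 0.5:
            above_half += w
    return above_half / total

# Same 75% blue sample proportion, increasing sample size:
for n in (4, 8, 16):
    print(n, confidence_blue_majority(3 * n // 4, n // 4))
```

Running this shows the sample-size effect directly: at a fixed 75% blue sample proportion, confidence in a blue majority grows as the sample size increases from 4 to 16, because the posterior narrows around the observed proportion.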
These results were found for most of our participants individually (21 out of 24; permutation test, p < 0.05; Supplementary Fig. 2 ). Consistently, confidence judgments were highly predictive of the probability that the chosen majority was correct (Pearson correlation, ρ = 0.81, p = 1.27 × 10 −45 , see Supplementary Fig. 1 for details), suggesting that participants performed the task well and gave confidence reports that follow from an internal measure of uncertainty.

Fig. 2 Human confidence estimates vary with sample size as predicted by probabilistic inference.
a Confidence in a blue majority increases with the proportion of blue samples (solid lines), and it does so more steeply the larger the sample size is (color coded). The optimal model is represented in light colors.
b The slope of the confidence curve in a increases with sample size. Participants feature a quantitatively similar increase as the optimal model (solid line). Error bars indicate SEM across participants.

To further confirm that sample size was an important feature of our participants' confidence reports, we performed a model comparison in which we contrasted the optimal inference model with two heuristic models, the ‘ratio’ and the ‘difference’ model. The ratio model assumes that confidence is a function of the sample proportion alone. This could result from a simpler approach in which the population estimate is a point estimate equal to the sample proportion, an approximation that is adequate only in the limit of large samples that are representative of their population 16 , 17 . The difference model estimates confidence based on the difference between the numbers of blue and red samples, N B – N R . Like the ratio heuristic, this statistic is informative about decision correctness; like the ideal observer model, it covaries with sample size, but it does not depend on sample size directly.
It would correspond to the optimal model if the true proportion could take only two possible values (e.g., 60% blue passengers or 40% blue passengers), i.e., if subjects discarded the variability of the true proportion within each of the two categories (see ‘Analytical approximation for Experiment 1’ in Supplementary Methods). To account for possible distortions of the response and/or miscalibration of the heuristic estimators, all model estimates (whether from the optimal or the heuristic models) were passed through a logistic function that mapped the estimate onto the unit interval. The logistic response mapping was fitted for each model and participant individually (Methods). The comparison between the optimal model and the ratio model shows that the latter is clearly rejected because of its incapacity to take sample size into account (Supplementary Fig. 4 ). Even though the confidence estimates of the difference model are sensitive to sample size, they typically do not correspond to the notion of uncertainty that our participants report: the difference model predicts a linear relationship between the sample size and the slope of the confidence curve, while subjects displayed a clearly sublinear relationship (see Supplementary Fig. 3 ). We can thus dissociate the experimental reports from these simple but covariant heuristics and conclude that the response patterns of our participants suggest a probabilistic inference approach. Moreover, as the difference model (corresponding to the optimal response when the variability of the true proportion is discarded) can be ruled out, our results suggest that our participants' inference process incorporated not only uncertainty about the passenger majority on the plane (blue or red) but also about its magnitude (the proportion).
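For illustration, the two heuristics can be written down in a few lines. The `gain` parameter below is a hypothetical stand-in for the per-participant logistic response mapping mentioned above; it is not a value fitted in the study.

```python
import math

def ratio_heuristic(n_blue, n_red):
    # Confidence from the sample proportion alone; blind to sample size.
    return n_blue / (n_blue + n_red)

def difference_heuristic(n_blue, n_red, gain=0.5):
    # Sigmoid of the blue-red count difference: covaries with sample size,
    # but linearly in the difference rather than through the posterior width.
    return 1.0 / (1.0 + math.exp(-gain * (n_blue - n_red)))

# Doubling the sample at a fixed 75% blue proportion leaves the ratio
# heuristic unchanged but raises the difference heuristic:
print(ratio_heuristic(3, 1), ratio_heuristic(6, 2))
print(difference_heuristic(3, 1), difference_heuristic(6, 2))
```

This makes the dissociation concrete: the ratio heuristic cannot produce any sample-size effect, while the difference heuristic produces one, but with a linear dependence on the count difference that the sublinear pattern in the data rules out.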
Reliability-based hierarchical integration of ideal observer

In Experiment 2, participants were told that several airplanes with unknown passenger proportions would arrive at an airport, as before, but that consecutive airplanes would feature correlated passenger proportions because of an event in the city that attracts more travelers of one type. Thus, if the sample of a previous airplane is highly suggestive of a blue airplane proportion, then the participant can infer not only that this previous airplane carries a blue majority, but also that the next airplane is more likely to carry a blue majority, even before observing a sample of passengers leaving it. Importantly, in Experiment 2, there was no feedback about decision correctness of each trial's airplane majority, only an overall score after each block (see Methods and Supplementary Fig. 6b ). Inference of an ideal observer in our task should start with inference of the current context (whether there is a tendency to observe passengers from airplanes with blue or red majorities). Next, this contextual information should be integrated with the current sample to report confidence and decide whether the current airplane is more likely to hold a red or blue majority (Fig. 3 ).

Fig. 3 Schematic of the hierarchical structure for learning empirical priors.
Participants are told that across a block of five trials (1, 2,…T ≤ 5) they will see passengers from five different airplanes arriving at the same airport. As before, they are asked to report their decision confidence about whether the current airplane carried more red or blue passengers. The schematic illustrates the hypothetical example of an ideal observer that estimates confidence based on the proportion of blue samples of the current airplane T and on the samples observed in previous trials. The generative model of the observations is as follows.
a Within a block of five trials, the context, called the block tendency, is first selected, which corresponds to choosing either a positively (magenta) or negatively (cyan) skewed distribution over airplane proportions. This context (distribution) is maintained throughout the block of five trials, but on each trial a new blue majority (blue-red horizontal bars indicating the passenger proportion in each airplane) is randomly sampled from that distribution. In the example, the context favors airplanes with red majorities.
b Sample generation given the airplane majority is the same as for the previous task.
c The internal representation of the agent (orange background) mirrors the dependence structure in the environment (green background). Probabilistic inference is performed by message passing between the nodes, which internally represent the inferred block tendency and the airplane's passenger proportion of each trial (see Methods). Previous trials ( t < T ) provide evidence about the block tendency through the messages m t ( b ). They are probabilistically integrated into an overall belief about the block tendency M ( b ), which provides top-down constraints on the inference of a new airplane's blue proportion (orange node). The confidence in a blue majority of the current airplane T held by the ideal observer (response bar, right) should follow from both the current sample proportion and the block tendency inferred from previous samples.

Thus, the generative structure of the observations that were shown to the participants is hierarchical, with a higher-level variable that determines the context for each block of five trials, which either favors red or blue airplane majorities, and which in turn generates airplane majorities at the lower hierarchical level across the sequence of trials in the block (Fig. 3a ). Both hierarchical levels feature hidden variables that are not observable by the participants.
From the generated airplane proportions, samples are drawn, which correspond to the actual observations of the participants (Fig. 3b ). Note that the generative process is purely top-down, from the context (high-level hidden variable) to airplane passenger proportions (low-level hidden variables) and then to the samples (observables). However, inference by the ideal observer should first run bottom-up, from previously observed samples to infer the value of the contextual variable (Fig. 3c ; open nodes), and then top-down, from this inferred context (bottom open node) to the variable representing the passenger proportion of the current airplane (orange node). For the ideal observer, this can be formulated as message passing between the hidden variables (Methods). It is worth emphasizing that the task is about inferring the passenger majority of the current airplane, at the lower hierarchical level, rather than asking for the context. Note also that, in contrast to change-point detection paradigms 25 , subjects were explicitly told that a new context had to be inferred at the beginning of each block. As with Experiment 1, we studied how an ideal observer would behave under specific manipulations of the reliability of the currently observed sample, through its sample size, and of the reliability of the context, as controlled by the sample size of previously observed airplanes. As with the previous experiment, we first point to patterns of behavior that should be indicative of reliability-based probabilistic inference in our hierarchical task. First, we expect that confidence in a blue majority of the current airplane grows with the proportion of blue samples (Fig. 4a ), as in the previous task. In addition, however, we also expect that confidence in a blue majority should be higher in blocks whose actual tendency favors blue airplane majorities, which is indeed the pattern that an ideal observer would show (Fig. 4a ).
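A minimal sketch of this generative process and of the ideal observer's bottom-up and top-down message passing might look as follows. The skewed Beta shapes, the candidate sample sizes, and all function names are illustrative assumptions, not the paper's actual parameters or implementation.

```python
import math
import random

# Hypothetical skewed priors over an airplane's blue proportion q under
# each context; the study's exact Beta shapes are not reproduced here.
CONTEXTS = {"blue": (6.0, 4.0), "red": (4.0, 6.0)}

def generate_block(context, n_trials=5, sample_sizes=(4, 8)):
    """Top-down generation: context -> airplane proportions -> samples."""
    a, b = CONTEXTS[context]
    block = []
    for _ in range(n_trials):
        q = random.betavariate(a, b)            # airplane's blue proportion
        n = random.choice(sample_sizes)         # trial's sample size
        n_blue = sum(random.random() < q for _ in range(n))
        block.append((n_blue, n))
    return block

def _likelihood(n_blue, n, context, grid=2000):
    """P(sample | context): binomial likelihood marginalised over q."""
    a, b = CONTEXTS[context]
    num = den = 0.0
    for i in range(grid):
        q = (i + 0.5) / grid
        prior = q ** (a - 1) * (1 - q) ** (b - 1)   # unnormalised Beta density
        num += prior * math.comb(n, n_blue) * q ** n_blue * (1 - q) ** (n - n_blue)
        den += prior
    return num / den

def belief_blue_context(previous_trials):
    """Bottom-up messages m_t(b), combined into an overall belief M(b)."""
    log_odds = 0.0
    for n_blue, n in previous_trials:
        log_odds += math.log(_likelihood(n_blue, n, "blue")
                             / _likelihood(n_blue, n, "red"))
    return 1.0 / (1.0 + math.exp(-log_odds))    # P(context = blue | history)

def confidence_blue_majority(n_blue, n, previous_trials, grid=2000):
    """Top-down combination: the context belief acts as a prior on q."""
    m = belief_blue_context(previous_trials)
    (ab, bb), (ar, br) = CONTEXTS["blue"], CONTEXTS["red"]
    total = above = 0.0
    for i in range(grid):
        q = (i + 0.5) / grid
        # Mixture prior; the two Beta normalisers coincide here by symmetry.
        prior = (m * q ** (ab - 1) * (1 - q) ** (bb - 1)
                 + (1 - m) * q ** (ar - 1) * (1 - q) ** (br - 1))
        w = prior * q ** n_blue * (1 - q) ** (n - n_blue)
        total += w
        if q > 0.5:
            above += w
    return above / total
```

Under these assumptions, a strongly blue previous sample pushes the context belief above one half, and the same ambiguous current sample then yields a higher confidence in a blue majority than it would after a red-leaning history, mirroring the bottom-up and top-down message passing described above.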
This is because, averaged across trials, the ideal observer can infer what the block tendency is, which on average should be aligned with the true block tendency, resulting in a higher confidence in blue majorities.

Fig. 4 Characteristic behavioral patterns of probabilistic inference in the hierarchical inference task.
a Confidence in a blue majority of the current airplane (current trial) should increase with the proportion of blue samples, as in the previous task, but in addition confidence should be larger in a block that favors blue majorities (cyan) than in a block favoring red majorities (magenta).
b Information about the block tendency should gradually increase the confidence in the corresponding trial majority. Thus, responses can be pooled with respect to the real block tendency. We refer to this as ‘aligned confidence’ and use the same concept for other relative quantities below.
c Confidence in the aligned airplane majority increases with the aligned sample proportion. This modulation is stronger for larger sample sizes (green) compared to smaller ones (orange), while it has no effect for an indifferent sample (50% sample proportion, crossing point between the two lines).
d Likewise, aligned confidence increases with the aligned sample proportion of the preceding trial and is modulated by its respective sample size.
e The influence of all previous trials, determined by the weights of a regression analysis, should be equal on average (e.g., trials 1–2 on trial 3, T3). However, it decreases with the number of previous trials due to normalization.
f Aligned confidence increases across trials within a block because evidence for the block tendency accumulates across trials in the block. All patterns are derived from the ideal observer model (see Methods).

Second, averaged across sample proportions and sample sizes, confidence in a blue (red) majority in the current airplane should increase the higher the inferred tendency of blue (red) passengers is.
Because of the symmetry across these two cases, we defined a (block-)aligned confidence to indicate the confidence in the direction (passenger type) that is aligned with the actual block tendency, and pooled the results across the two cases. For the ideal observer, aligned confidence increases with the aligned inferred tendency (Fig. 4b ). In other words, the inferred context informs inference of the current airplane's proportion to the degree that the context is reliable itself. The sample size of the current observation should play a very important role in modulating decision confidence, as it indicates increased reliability of the sample relative to the prior. Indeed, aligned confidence increases with the aligned sample proportion, and it does so with a higher slope when the sample size is large (Fig. 4c ). Similarly, if the context is inferred probabilistically, the reliability of previous trials should be taken into account. As a consequence, the sample size of the previous observation should modulate aligned confidence (Fig. 4d ). For instance, if the previous sample was large and suggested a red majority, then confidence in a red majority in the current trial should be larger. Another pattern that is expected from the ideal observer is that the weights (see Methods) of all previous trials in a block onto the confidence in the current trial should be constant (Fig. 4e ), because an earlier trial provides, on average across blocks, the same evidence for the context as a recent one. Finally, the more trials have been observed in the block, the better the inference about the current context ought to be. Thus, on average across blocks, aligned confidence should increase with the number of previous observations, which indicates accumulation of evidence for the contextual variable (Fig. 4f ). It is important to emphasize that these patterns correspond to predictions of the ideal observer model.
They will be used as a benchmark for a direct comparison to behavioral data without fitting any parameters. Consequently, we do not expect a perfect match in terms of absolute values, but we would expect similar patterns of variation if participants follow a probabilistic inference strategy.

Human behavior follows patterns of probabilistic inference

We first tested whether human participants can infer and use contextual reliability by studying whether they followed the patterns described above. We found that our participants' confidence in a blue majority increased with the proportion of blue samples, and that confidence in a blue majority was larger when the block favored airplanes with blue majorities as opposed to red majorities (Fig. 5a ). This result indicates that participants not only relied on the current sample to infer the current airplane majority, but that they also inferred the context and used it to modulate their confidence judgments.

Fig. 5 Inferred block tendency affects confidence reports.
a Confidence in a blue majority is higher when the block tendency favors blue majorities (cyan) than when it favors red majorities (magenta). Experimental results (data points) are shown along with optimal behavior (solid lines), indicating an integration of sample information with a learned belief about the block tendency.
b Aligned confidence (black) increases with the optimally inferred belief about the block tendency and is a close correlate of the optimal response (red), suggesting that participants internally track a graded belief based on previously available evidence. Error bars indicate SEM across participants.
Further evidence for this result comes from the observation that aligned confidence increases with the strength of the inferred tendency aligned to the block, as computed by the ideal observer, indicating that the more evidence was collected for a given block's tendency, the larger the modulation of the confidence reported in the current trial. The gradual increase (which was also present at an individual level, Supplementary Fig. 8a ) shows how nuanced the representation of the contextual variable is, as there is no thresholding nor any sign of a categorical representation. This shows that the contextual variable, for which we never explicitly asked, is represented in a graded manner, as would be expected from a probabilistic agent. Our participants not only followed this pattern qualitatively, but they also adhered quite closely to the quantitative, parameter-free predictions made by an ideal observer (Fig. 5b ; Pearson correlation on binned values, pooled across participants, ρ = 0.77, p = 5.13 × 10 −33 ), except that contextual information did not affect confidence as much as predicted when contextual information was high (Fig. 5b , rightmost part), which was also observed on a participant-by-participant basis (one-sided signed rank on fitted slopes, p = 0.004). Thus, even though the inferred tendency is subjective to each participant, the correlation with the inferred tendency of the ideal observer shows that participants must be tracking a similar quantity. Next, we studied how reliability governs hierarchical information integration (see Fig. 4c, d ). Both the current sample and previous samples should be relied upon more strongly when their reliabilities, controlled by sample size, are high. We first confirmed that the slope of the confidence curve increases with the sample size of the current observation (Fig.
6a ; Pearson correlation of slope with sample size, pooled across participants, ρ = 0.49, p = 8.67 × 10 −14 ), indicating that participants used the reliability of the current observation to form confidence estimates, as in the previous task without hierarchical dependencies (see Fig. 2b ).

Fig. 6 Sample size effects reveal reliability-based information integration.
a As in the basic task (Fig. 2b ), the slope (data points) of the confidence curves over the sample proportion increases with sample size and tightly follows the optimal pattern (solid line).
b The modulation of aligned confidence with the aligned sample proportion of the current trial is larger when the sample size is high (green) than when it is low (orange). Significant signed differences of a bin-wise one-sided signed rank test are indicated, *0.01 < p ≤ 0.05, ** p ≤ 0.01.
c The modulation of aligned confidence with the aligned sample proportion of the previous trial is larger when the sample size of the previous trial is high (green) than when it is low (orange), similar to the previous panel. Error bars indicate SEM across participants in a – c .

Beyond the finding above that participants learn the block tendency (Fig. 5a ), they should use it selectively and rely more strongly on the sample compared to prior information when sample evidence is reliable (Fig. 6b , pattern: Fig. 4c ). Indeed, the modulation with the aligned sample proportion is stronger for larger sample sizes and leads to the crossover of the two conditional curves (signed difference of conditional slopes from linear regression, signed rank test across participants, p = 1.44 × 10 −5 ). On average across trials, prior information increases aligned confidence (Fig. 6b ). Relative to this offset, behavior is less strongly driven by smaller samples because they provide less information, so that the agent falls back more closely on the top-down expectations gained from previous trials.
Direct control of reliability through sample size allows us to study whether the inferred reliability of the context interacts with the reliability of the current observation to inform confidence judgments. Using this degree of freedom, we tested whether participants used the reliability of the previously observed sample. We found that, consistent with the pattern predicted by the ideal observer, aligned confidence increased with the aligned sample proportion of the previous sample, and that this increase was larger the larger its corresponding sample size was (Fig. 6c ; signed-rank test for positive difference of linear regression slopes across participants, p = 0.002; see also the dependence on the previous message m t-1 (b), Supplementary Fig. 8b ). A central prediction of the probabilistic model is that all previous trials should have equal influence on behavior on average across blocks (see Fig. 4e, f ). We determined their influence from a regression analysis on the confidence judgments (see Methods) and found a rather balanced influence of all previous trials (Fig. 7a ). Accordingly, no significant trend was found in another linear regression analysis in which the previous trial index is used to predict the average weight of the previous trial on the aligned confidence (regression on the means across participants, separately for current trial positions 3, 4, and 5: p -values 0.41–0.89 for trials with 2–4 previous trials, respectively). Apparently, there are no signatures of temporally selective evidence integration for the contextual variable, such as a confirmatory bias, which is characterized by insufficient belief revision once a belief has been established. If it were present, later trials would be expected to have a lower influence here. Probabilistic inference, on the other hand, never fully collapses onto one specific interpretation and hence never excludes evidence for competing hypotheses.
Similarly, this rather balanced weighting is also inconsistent with some form of leaky prior integration in which evidence presented long ago fades from memory. In agreement with these findings, evidence for the block tendency, and thus also aligned confidence, increases over the trials within a block (Fig. 7b ). A linear regression analysis of aligned confidence as a function of the aligned trial index clearly shows the expected increase (regression on means across participants, p = 8.68 × 10 −9 ). Overall, hierarchical integration offers a parsimonious explanation for context integration, which does not require explicit memorization of previous samples after they are integrated into the context-level variable.

Fig. 7 Behavior reflects hierarchical evidence integration across trials.
a On average across blocks, all previous trials provide the same information about the block tendency irrespective of their temporal distance to the current trial. From top to bottom, trials number 3–5 of each block are predicted from the indicated previous trials (sample proportion). Participants show a balanced weighting despite smaller weights compared to the ideal observer model (red).
b Participants accumulate evidence about the block tendency in a gradual fashion. Aligned confidence increases over trials within a block despite a smaller effect compared to the optimal model (red). Error bars indicate SEM across participants.

Interestingly, the most obvious quantitative departure from the expected patterns was that human participants appeared to rely less on contextual information, as the observed effects of previous trials were smaller than the predictions from the ideal observer. For instance, the effect of previous trials on aligned confidence is weaker (see e.g., Fig. 7a ) but does not depend on how long ago the information was acquired.
Further support for such an insensitivity to prior information is provided by trials in which an ideal observer would, e.g., estimate a red majority despite more blue samples because of a high prior belief in a red tendency. We found that most participants make these evidence-opposing choices (see Methods, one-sided signed rank test with respect to the non-hierarchical ratio model with realistic response noise, p = 0.007; Supplementary Fig. 5b ). There is, however, a tendency to stay on the side of the category boundary that is suggested by the momentary evidence, as participants make significantly fewer opposing choices than the optimal model (one-sided signed rank test, p = 0.008). Finally, we tested whether this relative insensitivity to prior information could be explained by mismatched assumptions about the magnitude of the block tendency, which we modeled with specific skewed distributions of passenger proportions under the red and blue contexts (see Fig. 3a ). In fact, some behavioral biases, such as confidence under- and overestimation 26 , can be partly explained by choosing (structurally) mismatched probability distributions for the task at hand 27 , 28 . To test this possibility, we used a model that allowed for a differently skewed distribution implementing this block tendency (see Methods) and compared it to the ideal observer model. To correct for other distortions, both models used an additional mapping onto the final response. We found that the model with the mismatched block tendency almost perfectly described the patterns of probabilistic inference (Fig. 8 ; exceedance probability p ≈ 1, for patterns see Supplementary Fig. 7 ) and that participants appear to subjectively assume a weaker block tendency, as evidenced by the expectation value of the skewed Beta-distribution used to model a blue block tendency (optimal 0.61, median across participants 0.55, one-sided signed rank test for difference, p = 1.68 × 10 −4 ).
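To see how a weaker assumed tendency attenuates the contextual pull, one can compare the prior belief in a blue majority implied by Beta priors whose means match the optimal (0.61) and median subjective (0.55) values reported above. The parameterisation, the concentration value, and the function name are illustrative assumptions; only the two means come from the text.

```python
def prior_blue_majority(mean, concentration=10.0, grid=20_000):
    """P(q > 1/2) under a Beta prior with the given mean.

    Parameterised as Beta(a, b) with a = mean * concentration and
    b = (1 - mean) * concentration; the concentration is an illustrative
    choice, not a quantity fitted to the data.
    """
    a, b = mean * concentration, (1.0 - mean) * concentration
    total = above = 0.0
    for i in range(grid):
        q = (i + 0.5) / grid
        w = q ** (a - 1.0) * (1.0 - q) ** (b - 1.0)
        total += w
        if q > 0.5:
            above += w
    return above / total

# A subjectively weaker tendency (mean 0.55 instead of 0.61) implies a
# prior belief in a blue majority that sits closer to indifference:
print(prior_blue_majority(0.61), prior_blue_majority(0.55))
```

Under these assumptions, the weaker subjective tendency yields a flatter prior over the airplane majority, which is one way the smaller-than-optimal contextual effects could arise.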
This suggests that the remaining qualitative differences arise from a mismatch between the experimental skewed distributions and those assumed by the participants. Fig. 8 Patterns for fitted block tendency. Behavioral patterns in the hierarchical inference task (Experiment 2) compared to a fitted model assuming a mismatched block tendency and a sigmoidal response mapping accounting for distortions on the response. The fits of this model closely reproduce the patterns produced by participants. Error bars indicate SEM across participants. Model comparison favors probabilistic inference of context The previous analysis has shown that behavior adheres to the main features of probabilistic inference in a reliability-based hierarchical task. We have seen that these patterns were qualitatively reproduced by the optimal model without the fitting of free parameters, and that a simple extension of the ideal observer model largely improved the qualitative fits of the patterns. To go beyond qualitative patterns of behavior and provide a more quantitative account of the results and of the adherence of behavior to reliability-based hierarchical inference, we fitted the ideal observer estimate of the contextual variable to behavior and compared it to simpler heuristic estimates that do not rely on probabilistic inference. These simpler models assumed specific forms for the accumulated contextual information that depart from the optimal computations, as follows (contextual variable M = M(b) in Fig. 3; see Methods). In the 'averaging' model, we assume that the estimate of the contextual variable M equals the average of the sample proportions presented on the previous trials of a block, and thus neglects sample size. In the 'tally' model, we assume that the estimate of the contextual variable M equals the ratio of the total number of blue samples observed so far in all previous trials within a block over the total number of red and blue samples observed within the block so far.
This is similar to pooling the samples of all trials, as if they were drawn from a common population. Thus, as larger samples contribute more points, this model is sensitive to sample size, but in a different way than the ideal observer model. Finally, in the 'difference' model, contextual information is a sigmoidal function of the running average of the differences between the number of blue and red samples in all previous trials. All these models only differ in how they estimate the contextual variable M. To introduce as few constraints as possible on the integration of M with the current sample (N_B/N, N) and to compute the final response, we used a flexible generalization, Eq. (14), of the sigmoidal response mapping Eq. (13), attempting to reduce noise for model comparison. Even though all three heuristic approaches are close correlates of the optimally estimated contextual variable, we found that the three models were inferior to the probabilistic strategy of the ideal observer model (Supplementary Fig. 3). Beyond quantitative comparisons, the heuristic models also failed to qualitatively reproduce the defining features of subject behavior. Specifically, participants' responses were influenced by the proportion of blue passengers in previous trials of the block, and that influence increased with trial position in the block as subjects accumulated evidence about the current context across trials (Supplementary Fig. 10). This feature was also present in the optimal model. By contrast, in all three heuristic models, the influence of the proportion of blue passengers in previous trials remained constant across the block, as in these heuristic models evidence about the current context is averaged rather than accumulated across trials (see Eqs. 9–11; Supplementary Fig. 10c-e). Moreover, the 'averaging' model was, by construction, insensitive to the sample size of previous trials, unlike our participants (Supplementary Fig. 10b).
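The three heuristic context estimates can be sketched as follows. This is a minimal illustration, not the fitted models: the sigmoid slope and the example counts are invented, and the flexible response mapping of Eq. (14) is omitted.

```python
import math

# Previous trials of a block as (blue, red) counts; illustrative data only.
trials = [(6, 1), (2, 3), (8, 3)]

def averaging(trials):
    # Mean of per-trial blue proportions: ignores sample size.
    return sum(b / (b + r) for b, r in trials) / len(trials)

def tally(trials):
    # Pool all samples as if drawn from one common population:
    # larger samples contribute more, unlike 'averaging'.
    blue = sum(b for b, _ in trials)
    total = sum(b + r for b, r in trials)
    return blue / total

def difference(trials, slope=0.5):
    # Sigmoid of the running average of per-trial count differences.
    # 'slope' is an illustrative placeholder, not a fitted parameter.
    d = sum(b - r for b, r in trials) / len(trials)
    return 1.0 / (1.0 + math.exp(-slope * d))

print(averaging(trials), tally(trials), difference(trials))
```

Note that `averaging` returns the same estimate whether a previous trial showed 6 blue of 7 or 60 blue of 70, which is exactly the sample-size blindness that distinguishes it from participants' behavior.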
Discussion One important question is whether humans can hold probabilistic representations of contextual variables and use them to improve inference of lower-level variables by providing suitable constraints on their possible values. Here, we report that humans can perform reliability-based hierarchical inference in a task in which they have to report their decision confidence about the value of a lower-level variable that is constrained by a higher-level, partially observable variable. We controlled evidence by using reliability cues in the form of sample size, giving us enough leverage to test the identified patterns of hierarchical probabilistic inference. The similarity between observed and probabilistic inference patterns of behavior, the strong dependence of confidence on current and previously observed sample sizes, and a model comparison between optimal and heuristic models support the notion that humans can mentally hold and update ubiquitous representations of uncertainty over structured knowledge representations such as graphs 22. A large body of research has addressed the question of whether, and under what conditions, humans can perform probabilistic inference, typically using perceptual tasks 10, 29, 30. More recently, the usage of confidence reports has opened a window to more directly examine how uncertainty is handled in the internal models that humans use while they perform a task 8, 23, 28, 31, 32. However, most of this work has focused on simple inference problems in which the value of a hidden variable has to be estimated based on noisy evidence 24, 33, 34, without any hierarchical structure. In contrast, even visual processing in normal conditions should rely on hierarchical schemes where hidden variables at higher levels constrain the values of partially observed variables at lower levels 35.
Hierarchical representations allow inferential constraints to be learned from experience with related situations by exploiting abstract similarities through contextual variables. Such joint inference over structured probability distributions is a crucial ingredient for theories such as predictive coding 3, 6, 36. However, whether human inferences rely on ubiquitous probabilistic representations across a hierarchy of variables is largely unknown. Addressing this important question requires the ability to independently control the reliability of higher-level and lower-level variables to test, for instance, whether and how behaviorally reported confidence is modulated by them. If reliability cues produce modulations of confidence reports in accordance with theoretically predicted patterns, then such observations would constitute evidence in favor of mental representations similar to probabilistic graphical models. Previous work has studied perception and decision making in hierarchical schemes similar to ours 1, 7, 8, 15, but it has been difficult to independently modulate the reliability at both higher and lower hierarchical levels. For example, when using stimulus duration and stimulus strength as an indirect proxy to control reliability 15, the way these manipulations affect reliability depends on the specifics of the sensory system and sensory noise. In our task, uncertainty emerged not from sensory noise but from a hidden cause for stochastically generated stimuli, and the reliability of both levels could be controlled directly and independently through sample size, thus providing an objective measure of trial-to-trial reliability independent of the sensory system. Our task revealed that humans modulate their confidence not only based on the reliability of the currently observed sample, but also on the inferred reliability of the context, which is itself a function of previous samples.
Specifically, we have found strong dependencies of confidence on the sample size of current and previous observations, and these dependencies adhered to the predicted trends and patterns of hierarchical probabilistic inference. Dependencies on previous observations emerged based only on the distribution of previous stimuli, without any trial-to-trial external feedback that could be used to modulate the priors. In summary, while previous studies had already shown sample-size sensitivity 37 and some form of hierarchical probabilistic reasoning 1, 7, 8, 15 in isolation, here the conjunction of both phenomena and the very detailed correspondence between human and optimal behavior build strong evidence for ubiquitous reliability-based integration of hierarchical information, even without extensive prior training on the task. It is possible that our participants did not truly hold probabilistic uncertainty representations over a mental graphical representation across multiple levels, but rather used very sophisticated heuristics that we were not able to characterize. However, estimating uncertainty about latent variables is a particularly difficult problem for heuristic approaches based solely on point estimates that disregard the distributional format the estimate should take 5, e.g., the fact that several airplane proportions are consistent with a given sample. In our task, for instance, learning calibrated confidence reports would require repeated exposure to the same sample together with supervising feedback about the actual latent variable (airplane majority). Even for very simple problems, the scarcity of such data makes this frequentist approach to uncertainty estimation practically difficult and thus un-ecological. As we did not provide supervising feedback, our participants presumably held accurate internal trial-by-trial representations of uncertainty 38, 39.
Although we cannot completely rule out the use of non-probabilistic or heuristic shortcuts, our participants fulfilled the main patterns of probabilistic inference, and such generalizations are hard to conceive without reliance on an internal generative model of the observations. This is in line with previous studies (e.g., 40, 41) which conclude that human inferences are model-based or use internal simulations 42. One clear limitation of our study is that it shows that humans can use reliability-based hierarchical integration of evidence, but it does not speak to the circumstances under which this occurs. In particular, our results contrast with a vast literature that has reported deviations from the norms of rational inference in human judgments, such as sample-size insensitivity 17, 19, 43. One important methodological difference between this previous work and ours is that behavioral economics has typically dealt with situations that have been conveyed in mathematical terms 21. We speculate that the success of our participants in 'understanding' the hierarchical structure of the task is a result of the way the task was framed and communicated. We put participants in an imaginary yet intuitive setting of arrivals at an airport whose city hosts an event and refrained from using terms such as "urns" or "correlations", which mathematically define our task on an abstract level. Evidently, this approach was successful in at least two respects. First, the task structure was clearly communicated, so that participants made roughly correct assumptions for inference. Second, our participants managed to interrogate cognitive systems that are capable of probabilistic inference 14. Interestingly, a recent proposal has suggested that intuitive tasks that sidestep high demands on working memory and natural language may improve performance 44.
The existence of such framing effects 45 on the algorithmic nature of perceptual inference mechanisms should be tested in a future experiment. A related but slightly different hypothesis is that probabilistic inference is shaped during lifetime experience by repeated exposure to choices between options that require integration between sources of varying reliability. In the lab, such a probabilistic inference process would only be applied if the task bears some similarity to problems already encountered by the subjects in their life. In support of this hypothesis, a recent study did find sample-size sensitivity in how subjects updated product evaluations by learning about previous consumers' ratings 37, which is a highly familiar task routinely performed in everyday life. However, our work has also revealed some differences between optimal and observed behavior. Most strikingly, we have found evidence that top-down information is relied upon less strongly relative to information from the specific instances of the sample 28. Such a tendency to discount prior information is indeed reminiscent of the biases that emerge when the representativeness heuristic is used 16, 17. However, as we have shown with a model that assumes a different block tendency (Fig. 8), all these differences could be attributed to mismatched assumptions about the prior distribution of the context. In general, when comparing behavior against normative approaches, the interpretation of deviations should consider as much as possible the internal assumptions, constraints and motivations that the participant obeys 46, 47. Accounting for such differences might be crucial to interpret, and possibly account for, many cognitive biases 27, 48. Beyond these differences in the central inference stage, there could be alternative sources of distortions in the conversion from the estimate into a motor report.
Taking such distortions into account allowed us to capture another part of the departure of our participants' behavior from the optimal observer (Supplementary Fig. 1). By contrast, our participants' behavior was found to be little affected by numerosity or other forms of sensory noise (see 'Sensory noise' in Supplementary Methods). The ease with which our participants seemed to perform probabilistic inference over the mental representation of a graphical model at several levels of a hierarchy should not distract us from the computational difficulty of the inference process. Typically, probabilistic inference even in simpler tasks involves complex operations such as normalization and marginalization 5, 49, 50. Interestingly, inference in our task can be considerably facilitated if the conditional independence properties between variables are exploited. In this case, the distribution factorizes so that only local computations (marginalizations) need to be performed, whose results can be passed on as messages. Hence, the graphical structure of the model facilitates inference, which may even be implemented with recurrent neural populations 51. Apart from the tractability of the computations, we must bear in mind that the goal of the participant is not necessarily pure inference, but the maximization of some subjective cost-benefit measure 52. Further research is needed to test what constitutes the main challenges to probabilistic inference for humans, such as imposing adequate structural constraints that leverage contextual knowledge or the use of tractable approximations due to limited cognitive resources. In sum, we have developed a novel reliability-based hierarchical task with which we found that humans are sensitive to the reliability of both high- and low-level variables.
Our results reveal uncertainty-sensitive integration of information in hierarchical state representations and suggest that humans can hold mental representations similar to probabilistic graphical models, where top-down and bottom-up messages can inform behaviorally relevant variables. Methods Participants All participants were invited to complete three sessions on different days within three consecutive weeks. The sessions were targeted to take about 35 min (Session 1) and 45 min (Sessions 2 and 3). In total, 25 participants (15 female, 10 male) were recruited, mainly among students from Pompeu Fabra University in Barcelona. The study was approved by the Ethics Committee of the Department (CIREP approval #0031). We excluded data from one participant who did not complete the experiment. The median age was 25 (minimum 20, maximum 43). We accepted all healthy adults with normal or corrected-to-normal vision. We obtained written confirmation of informed consent to the conditions and the payment modalities of the task. Irrespective of their performance, participants were paid 5 € for session 1 and 7 € for sessions 2 and 3. Additionally, they had the chance to obtain a bonus payment, which was determined by the mean of their final score after removing the worst trials (2.3%). The score S = 1 − |y − y_opt| of a response y was computed based on the proximity to the optimal confidence report y_opt (see below for details of the optimal model in both experiments). As such, the overall score reflected the ability of the subject to correctly infer the probability that the observed stimuli would be sampled from one category or the other. The payment was determined by comparison to an array of five thresholds that were set according to the {0.5, 0.6, 0.7, 0.8, 0.9} cumulative quantiles of the empirical score distribution across prior participants.
A higher score S corresponds to better performance, so participants were paid an additional bonus of {1, 2, 3, 4, 5} € if their final score was higher than or equal to the respective quantile thresholds. This is a way of rewarding their efforts to optimize their responses. Written task instructions explained that we would score their responses with respect to the chances that their decision would be correct and that bonus payments would be based on that score. Additionally, they were informed that their score was to be compared to the other participants and that the experimenter could monitor their behavior on-line via a second screen from outside. Stimuli & responses The task was presented on an LCD screen with a computer running Matlab Psychtoolbox 3.0.12. Immediately after trial onset, our participants were shown the sample consisting of red and blue solid circles arranged on a two-dimensional grid about the screen center (Fig. 1a). The only feature that distinguished the sampled passengers was the dot color, which we chose to be either blue or red. Because participants were told that the positions of the dots were not informative, the sample is completely summarized by its sufficient statistics. We tried to make the number of dots (the sufficient statistics in our task) easily perceptible while making their locations appear as random as possible. Adequate grid spacing was introduced to prevent the circles from overlapping. Furthermore, we kept red and blue samples separate along the horizontal direction (details in SI). The display is static until the participant makes a response by clicking the USB mouse, which clears the display of the sample. After a short delay of 300 ms, the program shows a centered, horizontally elongated response bar of random horizontal extent with a vertical line marking its center. In addition, the response cursor (Fig. 1a, orange vertical line) is shown at a random, uniformly distributed initial horizontal position along the response bar.
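The scoring and bonus rule described above can be sketched as follows. Only the rule S = 1 − |y − y_opt| and the 1–5 € ladder come from the text; the example threshold values are invented, since the actual quantiles of the prior-participant score distribution are not reported here.

```python
def score(y, y_opt):
    # S = 1 - |y - y_opt|: proximity of the report to the optimal confidence.
    return 1.0 - abs(y - y_opt)

def bonus(final_score, thresholds):
    # thresholds: the {0.5, 0.6, 0.7, 0.8, 0.9} quantiles of the empirical
    # score distribution of prior participants (values below are made up).
    # The bonus is the euro amount paired with the highest threshold met;
    # with sorted thresholds this equals the count of thresholds met.
    return sum(1 for t in thresholds if final_score >= t)

example_thresholds = [0.80, 0.83, 0.86, 0.89, 0.92]  # hypothetical quantiles
print(round(score(0.7, 0.82), 2))        # 0.88
print(bonus(0.88, example_thresholds))   # 3 (euros)
```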
Participants can adjust the horizontal position of the response cursor by moving the mouse horizontally and confirm the input with a click to report their choice about the airplane's passenger majority and their subjective confidence in its correctness. The movement range of the response cursor was bounded to the horizontal extent of the response bar. The raw response is linearly mapped onto the interval [0, 1] and interpreted as the confidence in a blue trial majority y. Consequently, the corresponding quantity for the confidence in a red majority is 1 − y. Experiment 1: Procedure & instructions First, participants read detailed written instructions for the task. We introduced a task metaphor that relates to judging the (hidden) majority of passengers on a flight and used it to explain the mathematical assumptions in more intuitive terms (see Supplementary Methods). Additionally, our participants were given 30 trials to familiarize themselves with the handling of the task. The subsequent experimental session (session 1) consisted of 280 trials with pauses together with feedback after every 5 trials. The sample sizes N were independent and identically distributed (i.i.d.) samples from {3, 5, 7, …, 13}, while the hidden airplanes' passenger proportions μ were i.i.d. samples from a Beta(4,4) distribution. The number of blue passengers of the sample is then determined by a draw from a Binomial distribution N_B ~ Bin(N, μ). After each trial, the participant receives feedback about the correctness of their decision (whether the cursor was placed on the side corresponding to the underlying passenger majority) but no supervising feedback regarding their confidence estimate. In addition, a two-second time-out was presented for incorrect decisions, signaled by a horizontal 'progress bar' that linearly diminishes over time, indicating the fraction of the waiting time left. During the time-out, there is nothing a participant can do to proceed but wait.
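Under the stated distributions, one trial of Experiment 1 can be simulated as follows (a sketch of the generative process, not the presentation code):

```python
import random

def simulate_trial(rng=random):
    # Experiment 1 generative process, as described above.
    N = rng.choice([3, 5, 7, 9, 11, 13])   # sample size, i.i.d. uniform
    mu = rng.betavariate(4, 4)             # hidden blue proportion ~ Beta(4, 4)
    N_B = sum(rng.random() < mu for _ in range(N))  # N_B ~ Bin(N, mu)
    return N, N_B, mu

random.seed(1)
for _ in range(5):
    N, N_B, mu = simulate_trial()
    print(N, N_B)  # blue count never exceeds the sample size
```

Simulating many such trials is also the natural way to sanity-check an ideal-observer implementation against the empirical base rates.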
In principle, the correctness feedback could be used by participants to learn the mapping from stimuli to the probability of selecting the correct category. In practice, however, subject behavior was found to be very stable from the first test trial and throughout the session (Supplementary Fig. 6). Every five trials, a pause screen was shown which provided information about how many of all trials had already been completed. To motivate engagement in the task, we gave motivational feedback as an average ⟨S⟩ of the score S (distance to the optimal observer, see above) over the last 5 trials since the last pause. Such feedback was uninformative as to how subjects should change their behavior to improve their score: because it averaged performance over 5 trials, it was very unlikely they could use it to learn the current mappings and shape future responses (see the stability of participant behavior, Supplementary Fig. 6). Additionally, they also received a time-out of a few seconds proportional to 1 − ⟨S⟩. The overall rationale behind the time-out was to more strongly incentivize task engagement and prevent click-through. Experiment 2: Generative model for the stimuli In Experiment 2, trials of one block are tied together because they depend on a common unobserved variable selecting the context. There were two possible contexts: one biased towards red passengers, the other towards blue passengers. To keep the notation simple below, we use the same variable names for the generative process (Fig. 3a) as for the ideal observer (Fig. 3c), although in general an agent's representation is not necessarily the same as the generative process in the environment. First, and once for every block, the binary variable b governing the prevalence of either red (b = 0) or blue (b = 1) passenger majorities in the airplanes, called the block tendency, is drawn from a Bernoulli distribution b ~ Bernoulli(0.5).
Then for every trial, the unobserved proportion of blue passengers of the airplane μ is drawn from a mixture of two Beta distributions depending on the block tendency b. $$p(\mu\,|\,\nu_1,\nu_2,b) = b \cdot \mathrm{Beta}(\mu\,|\,\nu_1,\nu_2) + (1 - b) \cdot \mathrm{Beta}(\mu\,|\,\nu_2,\nu_1).$$ (1) The Beta distribution is parameterized by two parameters (ν_1 = 14, ν_2 = 9), chosen such that the resulting distribution over the passenger proportion μ is skewed. By convention, Beta(μ|ν_1,ν_2) is negatively skewed (ν_1 ≥ ν_2) and models a blue block tendency. The greater the expectation ν_1/(ν_1 + ν_2) ≈ 0.609, the more extreme this effect, because more airplanes with a majority of blue passengers (μ > 0.5) as opposed to red passengers (μ < 0.5) will be encountered. Once the block tendency b has been selected in a block, sampling of the observed passengers in the following 5 trials within a block proceeded as in Experiment 1. First, the sample size N is determined by an i.i.d. draw from a uniform categorical distribution Cat(N|1/n, …, 1/n) over all n sample sizes N ∈ {3, …, 11}. Then, the number of blue passengers of the sample is determined by a draw from a Binomial distribution N_B ~ Bin(N, μ). Hence, the distribution for each of the 5 trials within a block is $$p(N_B,N,\mu\,|\,\nu_1,\nu_2,b) \propto \mathrm{Bin}(N_B\,|\,N,\mu) \cdot \mathrm{Cat}(N\,|\,1/n,\ldots,1/n) \cdot p(\mu\,|\,\nu_1,\nu_2,b).$$ (2) The geometric placement on the screen is not considered to be part of the generative model as we assume that only the sufficient statistics matter. The expression in Eq.
(2) defines the probability distribution for the sufficient statistics of the observations of trial t, to which we refer more concisely as p(q_t, N_t, μ_t | b, ν_1, ν_2), thus equivalently expressing it in terms of each trial's sample proportion q = N_B/(N_B + N_R) of the number of blue (N_B) and red (N_R) passengers, and the sample size N = N_B + N_R. We drop the conditioning on the parameters of the categorical distribution over sample sizes to keep the notation uncluttered. Using this expression, the entire sampling distribution over all variables of all trials within a block is: $$p(q_1,\ldots,q_5,N_1,\ldots,N_5,\mu_1,\ldots,\mu_5,b\,|\,\nu_1,\nu_2) = p(b)\prod_{t = 1}^{5} p(q_t,N_t,\mu_t\,|\,b,\nu_1,\nu_2).$$ (3) Note that given the block tendency b, the per-trial quantities, such as μ_t, are conditionally independent. Experiment 2: Procedure & Instructions Experiment 2 comprises sessions 2 and 3 and was carried out with the same 25 participants as in Experiment 1 (session 1). Despite the hierarchical extension across blocks of five trials, the handling of the task and the presentation of the sample are virtually the same. The changes to the latent structure should lead to a different interpretation of the information, which we attempted to convey through an extension of the task metaphor (see Supplementary Methods) and written task instructions. As in Experiment 1, and prior to starting session 2, participants completed two very short training sessions. First, they were given 20 trials (4 blocks) with a strong and visually obvious block tendency (sample sizes {8, …, 11}, block tendency Beta(15,7)). Then another 30 trials followed under slightly harder conditions (sample sizes {3, …, 11}, block tendency Beta(15,7)).
Importantly, this training only permitted them to understand the structure of the reasoning task, such as the dependence between the variables, and to familiarize themselves with the task environment under increasingly difficult conditions. We intentionally provided as little information as possible as to how they should respond. The important point was to make clear the structure of the process that generated the samples. Thus, we did not monitor their performance, nor give them any feedback about how specifically they should place the response cursor. They could, however, ask the experimenter to clarify the assumptions behind the task. We proceeded to the actual experimental session when our participants reported that they had 'understood' the task. The above-mentioned procedure was clear enough to achieve that, and yet sparse enough not to reveal the normative response strategy against which we wanted to compare their behavior. After familiarization, our participants completed 270 trials of experimental session 2 with an even more difficult setting of the parameters (sample sizes {3, …, 11}, block tendency Beta(14,9)). In the third session, on a different appointment, participants simply continued the instructed task of session 2 for 300 trials with identical settings, to obtain more data. In Experiment 2, unlike Experiment 1, no feedback or time-out was provided after each trial. However, as in Experiment 1, every five trials, i.e., after each block in Experiment 2, participants were presented with a pause screen with a score based on the results of the last block and a time-out of a few seconds proportional to 1 − ⟨S⟩. As described before, the purpose was mainly to engage participants with the task.
It seems even more unlikely than in Experiment 1 that participants used this extremely sparse and indirect information to somehow guide future responses in Experiment 2, as participants already showed no signs of converging to the normative strategy over time in Experiment 1 (Supplementary Fig. 6b), where the task was less complex than in Experiment 2 with its several hidden variables. Ideal observer for Experiment 1 The ideal observer model is assumed to know the actual generative process of the observations. Based on the observed passengers, it infers the most likely airplane proportion. Due to the choice of a conjugate prior distribution p(μ) for the Binomial probabilistic model N_B ~ Bin(N, μ) above, posterior inference yields a Beta distribution over the latent airplane proportion μ. Specifically, to give calibrated responses, i.e., confidence estimates that correspond to the actual odds of making correct decisions, the prior distribution used for inference must correspond to the actual base rates specified by Beta(μ|4,4). The confidence in, e.g., a blue trial majority c(B) of an ideal observer can be expressed as the belief that choosing a blue majority is correct by integrating over the corresponding subspace 23 of inferred blue majorities. $$c(B) = 1 - c(R) = p(\mu > 0.5\,|\,N_B,N_R) = \int_{0.5}^{1} \mathrm{Beta}(\mu\,|\,N_B + 4,N_R + 4)\,d\mu$$ (4) Heuristic models for Experiment 1 Here we describe two heuristic models that humans could use to estimate the probability of a blue passenger majority on the airplane. 1. Ratio model In the ratio model, the response is simply mapped from the proportion of blue passengers in the sample N_B/(N_B + N_R): $$c(B) = \sigma(2N_B/(N_B + N_R) - 1)$$ where σ is a sigmoid function with possible distortions that provides output in the [0, 1] range (see below). 2.
Difference model In the difference model, the response is mapped from the difference between the number of blue and red passengers in the sample, N_B − N_R. Again, the difference is mapped onto the [0, 1] range using a sigmoid with possible distortions: $$c(B) = \sigma(N_B - N_R)$$ Distorted reports of internal confidence estimates Apart from inference, behavior may be influenced by extraneous factors, e.g., due to motor control constraints. We accounted for these by a nonlinear transformation of the confidence estimate c ∈ [0, 1] onto our model's prediction of the response ŷ. First, we standardize the output c′ = 2(c − 0.5), which then enters the argument of a logistic sigmoid function through the polynomial Z = ω_0 + ω_1 c′ + ω_2 c′³. $$\hat y = \frac{1}{1 + \exp(-Z)}$$ (5) As we assume symmetry, only odd powers of c′ are used. In other words, the distorted confidence estimate ŷ should lead to the same decision confidence regardless of whether the estimated majority is blue or red. This function is flexible and able to approximate a wide range of distorted reports, including the identity mapping and various forms of probability distortion 53, 54. It accounts only jointly for all effects which influence the final judgment. Other systematic deviations during confidence estimation which are conditional on a subset of the input space can only be partially accounted for, e.g., deviations for extreme values of the sample proportion. Ideal observer for Experiment 2 The ideal observer model (see Fig. 3c) we describe here makes use of the generative process described in the main text and Fig. 3a, b. It updates a probability distribution over the observations of all in-block trials and their respective latent variables (μ_1, …, μ_T) up to the current trial T.
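For integer counts and the Beta(4,4) prior, the integral in Eq. (4) has an exact closed form through the standard identity linking the Beta distribution function at 1/2 to a binomial tail, so the Experiment 1 ideal-observer confidence can be computed without numerical integration. A sketch (the distortion stage of Eq. (5) is omitted):

```python
from math import comb

def confidence_blue(N_B, N_R, prior=4):
    # Eq. (4): c(B) = P(mu > 0.5 | N_B, N_R) with mu ~ Beta(N_B+4, N_R+4).
    # For integer a, b: P(mu > 0.5) = P(X >= b) where X ~ Bin(a+b-1, 1/2),
    # with a = N_B + prior, b = N_R + prior (Beta CDF / binomial identity).
    a, b = N_B + prior, N_R + prior
    n = a + b - 1
    return sum(comb(n, j) for j in range(b, n + 1)) / 2 ** n

print(confidence_blue(0, 0))   # 0.5: symmetric evidence
print(confidence_blue(9, 2))   # ≈ 0.95: strong blue evidence
```

By construction the estimate is symmetric, confidence_blue(N_B, N_R) + confidence_blue(N_R, N_B) = 1, matching c(B) = 1 − c(R) in Eq. (4).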
The parameters ( v 1, v 2 ) defining the block tendency are part of the generative structure and assumed to be known. Consequently, inference amounts to an updating of the distribution over the latent variables through a calculation of the posterior distribution conditional on the observations (we identify the distributions by their respective arguments and, e.g., write p ( D | μ ) for the distribution over the sufficient statistics of the sample; we often use the abbreviation D = ( q , N ) for the observations, omitting parameters and index according to in-block trials t ) as $$\begin{array}{*{20}{c}} {p\left( {\mu _1, \ldots ,\mu _T,b{\mathrm{|}}D_1, \ldots ,D_T} \right) \propto p(b)\mathop {\prod }\limits_{t = 1}^T p(D_t|\mu _t)p(\mu _t|b)} \end{array}$$ (6) The current trial is labeled T , and p(b) is the prior probability for block type b ( p(b = 0 ) = p(b = 1 ) = 0.5). Note that the probability distributions related to one block are independent of observations from previous blocks: in contrast to change-detection task paradigms, the model has explicit knowledge of when a new context starts, and does not have to infer it. We checked in a control analysis that responses were only influenced by responses in the same block and were not contaminated by responses in the previous block (Supplementary Fig. 9 ). This showed that subjects indeed incorporated the block structure into their inference process. The same knowledge was incorporated into the heuristic models (see below) as well. We would like to compute the probability of a blue latent trial majority, namely that μ T is larger than 0.5. For this purpose, all variables relating to previous trials which are not of interest must be integrated out. 
$$\begin{array}{*{20}{c}} p\left( {\mu _T \ge 0.5{\mathrm{|}}D_1, \ldots ,D_T} \right) = \frac{1}{\psi }\mathop {\sum }\limits_{b = \{ 0,1\} } \mathop {\smallint} \limits_{0.5}^1 p\left( {D_T{\mathrm{|}}\mu _T} \right)p\left( {\mu _T{\mathrm{|}}b} \right)d\mu _T \\ \qquad\qquad\qquad\qquad\quad \cdot \, p(b)\mathop {\prod }\limits_{t = 1}^{T - 1} \mathop \smallint \limits_0^1 p\left( {D_t{\mathrm{|}}\mu _t} \right)p\left( {\mu _t{\mathrm{|}}b} \right)d\mu _t \end{array}$$ (7) The constant ψ ensures normalization and can be recovered analytically as shown below. Because of conditional independence given the block tendency b , the high-dimensional distribution factorizes so that only one-dimensional integrals over the latent variables of previous trials must be performed. Examining the graph structure (see Fig. 3 ), we see that they may be considered messages m t ( b ) that are passed upwards to update the block-level variable b . $$\begin{array}{*{20}{c}} {m_t\left( b \right) = \frac{1}{{\psi _{m_t}}}\mathop {\smallint }\limits_0^1 p\left( {D_t{\mathrm{|}}\mu _t} \right)p\left( {\mu _t{\mathrm{|}}b} \right)d\mu _t.} \end{array}$$ (8) With proper normalization ψ mt , they are themselves probability distributions that convey bottom-up evidence for the block tendency variable b = {0, 1} based on the observations D t = ( q t , N t ) . These bottom-up messages from previous trials within a block are integrated to update the belief M T ( b ) about the block tendency b prior to trial T through point-wise multiplication and proper renormalization ψ M . $$\begin{array}{*{20}{c}} {M_T\left( b \right) = \frac{1}{{\psi _M}}p(b)\mathop {\prod }\limits_{t = 1}^{T - 1} m_t(b)} \end{array}$$ (9) As more evidence (i.e., more trials) is gathered, more factors can be absorbed into the belief about b without having to store data from all previous trials independently, as it is efficiently encoded in M T ( b ). 
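Because the trial-level priors p(μ|b) are Beta distributions, each message in Eq. (8) is a Beta-Binomial marginal likelihood with a closed form, and the belief update of Eq. (9) is a running product. A stdlib-only sketch, using the paper's block-tendency parameters (14, 9) and also performing the final mixture over b described next:

```python
import math

# Block-tendency parameters from the paper: Beta(mu | 14, 9) for a blue
# block (b = 1) and the mirrored Beta(mu | 9, 14) for a red block (b = 0).
V1, V2 = 14, 9

def log_beta_fn(a, b):
    """log of the Beta function B(a, b)."""
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

def log_marginal(nb, nr, a, b):
    """Eq. (8) up to normalization: log Beta-Binomial marginal likelihood of a
    sample (nb, nr) under Beta(mu | a, b); the binomial coefficient cancels
    when the message is normalized."""
    return log_beta_fn(nb + a, nr + b) - log_beta_fn(a, b)

def message(nb, nr):
    """Normalized bottom-up message m_t(b), Eq. (8)."""
    log_odds = log_marginal(nb, nr, V1, V2) - log_marginal(nb, nr, V2, V1)
    m1 = 1.0 / (1.0 + math.exp(-log_odds))
    return {1: m1, 0: 1.0 - m1}

def block_belief(prev_trials):
    """Eq. (9): combine messages from previous in-block trials with a flat
    prior p(b) = 0.5 by point-wise multiplication (done in log-odds space)."""
    log_odds = 0.0  # flat prior contributes zero log-odds
    for nb, nr in prev_trials:
        m = message(nb, nr)
        log_odds += math.log(m[1]) - math.log(m[0])
    m1 = 1.0 / (1.0 + math.exp(-log_odds))
    return {1: m1, 0: 1.0 - m1}

def beta_tail(a, b, lo=0.5, steps=2000):
    """P(mu > lo) under Beta(a, b), midpoint-rule integration."""
    log_b = log_beta_fn(a, b)
    h = (1.0 - lo) / steps
    total = 0.0
    for i in range(steps):
        x = lo + (i + 0.5) * h
        total += math.exp((a - 1) * math.log(x) + (b - 1) * math.log(1 - x) - log_b)
    return total * h

def trial_confidence(prev_trials, nb, nr):
    """Probability of a blue trial majority on the current trial, mixing the
    per-context posterior tails with weights M(b) * p(D_T | b)."""
    M = block_belief(prev_trials)
    num = den = 0.0
    for b, (a1, a2) in {1: (V1, V2), 0: (V2, V1)}.items():
        w = M[b] * math.exp(log_marginal(nb, nr, a1, a2))
        num += w * beta_tail(nb + a1, nr + a2)  # P(mu > 0.5 | D_T, b)
        den += w
    return num / den
```

A consequence worth noting: with a strong blue block belief, a mildly red current sample still yields confidence above 0.5, which is exactly the evidence-opposing behavior analyzed later in this section.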
Subsequently, this knowledge serves as top-down constraint on future inferences on the trial level. Consequently, to derive the probability of a blue trial majority on the current trial, the integration of momentary evidence (Eq. ( 6 )) can be expressed as $$p\left( {\mu _T \ge 0.5{\mathrm{|}}D_1, \ldots ,D_T} \right) = \frac{1}{\psi }\mathop {\sum }\limits_{b = \{ 0,1\} } M_T(b) \mathop {\smallint } \limits _{0.5}^{1} \, \, p\left( {D_T{\mathrm{|}}\mu _T} \right)p\left( {\mu _T{\mathrm{|}}b} \right)d\mu _T$$ (10) Proper normalization for the constants ψ , ψ M and ψ mt can be obtained analytically (see Supplementary Methods). Heuristic models to estimate the block tendency Here we describe three heuristic models that humans could use to estimate the block tendency. 1. Averaging model The computation of the optimal estimate of a blue block tendency from previous trials, M T in Eq. ( 9 ), requires marginalization over hidden variables and normalization, which could be computationally difficult. Instead, participants could resort to approximations or heuristics. For the first model, the heuristic averaging model, we assume that the estimate of a blue block tendency ( b = 1) is approximated by computing the average of the presented fractions of blue samples \(q_t = N_{Bt}/(N_{Bt} + N_{Rt})\) in the trials t prior to the current trial T ( T ≥ 2). $$\begin{array}{*{20}{c}} {M_T^{avg}\left( {b = 1} \right) = \frac{1}{{T - 1}}\mathop {\sum }\limits_{t = 1}^{T - 1} q_t} \end{array}$$ (11) This estimate neglects sample size and corresponds to the implicit assumption that the inferred airplane’s passenger proportion of each trial is well captured by a point estimate, i.e., by its respective sample proportion 17 . The model gives the same weight to each trial and thus ignores the fact that some trials provide more information than others due to different sample sizes. 
As for the other models below, indifference is assumed on the first trial \(M_{T = 1}^{avg}\left( {b = 1} \right) = 0.5\) . The way the heuristic top-down message \(M_T^{avg}\) is integrated into the confidence estimation process is described below (see Flexible mapping capturing hierarchical integration). 2. Tally model Similarly, this model computes a tally of all blue samples observed prior to the current trial T versus the number of all samples observed in a block so far. $$\begin{array}{*{20}{c}} {M_T^{tly}(b = 1) = \frac{{\mathop {\sum }\nolimits_{t = 1}^{T - 1} N_{Bt}}}{{\mathop {\sum }\nolimits_{t = 1}^{T - 1} (N_{Bt} + N_{Rt})}}} \end{array}$$ (12) This corresponds to pooling the samples of all trials, as if they were drawn from a common population of unknown proportion. 3. Difference model The heuristic difference model considers the difference between the number of blue and red samples \(d_t = N_{Bt} - N_{Rt}\) in every observed trial t within a block as informative to establish a belief about the block tendency. Across trials, it is accumulated by computing ( T ≥ 2): $$\begin{array}{*{20}{c}} {M_T^d(b = 1;\omega ) = \frac{1}{{1 + {\mathrm{exp}}( - \omega \cdot \mathop {\sum }\nolimits_{t = 1}^{T - 1} d_t/(T - 1))}}} \end{array}$$ (13) The logistic sigmoidal function ensures that the result always takes a value between zero and one and that it can be interpreted as a proper belief, as in the previous two approximations. The parameter ω adjusts the sensitivity to the sample-difference statistics d t and can be determined by a fit to behavioral data. Flexible mapping capturing hierarchical integration This is a more flexible extension of the response mapping described before that can be used for the hierarchical learning task (Experiment 2). 
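The three block-tendency heuristics above can be sketched directly; each takes the list of previous in-block samples and returns a belief in a blue tendency. The value ω = 0.3 is an illustrative, unfitted sensitivity parameter.

```python
import math

def m_avg(prev_trials):
    """Averaging model, Eq. (11): mean of the previous sample proportions;
    ignores sample size."""
    if not prev_trials:
        return 0.5  # indifference on the first trial of a block
    qs = [nb / (nb + nr) for nb, nr in prev_trials]
    return sum(qs) / len(qs)

def m_tally(prev_trials):
    """Tally model: pool all previous samples in the block, as if drawn
    from one common population."""
    total = sum(nb + nr for nb, nr in prev_trials)
    if total == 0:
        return 0.5
    return sum(nb for nb, _ in prev_trials) / total

def m_diff(prev_trials, omega=0.3):
    """Difference model, Eq. (13): sigmoid of the mean count difference;
    omega is a sensitivity parameter to be fitted (0.3 is illustrative)."""
    if not prev_trials:
        return 0.5
    d = sum(nb - nr for nb, nr in prev_trials) / len(prev_trials)
    return 1.0 / (1.0 + math.exp(-omega * d))
```

The test values below illustrate the key behavioral difference: for a large blue sample followed by a small red-leaning one, the tally model (which weights by sample size) ends up more blue than the averaging model.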
More concretely, we want to integrate any given prior belief M , not necessarily derived from a probabilistic model, with the momentary sample D = ( q, N ) and map it onto the modeled response \(\left( {q,N,M} \right) \mapsto \hat y\) . As a mere function approximator, it is agnostic to the mechanisms that participants may use to combine information. Correspondingly, its parameters ω must be determined by a fit to the experimental data. Here, this process is approximated by a polynomial function Z of the input ( q , N , M ) that is fed into a logistic sigmoid as in Eq. ( 5 ). $$\begin{array}{*{20}{c}} {Z = \omega _1 + \omega _2q^\prime + \omega _3q^\prime N + \omega _4M^\prime + \omega _5q^{\prime3} + \omega _6q^{\prime3}N + \omega _7NM^\prime + \omega _8M^{\prime3} + \omega _9NM^{\prime3}} \end{array}$$ (14) The argument Z contains only odd powers of q′ and M′ because we assume symmetry and no preference for estimating either red/blue majorities. Correspondingly, both quantities are standardized beforehand by the mapping \(x^\prime = 2(x - 0.5)\) . As they are also independent of one another, no corresponding product terms are included. Preliminary testing revealed that the inclusion of nonlinear terms is important to capture finer-grained patterns of behavior. The sample size N is introduced into some terms to model its magnifying effect for the signed quantities ( q , M ). We performed a weight normalization by the SD of each polynomial (for the input data) which was absorbed into the indicated weights ω . The particular choice of the terms in Eq. ( 14 ) balances flexibility with model complexity (and optimization for scarce behavioral data). We manually tested different parameterizations but did not find crucial differences for other reasonable choices of the mapping. 
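The flexible mapping of Eq. (14) is a short function once written out. Following the text's statement that both signed quantities are standardized, the sketch below standardizes the prior belief M everywhere it appears; the weight vector in the test is purely illustrative (real values come from a fit).

```python
import math

def flexible_map(q, N, M, w):
    """Eq. (14): odd polynomial in the standardized sample proportion q' and
    prior belief M', with sample-size interaction terms, fed through a
    logistic sigmoid. w = (w1, ..., w9) must be fitted to data."""
    qp = 2.0 * (q - 0.5)   # standardized sample proportion q'
    Mp = 2.0 * (M - 0.5)   # standardized prior belief M'
    z = (w[0] + w[1] * qp + w[2] * qp * N + w[3] * Mp
         + w[4] * qp ** 3 + w[5] * qp ** 3 * N
         + w[6] * N * Mp + w[7] * Mp ** 3 + w[8] * N * Mp ** 3)
    return 1.0 / (1.0 + math.exp(-z))
```

With the constant term w1 set to zero, Z is odd in (q', M') jointly, so mirrored blue/red inputs produce complementary responses; the N-bearing terms make larger samples push the response further from 0.5, the magnifying effect described above.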
Response distribution We assume that the probability of obtaining the behavioral confidence report y t on trial t conditional on the data d t and the model parameters is a Gaussian distribution truncated to the interval from zero to one \(N_{[0,1]}\left( {y_t{\mathrm{|}}\hat y_t,\theta } \right)\) . The mean parameter of the normal distribution is set to the model prediction \(\hat y_t\) . The latter is denoted by \(\hat y\) to distinguish it from the response y of the participant, which is formally represented by a draw from the response distribution to account for task-intrinsic behavioral variability beyond the variations captured by the model. The standard deviation (SD) parameter θ of the Gaussian is assumed to be constant and robustly estimated from the data (see Supplementary Methods). When analyzing the patterns of behavior produced by a fitted model (either a heuristic model or the optimal model with distortions), we computed the expected value of the response under the truncated Gaussian noise. Because of such truncated noise, the expected value is more centered than the noiseless model prediction: $$\left\langle {y_t|\hat y_t,\theta } \right\rangle = \hat y_t + \theta \frac{{\phi \left( \alpha \right) - \phi \left( \beta \right)}}{{\Phi \left( \beta \right) - \Phi \left( \alpha \right)}}$$ where \(\phi\) and \(\Phi\) denote the standard normal density and cumulative distribution function, respectively, \(\alpha = - \hat y_t/\theta\) and \(\beta = (1 - \hat y_t)/\theta\) . As our data might be contaminated by other processes such as lapses, we take precautions against far outlying responses. 
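The truncated-normal mean above is the standard one for a Gaussian restricted to [0, 1]; a stdlib sketch using math.erf for the normal CDF:

```python
import math

def norm_pdf(x):
    """Standard normal density."""
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def norm_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def truncated_mean(y_hat, theta):
    """Expected response under N(y_hat, theta^2) truncated to [0, 1];
    truncation pulls the mean toward the interior of the interval."""
    alpha = (0.0 - y_hat) / theta
    beta = (1.0 - y_hat) / theta
    return y_hat + theta * (norm_pdf(alpha) - norm_pdf(beta)) / (norm_cdf(beta) - norm_cdf(alpha))
```

For a central prediction the correction vanishes, while predictions near the boundaries are pulled inward, which is the "more centered" effect noted in the text.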
The response likelihood is calculated for all responses as $$\begin{array}{*{20}{c}} {p\left( {{\mathbf{y}}{\mathrm{|}}{\mathbf{d}}_1, \ldots ,{\mathbf{d}}_T} \right) = \mathop {\prod }\limits_{t = 1}^T \left( {1 - {\it{\epsilon }}} \right)N_{[0,1]}\left( {y_t{\mathrm{|}}\hat y_t,\theta } \right) + {\it{\epsilon }}.} \end{array}$$ (15) Additionally, to prevent isolated points from being assigned virtually zero probability we generally add a small probability of \({\it{\epsilon }} = 1.34 \times 10^{ - 4}\) to all. This corresponds to the probability of a point at four standard deviations from the standard normal distribution. For non-outlying points this alteration is considered negligible. To avoid singularity problems common to fitting mixture models, we constrained the SD parameter θ to be larger than 0.01 during fitting. Inferential patterns for fitted block tendency The probabilistic model assumes that the block tendency from which the trial-by-trial (airplane) proportions μ are drawn is given by one of two skewed Beta-distributions (see Methods). By convention a ‘blue’ ( b = 1) context is characterized by the block tendency Beta \((\mu |\nu _1 = 14,\nu _2 = 9)\) while the ‘red’ context ( b = 0) is correspondingly denoted by Beta \((\mu |\nu _2,\nu _1)\) . The two distributions are symmetric with respect to the block aligned trial majorities, \(\tilde \mu _b = b\cdot \mu + (1 - b)\cdot (1 - \mu )\) , which immediately follows from the property of the Beta distribution: Beta \((\tilde \mu _{b = 1}|\nu _1,\nu _2) =\) Beta \((\tilde \mu _{b = 0}|\nu _2,\nu _1)\) . A variation of the optimal inference routine (Eqs. ( 7 – 9 )) is used that allows for different values of the parameters v 1, v 2 governing the block tendency with the restriction that v 1 ≥ v 2 . In addition, the sigmoidal response mapping (Eq. ( 13 )) is used to allow for nonlinear distortions of the output. 
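Returning to the response distribution, the likelihood of Eq. (15) mixes the truncated-normal density with the small floor ε, so no single outlying response can drive the likelihood to zero. A log-space sketch, including the SD floor mentioned in the text:

```python
import math

EPS = 1.34e-4  # density of a standard normal four SDs from the mean

def trunc_norm_pdf(y, mu, sigma):
    """Density at y of N(mu, sigma^2) truncated to [0, 1]."""
    if y < 0.0 or y > 1.0:
        return 0.0
    z = (y - mu) / sigma
    pdf = math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))
    cdf = lambda x: 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))
    return pdf / (cdf(1.0) - cdf(0.0))  # renormalize mass inside [0, 1]

def log_likelihood(responses, predictions, theta, eps=EPS):
    """Eq. (15) in log space: each trial contributes
    (1 - eps) * truncated-normal density + eps."""
    theta = max(theta, 0.01)  # SD floor used during fitting
    return sum(math.log((1.0 - eps) * trunc_norm_pdf(y, yh, theta) + eps)
               for y, yh in zip(responses, predictions))
```

Because of the ε floor, a response far from the prediction contributes at worst roughly log(ε) rather than negative infinity, which is the robustness property the text is after.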
Estimating model evidence The evidence that each participant’s data lends to each model is derived from predictive performance in terms of the cross-validation log likelihood (CVLL). For training, we maximized the logarithm of the response likelihood (Eq. ( 15 )). To maximize the chances of finding the global maximum even for non-convex problems or shallow gradients, every training run first uses a genetic algorithm and then refines its estimate with gradient based search (MATLAB ga, fmincon). The CVLL for each participant and model is summarized by the median of the logarithm of the response likelihood (Eq. ( 15 )) on the test set across all cross validation (CV) folds (SI). Group level comparison Instead of making the assumption that all participants can be described by the same model, we use a hierarchical Bayesian model selection method (BMS) 55 that assigns probabilities to the models themselves. This way, we assume that different participants may be described by different models. That is a more suitable approach for group heterogeneity and outliers which are certainly present in the data. The algorithm operates on the CVLL for each participant \((p = \{ 1, \ldots ,P\} )\) and each model \((m = \{ 1, \ldots ,M\} )\) under consideration and estimates a Dirichlet distribution \({\mathrm{Dir}}({\boldsymbol{r}}|\alpha _1,...,\alpha _M)\) that acts as a prior for the multinomial model switches u pm . The latter are represented individually for each subject by a draw from a multinomial distribution \(u_{pm} \sim {\mathrm{Mult}}(1,{\boldsymbol{r}})\) whose parameters are \(r_m = \alpha _m/(\alpha _1 + ... + \alpha _M)\) . We use the CVLL and assume an uninformative Dirichlet prior a 0 = 1 on the model probabilities. 
Later, for model comparison, exceedance probabilities, \(p_{exc} = \mathop {\smallint }\nolimits_{0.5}^1 {\mathrm{Beta}}(r_i{\mathrm{|}}\alpha _i,\mathop {\sum }\nolimits_{j \ne i} \alpha _j)\,dr_i\) , are calculated corresponding to the belief that a given model is more likely to have generated the data than any other model under consideration. High exceedance probabilities indicate large differences on the group level. We consider values of p exc ≥ 0.95 significant (marked with *) and values of p exc ≥ 0.99 very significant (marked with **). Regression for sample size dependence Separate regression analyses conditional on sample size N are used to determine the slope of the psychometric curves of the confidence judgments in a blue trial majority over the sample proportion of blue samples q (Figs. 1 , 2 , 6 ). For a given sample size N , we use a logistic sigmoid with a linear weight ω N to relate the standardized sample proportion \(q_N^\prime = 2(q_N - 0.5)\) to the modeled response \(\hat y\) . $$\begin{array}{*{20}{c}} {\hat y = \frac{1}{{1 + {\mathrm{exp}}[ - \omega _N \cdot q\prime _N]}}} \end{array}$$ (16) We note that with this parameterization unbiased judgments are assumed. Conditioning reduces the number of data points available for fitting. To avoid numerical singularities (sigmoid collapses to step function) due to finite data, we use the likelihood function (Eq. ( 15 )) but with the truncated Gaussian replaced by a Gaussian. This choice effectively leads to weighted regression assigning less probability density to responses close to the extremes (e.g., a response of 1 is assigned ½ of the density due to spill-over of the Gaussian into [1, ∞)). In this (heuristic) scheme, outlying responses are given less importance, which translates into higher stability of the weight estimate. 
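For two models the exceedance probability reduces to the stated Beta tail integral; in the general multi-model case it is commonly estimated by Monte Carlo sampling from the fitted Dirichlet. A stdlib sketch (sample count and seed are arbitrary choices):

```python
import random

def exceedance_probs(alpha, n_samples=20000, seed=1):
    """Monte Carlo exceedance probabilities for a fitted Dirichlet(alpha):
    the probability that each model's frequency r_m is the largest.
    A Dirichlet draw is obtained from independent Gamma variates; since only
    the argmax matters, the normalization step can be skipped."""
    rng = random.Random(seed)
    wins = [0] * len(alpha)
    for _ in range(n_samples):
        g = [rng.gammavariate(a, 1.0) for a in alpha]
        wins[g.index(max(g))] += 1
    return [w / n_samples for w in wins]
```

A strongly dominant Dirichlet count yields an exceedance probability near one for that model, while symmetric counts split the probability evenly.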
Regression for previous trial weights To estimate the weight of the sample proportion of previously presented in-block trials on the current confidence estimate, we perform a regression analysis (see Figs. 4e and 7a ). Probabilistic integration of evidence for the block tendency M (Eq. ( 9 )) results in a nonlinear increase of aligned confidence with the number of previously observed trials, which saturates due to normalization. Hence, because the relative contribution of each trial decreases as more trials are observed, we perform the regression analysis separately for different numbers \((2, \ldots ,T - 1)\) of predictors (previous trials). $$\begin{array}{*{20}{c}} {\hat y = \frac{1}{{1 \, + \, {\mathrm{exp}}[ - \mathop {\sum }\nolimits_{t = 1}^{T - 1} \omega _t \cdot q_t^{\prime} ]}}} \end{array}$$ (17) As before, we use a logistic sigmoid relating a linear combination of the standardized sample proportions \(q_t^\prime = 2(q_t - 0.5)\) of each previous trial t to the modeled response \(\hat y\) . Again, this conditioning reduces the number of data points available for fitting (570/5 = 114 trials), from which up to four weights have to be determined. To avoid numerical singularities due to finite data, we use the likelihood function (Eq. ( 15 )) but with the truncated Gaussian replaced by a Gaussian (see above). Evidence-opposing choices due to contradictory prior Evidence-opposing choices are a crucial prediction of the ideal observer model, which occur when the prior belief overrides contradictory evidence from the current sample. If we, e.g., record a response that reports a blue majority while the sample majority is red, we call this an evidence-opposing choice (confidence judgment). This can be attributed to an influence of an opposing prior belief or task-intrinsic response noise (input-independent). To avoid biased estimates because of the latter, the analysis is conditional on trials that on average provide opposing evidence to the sample. 
We only used trials whose aligned sample proportion is smaller than 0.5 as it opposes the tracked prior belief (on average). Crucially, in Experiment 1, we found that noise basically does not lead to evidence opposing choices (see Supplementary Methods). Nevertheless, we make a conservative estimate by comparing behavior to a model whose evidence opposing choices just result from noisy responses in the absence of any prior belief tracking. This reference model \(\hat y = \tilde q + {\it{\epsilon }}\) just reports the aligned sample proportion \(\tilde q\) plus independent noise ε drawn from a truncated Gaussian distribution of standard deviation SD = 0.1. Binning for visualization and analyses To impose minimal constraints on data for visualization (see Figs. 5 – 7 ), we plotted the responses by grouping them into approximately equally filled bins across participants. The number of bins was manually chosen to achieve an appropriate trade-off between resolution and noise of the estimated bins values. Importantly, this only affects visualization. Unless stated otherwise, the underlying ungrouped data is used for testing. The conditional curves in Figs. 6b, c were determined by the cumulative quantiles Q of the sample size distribution (many ≥ Q (0.6), few < Q (0.4)) and (many > Q (0.5), few ≤ Q (0.5)) respectively. Reporting summary Further information on research design is available in the Nature Research Reporting Summary linked to this article. Data availability The data that support the findings of this study are available as Supplementary Data. Code availability The code for data analysis is available publicly at .
How do human beings perceive their environment and make their decisions? To interact successfully with their immediate environment, it is not enough for human beings to have basic evidence of the world around them. This information by itself is insufficient because it is inherently ambiguous and requires integration into a particular context to minimize the uncertainty of sensory perception. But, at the same time, the context itself is ambiguous. For example, am I in a safe or a dangerous place? A study published on 28 November in Nature Communications by Philipp Schustek, Alexandre Hyafil and Rubén Moreno-Bote, researchers at the Center for Brain and Cognition (CBC) of the Department of Information and Communication Technologies (DTIC) at UPF, suggests that the brain has a refined form of representation of uncertainty at several hierarchical levels, including context. Hence, the brain has a very detailed, almost mathematical probabilistic representation of all that surrounds us that we consider important. "The notions of probability, though intuitive, are very difficult to quantify and use rigorously. For example, my statistics students often fail to solve some of the problems I pose in class. In our study, we find that a complicated mathematical problem involving the use of the most sophisticated rules of probability can be solved intuitively if it is presented simply and in a natural context," asserts Rubén Moreno-Bote, coordinator of the Research Group on Theoretical and Cognitive Neuroscience at the CBC. Cognitive tasks of hierarchical integration Let us suppose that a city airport is hosting a football final and we look at a few passengers who are leaving a plane. If we note that four of them are fans of the red team and two of the blue team, we could conclude that more fans of the red team are attending the final than of the blue team. This inference, based on incomplete sensory evidence, could be improved with contextual information. 
For example, if worldwide there are more fans of the blue team than of the red team, then despite our initial observation we would revise our inference, counting how many supporters of each group are travelling on the plane, to confirm more accurately whether more fans of the red team have really come to the city than of the blue team. Or we could do the opposite: based on the context, infer whether the observed sample follows the more general trend or not. The researchers designed their experiments around hierarchical integration tasks based on this airplane scenario. "For the study, we told our participants that they are at an airport where planes can arrive carrying more of one type of person than of another, for example, more supporters of Barça than of Madrid. On seeing a handful of passengers leaving several aircraft, the participants can predict with mathematical precision the likelihood that the next plane will be carrying more passengers of a certain type," Moreno-Bote explains. "In general, this structure of tasks creates hierarchical dependencies among the hidden variables to be solved bottom up (deducing the context of previous observations) and then passing the message top down (deducing the current status combining current observations with the inferred context)," the authors explain. The results showed that the participants, based on their preliminary observations, built a probabilistic representation of the context. These results help us understand how people form mental representations of their surroundings and how they assign and perceive the uncertainty of this context.
10.1038/s41467-019-13472-z
Medicine
Study suggests that even brief exposure to air pollution has rapid impacts on the brain
Jodie R. Gawryluk et al, Brief diesel exhaust exposure acutely impairs functional brain connectivity in humans: a randomized controlled crossover study, Environmental Health (2023). DOI: 10.1186/s12940-023-00961-4 Journal information: Environmental Health
https://dx.doi.org/10.1186/s12940-023-00961-4
https://medicalxpress.com/news/2023-01-exposure-air-pollution-rapid-impacts.html
Abstract Background While it is known that exposure to traffic-related air pollution causes an enormous global toll on human health, neurobiological underpinnings therein remain elusive. The study addresses this gap in knowledge. Methods We performed the first controlled human exposure study using functional MRI with an efficient order-randomized double-blind crossover study of diesel exhaust (DE) and control (filtered air; FA) in 25 healthy adults (14 males, 11 females; 19–49 years old; no withdrawals). Analyses were carried out using a mixed effects model in FLAME. Z (Gaussianised T/F) statistic images were thresholded non-parametrically using clusters determined by Z > 2.3 and a (corrected) cluster significance threshold of p = 0.05. Results All 25 adults completed the exposures, and functional MRI data were collected. Exposure to DE yielded a decrease in functional connectivity compared to exposure to FA, shown through the comparison of DE and FA in post-exposure measurements of functional connectivity. Conclusion We observed short-term pollution-attributable decrements in default mode network functional connectivity. Decrements in brain connectivity cause many detrimental effects in the human body, so this finding should guide policy change in air pollution exposure regulation. Trial registration University of British Columbia Clinical Research Ethics Board (# H12-03025), Vancouver Coastal Health Ethics Board (# V12-03025), and Health Canada’s Research Ethics Board (# 2012-0040). Peer Review reports Background Exposure to traffic-related air pollution (TRAP) has long been associated with a range of adverse health effects, principally cardiovascular and respiratory [ 1 ]. This poses an enormous global burden, in terms of morbidity and lost productivity, as well as deaths estimated at approximately five million per year worldwide [ 2 ]. 
This profound toll is increasingly appreciated as including impacts on the central nervous system, but the data therein remain immature. Further, neurobiological underpinnings of these observations remain elusive, although some preliminary data suggest direct transmission of particles via the olfactory bulb and/or secondary transmission of inflammation likely generated more proximally [ 3 – 5 ]. Given the profound implications for public health across essentially all communities [ 6 ], data that add overall biologic plausibility as well as specific evidence of affected body systems are critically needed in order to support observational data [ 7 – 10 ]. Therefore, we performed the first controlled human exposure study to TRAP, using an established and safe paradigm of diluted diesel exhaust, examining changes in functional MRI in an efficient crossover study (namely, diesel exhaust [DE] or filtered air [FA] exposure following light exercise), allowing observation of short-term effects on brain connectivity in this context. Methods Participants A total of 100 MRI acquisitions were obtained in the current study. Twenty-five adult participants were tested immediately pre- and post-exposure to diesel exhaust (DE) and immediately pre- and post-exposure to filtered air (FA) for comparison. All participants were recruited through posters in the community, online notices, and e-mail notifications to the Vancouver Coastal Health Staff List-Serve. Inclusion criteria for participants were as follows: between 19 and 49 years of age, able to converse in English, healthy, non-smoking, not pregnant or breast-feeding, and without any contraindications for MRI. So long as inclusion criteria were met, the only exclusion criterion was claustrophobia. Procedure The study employed a controlled, double-blinded crossover design at the Air Pollution Exposure Lab. 
Each participant was tested in both the control condition (exposure to FA) and the experimental condition (exposure to DE) with four data acquisitions: (1) pre-FA; (2) post-FA; (3) pre-DE and (4) post-DE. The order of exposure to FA and DE was randomized and counterbalanced across participants, with a two-week delay between conditions. Both participants and individuals involved in collecting the MRI data were blinded to the condition, a technique that has been shown to be not only nominal but also effective [ 11 ]. FA or DE (nominal concentration: 300 µg of particulate matter of 2.5 microns or less [PM 2.5 ]/m 3 ) exposure occurred for 120 min [ 12 ]. During exposure, participants cycled on a stationary bicycle at light effort (that which yields ventilation at 15 L/min/m 2 ) for 15 min, during the first quarter of each hour, to maintain a representative level of activity. Image acquisition The following MRI protocol was employed pre-FA, post-FA, pre-DE and post-DE for each participant. All images were acquired at BC Children’s Hospital on a 3 Tesla GE Discovery MR750 MRI scanner. A whole-brain anatomical MRI scan was acquired with a T1-weighted FSPGR 3D sequence, with the following parameters: a repetition time (TR) of 8.148 ms, an echo time of 3.172 ms, voxel size of 1 × 1 × 1 mm, and a flip angle of 8°. A functional MRI (fMRI) scan was obtained during resting state (with eyes open or closed). The resting state fMRI scan was 6 min in duration and obtained with a T2*-weighted echo-planar imaging sequence with the following parameters: a repetition time of 2000 ms, an echo time of 19 ms, 180 volumes, 39 slices, and a voxel size of 3 × 3 × 3 mm. Additional task-based scans were obtained following resting state scans, but were ancillary to the hypotheses being tested here. Functional MRI data analyses All analysis steps were performed using tools within the Functional MRI of the Brain Software Library (FSL; Analysis Group, FMRIB, Oxford, UK, ) [ 13 ]. 
Non-brain tissue in the raw T1 images was removed using the automated Brain Extraction Tool, followed by manual verification and optimization for each subject. A seed-based approach was used to examine functional connectivity in the default mode network (DMN) [ 14 ]. The FEAT function was used to pre-process the data including skull removal (using the Brain Extraction Tool), motion correction (using MCFLIRT) [ 15 ], and highpass temporal filtering (using Gaussian-weighted least-squares straight line fitting with σ = 50.0 s). No smoothing was applied. Registration of the functional data to the high-resolution structural image was carried out using the boundary-based registration algorithm. Registration of the high-resolution structural images to standard space was carried out using FLIRT [ 15 , 16 ] and then further refined using FLIRT or FNIRT nonlinear registration (optimized for each individual) [ 17 , 18 ]. Next, the posterior cingulate cortex region of interest (ROI or seed) was registered to individual space. This ROI/seed was created based on ROIs from previous studies and included a 10 voxel spherical ROI centred on the following MNI coordinates: -2, -51, 27 [ 19 , 20 ]. The FEAT function was used to examine the default mode network posterior cingulate cortex ROI/seed and to regress out the lateral ventricle signal to correct for confounding noise. Specifically, the mean blood oxygen level-dependent signal time series was extracted from the posterior cingulate seed region and used as the model response function in a general linear model analysis. This allowed for examination of functional connectivity in the DMN through the detection of voxels with timeseries that correlate with that measured in the posterior cingulate seed. The time-series statistical analysis was carried out using FILM (FMRIB’s Improved Linear Model) with local autocorrelation correction and correction for motion parameters [ 21 ]. 
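The actual analysis relies on FSL's GLM machinery (FEAT/FILM) with nuisance regression, but the core idea of seed-based connectivity is simply correlating the seed region's mean BOLD time series with every other voxel's time series. A conceptual, pure-Python sketch (illustrative only, not the FSL pipeline):

```python
import math

def pearson_r(x, y):
    """Pearson correlation between two equal-length time series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def seed_connectivity_map(seed_ts, voxel_time_series):
    """Correlate the seed's mean BOLD time series with each voxel's
    time series; strongly correlated voxels are candidate members of
    the seed's functional network (here, the posterior cingulate's DMN)."""
    return [pearson_r(seed_ts, v) for v in voxel_time_series]
```

In the study's GLM formulation, the seed time series plays the role of the model response function, and voxels whose time series fit that regressor well are the ones reported as functionally connected.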
Higher-level analyses were carried out using FMRIB’s Local Analysis of Mixed Effects (FLAME), an approach for multisubject and multisession fMRI data analyses [ 22 , 23 ]. Specifically, this approach allowed for higher-level within-group comparisons of resting state functional connectivity in the DMN pre- versus post-DE exposure, pre- versus post-FA exposure, pre-FA versus pre-DE exposure, and post-FA versus post-DE exposure (all contrasts were examined bidirectionally). Z (Gaussianised T/F) statistic images were thresholded non-parametrically using clusters determined by Z > 2.3 and a (corrected) cluster significance threshold of p = 0.05 [ 22 , 24 ]. The pre-exposure MRI effectively serves as a baseline for a given individual and, given the crossover design of this study, each individual served as his/her own control, virtually eliminating the concern for confounding by personal characteristics [ 12 ]. Results In the present study, we focused on putative effects of TRAP on the default mode network (DMN), a set of inter-connected cortical brain regions in which activity is maximal at rest or during internal thought engagement. We focused on the DMN, given the preferential vulnerability of this network to aging [ 25 , 26 ], toxicity [ 27 ], and disease states [ 28 , 29 ]. The 25 participants were 11 female and 14 male, with a mean age of 27.4 (s.d. 5.5) years. Exposure conditions were achieved as follows: PM 2.5 (µg/m 3 ) for filtered air (FA): 2.4, for DE: 289.6; total volatile organic carbons (ppb) for FA: 124.5, for DE: 1425.0; carbon dioxide (ppm) for FA: 794.1, for DE: 2098.0; nitrogen dioxide (ppb) for FA: 51.9, for DE: 283.1. In the DE group, there were no significant differences in DMN functional connectivity for post- compared to pre-DE exposure (Fig. 1 A). 
By contrast, in the FA group, significantly greater DMN functional connectivity was observed post-exposure relative to pre-exposure, localized in the right middle temporal gyrus and occipital fusiform gyrus (Fig. 1 B; Table 1 ). Fig. 1 Results of group level comparisons ( p < 0.05, corrected) with significant regions in red. A represents no significant findings pre- versus post-diesel exhaust. B depicts regions with increased functional connectivity post-filtered air > pre-filtered air. C shows regions with increased functional connectivity pre-diesel exhaust > pre-filtered air. D depicts areas with greater functional connectivity post-filtered air > post-diesel exhaust. Table 1 Functional connectivity post- and pre-filtered air exposure Small albeit significant differences were observed when comparing the groups pre-exposure. Specifically, participants demonstrated greater functional connectivity, pre-DE compared to pre-FA, in the right occipital fusiform gyrus as well as the occipital pole (Fig. 1 C; Table 2 ). However, a more robust pattern of significant differences emerged when groups were compared post-exposure. Participants demonstrated greater functional connectivity in widespread regions of the default mode network following exposure to FA compared to following exposure to DE (Fig. 1 D; Table 3 ). Stated another way, exposure to DE yielded a decrease in functional connectivity compared to exposure to FA. Table 2 Functional connectivity pre-diesel exhaust exposure and pre-filtered air exposure Table 3 Functional connectivity post-diesel exhaust exposure and post-filtered air exposure Discussion Our study provides the first evidence in humans, from a controlled experiment, of altered brain network connectivity acutely induced by air pollution. 
The use of this model is important because it is not subject to potential confounding by variables correlated to exposure, a vexing concern common to observational studies. The precise functional impact of the changes seen on fMRI is unknown but is likely modest given the small magnitude of change, as expected with such limited exposure. That said, real-world exposures are often more persistent, particularly in regions of the world where levels such as those we used are not uncommon. It is hypothesized that chronic exposure is effectively a series of short-term exposures (our participants received only one) that ultimately leads to accumulated deficits through stress on allostatic load [ 30 , 31 ], but whether this applies to pollution in the neurocognitive realm, while hypothesized [ 32 ], requires further study. Nonetheless, our results are consistent with a study of chronic air pollution exposure in a German cohort [ 33 ]. In considering why DE attenuated functional connectivity in the DMN relative to FA, it is worth noting that previous studies have demonstrated increased functional connectivity following exercise, and that the results for the FA condition are consistent with these findings [ 34 , 35 ]. However, these results were only found when participants were exposed to the FA condition (whereas no significant change in functional connectivity was detected pre-post DE exposure). Therefore, our current results suggest that the brain-related benefits of light exercise (e.g., increased functional connectivity) are not obtained under the DE condition. Although previous observational investigations suggest exposure to air pollutants is associated with decreased functional connectivity [ 36 , 37 ], the current results are an extension of these findings, given that the DE condition elicited a relative decrease in functional connectivity compared to the FA condition. 
Demonstrating this with a directly controlled methodology adds considerably to the plausibility of these previous findings. More precise mechanisms have been elusive to date, though a link to neuroinflammation (difficult to measure directly in the intact human), potentially secondary to particle migration via the olfactory bulb as seen in animal models [ 38 ], seems likely. There are several ways in which decrements in brain connectivity, such as those we demonstrated, might manifest in daily life. Changes in brain connectivity have been associated with decreased working memory [ 39 ] and behavioural performance [ 40 ], and deterioration in productivity at work (which is also associated with air pollution) [ 41 ]. It is also possible that these decrements worsen further in the context of multifaceted exposures not studied here [ 42 ]. Conclusion The current study represents the first functional MRI investigation of controlled human exposure to diesel exhaust. The results of an order-randomized double-blind crossover study of diesel exhaust and control air in healthy adults revealed immediate pollution-attributable declines in default mode network functional connectivity. Change in policy surrounding air pollution exposure has long been driven by a combination of observational and experimental evidence, which together are most compelling, especially in the face of interests aggressively opposed to regulations that foster improved air quality. In spite of volumes of existing evidence regarding adverse effects of air pollution, history demonstrates that implicating additional organ systems can augment the already strong evidence and effectively apply further pressure for emissions control in areas lagging in that regard. These data may be informative in that regard, while deepening the base of direct evidence for neurocognitive effects of acute exposure to TRAP. 
As the changes in cognition we have demonstrated may put individuals at risk for impaired vocational performance, this is an important consideration for public health. Availability of data and materials The data generated during this study are available from the corresponding author (CC) on reasonable request. Change history 23 January 2023 A Correction to this paper has been published: Abbreviations DE: Diesel exhaust DMN: Default mode network FA: Filtered air MRI: Magnetic resonance imaging ROI: Region of interest TRAP: Traffic-related air pollution
A new study by researchers at the University of British Columbia and the University of Victoria has shown that common levels of traffic pollution can impair human brain function in only a matter of hours. The findings, published in the journal Environmental Health, show that just two hours of exposure to diesel exhaust causes a decrease in the brain's functional connectivity—a measure of how different regions of the brain interact and communicate with each other. The study provides the first evidence in humans, from a controlled experiment, of altered brain network connectivity induced by air pollution. "For many decades, scientists thought the brain may be protected from the harmful effects of air pollution," said senior study author Dr. Chris Carlsten, professor and head of respiratory medicine and the Canada Research Chair in occupational and environmental lung disease at UBC. "This study, which is the first of its kind in the world, provides fresh evidence supporting a connection between air pollution and cognition." For the study, the researchers briefly exposed 25 healthy adults to diesel exhaust and filtered air at different times in a laboratory setting. Brain activity was measured before and after each exposure using functional magnetic resonance imaging (fMRI). The researchers analyzed changes in the brain's default mode network (DMN), a set of inter-connected brain regions that play an important role in memory and internal thought. The fMRI revealed that participants had decreased functional connectivity in widespread regions of the DMN after exposure to diesel exhaust, compared to filtered air. "We know that altered functional connectivity in the DMN has been associated with reduced cognitive performance and symptoms of depression, so it's concerning to see traffic pollution interrupting these same networks," said Dr. Jodie Gawryluk, a psychology professor at the University of Victoria and the study's first author. 
"While more research is needed to fully understand the functional impacts of these changes, it's possible that they may impair people's thinking or ability to work." Taking steps to protect yourself Notably, the changes in the brain were temporary and participants' connectivity returned to normal after the exposure. Dr. Carlsten speculated that the effects could be long lasting where exposure is continuous. He said that people should be mindful of the air they're breathing and take appropriate steps to minimize their exposure to potentially harmful air pollutants like car exhaust. "People may want to think twice the next time they're stuck in traffic with the windows rolled down," said Dr. Carlsten. "It's important to ensure that your car's air filter is in good working order, and if you're walking or biking down a busy street, consider diverting to a less busy route." While the current study only looked at the cognitive impacts of traffic-derived pollution, Dr. Carlsten said that other products of combustion are likely a concern. "Air pollution is now recognized as the largest environmental threat to human health and we are increasingly seeing the impacts across all major organ systems," says Dr. Carlsten. "I expect we would see similar impacts on the brain from exposure to other air pollutants, like forest fire smoke. With the increasing incidence of neurocognitive disorders, it's an important consideration for public health officials and policymakers." The study was conducted at UBC's Air Pollution Exposure Laboratory, located at Vancouver General Hospital, which is equipped with a state-of-the-art exposure booth that can mimic what it is like to breathe a variety of air pollutants. In this study, which was carefully designed and approved for safety, the researchers used freshly-generated exhaust that was diluted and aged to reflect real-world conditions.
10.1186/s12940-023-00961-4
Medicine
Insights into a versatile molecular death switch
Melanie Fritsch et al, Caspase-8 is the molecular switch for apoptosis, necroptosis and pyroptosis, Nature (2019). DOI: 10.1038/s41586-019-1770-6 Journal information: Nature
http://dx.doi.org/10.1038/s41586-019-1770-6
https://medicalxpress.com/news/2019-11-insights-versatile-molecular-death.html
Abstract Caspase-8 is the initiator caspase of extrinsic apoptosis 1 , 2 and inhibits necroptosis mediated by RIPK3 and MLKL. Accordingly, caspase-8 deficiency in mice causes embryonic lethality 3 , which can be rescued by deletion of either Ripk3 or Mlkl 4 , 5 , 6 . Here we show that the expression of enzymatically inactive CASP8(C362S) causes embryonic lethality in mice by inducing necroptosis and pyroptosis. Similar to Casp8 −/− mice 3 , 7 , Casp8 C362S/C362S mouse embryos died after endothelial cell necroptosis leading to cardiovascular defects. MLKL deficiency rescued the cardiovascular phenotype but unexpectedly caused perinatal lethality in Casp8 C362S/C362S mice, indicating that CASP8(C362S) causes necroptosis-independent death at later stages of embryonic development. Specific loss of the catalytic activity of caspase-8 in intestinal epithelial cells induced intestinal inflammation similar to intestinal epithelial cell-specific Casp8 knockout mice 8 . Inhibition of necroptosis by additional deletion of Mlkl severely aggravated intestinal inflammation and caused premature lethality in Mlkl knockout mice with specific loss of caspase-8 catalytic activity in intestinal epithelial cells. Expression of CASP8(C362S) triggered the formation of ASC specks, activation of caspase-1 and secretion of IL-1β. Both embryonic lethality and premature death were completely rescued in Casp8 C362S/C362S Mlkl −/− Asc −/− or Casp8 C362S/C362S Mlkl −/− Casp1 −/− mice, indicating that the activation of the inflammasome promotes CASP8(C362S)-mediated tissue pathology when necroptosis is blocked. Therefore, caspase-8 represents the molecular switch that controls apoptosis, necroptosis and pyroptosis, and prevents tissue damage during embryonic development and adulthood. 
Main In addition to its role in apoptosis and necroptosis, recent in vitro studies have indicated that caspase-8 induces the production of cytokines by acting as a scaffolding protein and that this role is independent of its enzymatic activity 9 , 10 . The scaffold function of caspase-8 was also shown to be involved in the double-stranded RNA (dsRNA)-induced activation of the NLRP3 inflammasome in macrophages 11 . Additional studies indicate that the enzymatic activity of caspase-8 is required for the activation of NF-κB and secretion of cytokines in response to activated antigen receptors, Fc receptors or Toll-like receptors (TLRs), independently of cell death 12 , 13 . To investigate the physiological role of the enzymatic activity of caspase-8, we generated knock-in mice that expressed catalytically inactive caspase-8 by mutating Cys362 in the substrate binding pocket to serine (C362S) (Extended Data Fig. 1a ). Although heterozygous Casp8 C362S/WT mice were viable (Extended Data Fig. 1b ), Casp8 C362S/C362S embryos died around embryonic day 11.5 (E11.5). Hyperaemia in the abdominal areas was detected in Casp8 C362S/C362S embryos (Fig. 1a ), presumably owing to defects in vascular development, which resembles the phenotype of Casp8 −/− embryos 7 . In order to address the role of caspase-8 in vascular development, we used Tie2 cre (also known as Tek cre ) mice, in which efficient Cre-mediated recombination is induced in all endothelial cells and most haematopoietic cells 14 . Loss of the catalytic activity of caspase-8 in Casp8 C362S/fl Tie2 cre mice or specific knockout of caspase-8 in the endothelial cells of Casp8 fl/fl Tie2 cre mice caused embryonic lethality at the same developmental stage as Casp8 C362S/C362S embryos (Extended Data Fig. 1c, d ). Casp8 C362S/fl Tie2 cre and Casp8 fl/fl Tie2 cre embryos showed the same gross pathology associated with a decrease in yolk-sac vascularization (Fig. 1b and Extended Data Fig. 1d ). Fig. 
1: The enzymatic activity of caspase-8 is required to inhibit necroptosis. a , b , Representative images of Casp8 WT/WT ( n = 5), Casp8 C362S/WT ( n = 8), Casp8 C362S/C362S ( n = 3), Casp8 −/− ( n = 3), Casp8 C362S/fl ( n = 5) and Casp8 C362S/fl Tie2 cre ( n = 5) mouse embryos at E11.5 ( a , b , top). CD31 staining as endothelial marker of whole-mount yolk sacs ( b , bottom). Scale bars, 100 µm. c , Representative images of 9-day-old (P9) Casp8 C362S/fl ( n = 3) and Casp8 C362S/fl K14 cre ( n = 3) mice (top left) and skin sections stained with haematoxylin and eosin (H&E) (top right, bottom). Scale bars, 100 µm (magnification, top) and 300 µm (bottom). d , Ileal sections from 10-week-old Casp8 C362S/fl ( n = 3) and Casp8 C362S/fl Villin cre ( n = 4) mice stained with H&E (left), for lysozyme (Paneth cells, middle) and periodic acid–Schiff (PAS) (right). Scale bars, 100 µm. e , f , Count of Paneth cells ( e ) and dead IECs ( f ) (per crypt per mouse, n = 3). Dots, individual mice. Data are mean ± s.e.m. One-way analysis of variance (ANOVA) followed by Dunnett’s post-analysis. Specific loss of the catalytic activity of caspase-8 in epidermal keratinocytes or intestinal epithelial cells was achieved by crossing Casp8 C362S/fl mice with Krt14 cre (also known as K14 cre ) mice 15 or Vil1 cre (also known as Villin cre ) mice 16 , respectively. Loss of the catalytic activity of caspase-8 in these two cell types caused similar pathologies to caspase-8 deficiency in these tissues. Casp8 C362S/fl K14 cre and Casp8 fl/fl K14 cre mice developed inflammatory skin lesions with focal epidermal thickening and scaling that appeared 5–7 days after birth, and these lesions gradually increased in size and distribution and covered large cutaneous areas (Fig. 1c and Extended Data Fig. 1e ). 
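The per-crypt counts in Fig. 1e, f were compared by one-way ANOVA followed by Dunnett's post-test against the control genotype. A minimal sketch of the omnibus ANOVA step using SciPy, with hypothetical counts (illustrative numbers only, not data from the study):

```python
from scipy import stats

# Hypothetical per-crypt Paneth-cell counts for three genotypes
# (illustrative numbers only, not data from the study)
control  = [4, 5, 4, 6, 5, 4]   # e.g. a control genotype
mutant_a = [1, 2, 1, 0, 2, 1]   # e.g. a Paneth-cell-depleted genotype
mutant_b = [4, 4, 5, 5, 4, 6]   # e.g. a rescued genotype

# Omnibus one-way ANOVA across the three groups
f_stat, p_value = stats.f_oneway(control, mutant_a, mutant_b)

# A significant omnibus result would be followed by Dunnett's comparisons
# against the control group (scipy.stats.dunnett, available in SciPy >= 1.11).
```

The omnibus test asks whether any group mean differs; Dunnett's step then localizes which mutant genotypes differ from the control while controlling the family-wise error rate.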
Histological skin analyses revealed epidermal hyperplasia and immune-cell infiltration into the dermis of Casp8 C362S/fl K14 cre and Casp8 fl/fl K14 cre mice as has previously been shown by transgenic overexpression of CASP8(C362S) in epidermal keratinocytes 17 . Mice with intestinal epithelial cell (IEC)-specific loss of caspase-8 catalytic activity ( Casp8 C362S/fl Villin cre ) developed ileitis at 8–10 weeks of age, associated with increased numbers of dying IECs, an altered distribution of goblet cells and loss of Paneth cells; a phenotype that is similar to that found in Casp8 fl/fl Villin cre mice 8 (Fig. 1d–f and Extended Data Fig. 2a, b ). Together, these results demonstrate that the loss of the catalytic activity of caspase-8 causes similar pathologies to those found in Casp8 −/− mice during embryonic development and adult tissue homeostasis. To further characterize the cells that express CASP8(C362S), we isolated endothelial cells from Casp8 C362S/fl and Casp8 WT/fl mice and induced Casp8 gene deletion in vitro using a cell-permeable active Cre protein (HTNCre) 18 (Extended Data Fig. 3a ). Endothelial cells that expressed CASP8(C362S) were unable to activate caspase-3 in response to TNF, but were sensitized to TNF-induced necroptosis, as the cytotoxic effect of TNF was associated with MLKL phosphorylation and was abolished by co-treatment with necrostatin-1 (Nec-1) (Extended Data Fig. 3b, c ). Loss of RIPK3 or MLKL was previously shown to inhibit necroptosis and to prevent the embryonic lethality caused by caspase-8 knockout 4 , 5 , 6 . Similar to Casp8 −/− Ripk3 −/− mice, Casp8 C362S/C362S Ripk3 −/− mice survived weaning but showed markedly stunted growth and suffered from anaemia with distinct haematological abnormalities that led to splenomegaly (Fig. 2a, b and Extended Data Fig. 3d–g ). MLKL deficiency 19 did not rescue the lethality caused by the CASP8(C362S) mutation, leading to perinatal death of Casp8 C362S/C362S Mlkl −/− mice (Fig. 
2c and Extended Data Fig. 3h ). Although Casp8 C362S/C362S Mlkl −/− embryos were present at E13.5 in expected numbers and without any gross phenotypic alterations (Extended Data Fig. 4a ), deletion of MLKL did not enable Casp8 C362S/C362S animals to reach weaning age. Thus, inhibition of necroptosis alone was not sufficient for Casp8 C362S/C362S animals to reach adulthood, which suggests that the loss of the enzymatic activity of caspase-8 compromises perinatal development through additional, necroptosis-independent functions. Fig. 2: CASP8(C362S) induces necroptosis-independent tissue destruction. a , Representative images of 8-week-old mice. b , Body weight of mice of the indicated ages. Dots, individual mice. Data are mean ± s.e.m. One-way ANOVA followed by Sidak’s post-analysis compared to the corresponding Casp8 WT/WT values. c , Representative images of 1-day-old Mlkl −/− ( n = 2), Casp8 C362S/WT Mlkl −/− ( n = 4) and Casp8 C362S/C362S Mlkl −/− ( n = 3) mouse neonates. d , Representative images of an embryo ( n = 3) at E11.5 (top) and CD31 staining of a whole-mount yolk sac (bottom). Scale bar, 100 µm. e , Representative images of 9-day-old mice ( n = 4) (top left) and skin sections stained with H&E (top right, bottom). Scale bars, 100 µm (magnification, top) and 300 µm (bottom). f , Body weight of 5-week-old mice (top). Dots, individual mice. Data are mean ± s.e.m. One-way ANOVA followed by Sidak’s post-analysis. Kaplan–Meier survival curves of mice as indicated (bottom). P values by two-sided log-rank test. To dissect the necroptosis-independent role of CASP8(C362S) in different tissues, we assessed how MLKL deficiency affects the phenotypes that develop after specific expression of CASP8(C362S) in the endothelium (and haematopoietic cells), the skin or the intestinal epithelium. 
Casp8 C362S/fl Tie2 cre Mlkl −/− mice survived weaning without gross phenotypic alterations yet developed splenomegaly at 8 weeks of age, similar to Casp8 −/− Mlkl −/− mice 6 (Fig. 2d and Extended Data Fig. 4b ). These observations indicate that the enzymatic activity of caspase-8 is required to inhibit necroptosis at early stages of embryonic development (E9–E12) that involve the formation of the cardiovascular system and placenta. Inflammatory skin lesions induced by CASP8(C362S) expression were not detected in MLKL-deficient Casp8 C362S/fl K14 cre Mlkl −/− mice, indicating that the prevention of necroptosis by the intact enzymatic activity of caspase-8 is crucial for skin homeostasis (Fig. 2e and Extended Data Fig. 4c ). By contrast, Casp8 C362S/fl Villin cre Mlkl −/− mice showed reduced body weight and died at 4–8 weeks of age (Fig. 2f and Extended Data Fig. 4d ). Histological analyses revealed intestinal inflammation in Casp8 C362S/fl Villin cre Mlkl −/− mice (at 5–7 weeks of age) that was characterized by pronounced villus atrophy, reduced numbers of Paneth cells, an altered distribution of goblet cells, the elongation of the crypts and hyperplasia in the ileum (Fig. 3a and Extended Data Fig. 5a ). Therefore, MLKL deficiency not only failed to rescue the ileitis that developed in Casp8 C362S/fl Villin cre mice but strongly exacerbated the phenotype, causing premature death in these mice. These findings reveal that catalytically inactive caspase-8 triggers intestinal pathology in a necroptosis-independent way, which led us to search for caspase-8 catalytic activity-dependent mechanisms that prevent intestinal inflammation and that are distinct from its role in inhibiting necroptosis. Fig. 3: CASP8(C362S) activates the ASC inflammasome in IECs. 
a , Ileal sections of Casp8 C362S/fl ( n = 4), Casp8 C362S/fl Villin cre ( n = 5), Casp8 C362S/fl Mlkl −/− ( n = 4) and Casp8 C362S/fl Villin cre Mlkl −/− ( n = 5) 5-week-old mice with H&E, lysozyme, PAS, ASC and caspase-8 staining. Scale bars, 100 µm (H&E and lysozyme) and 50 µm (PAS, ASC and caspase-8). Arrows, ASC or caspase-8 aggregates. b , c , Western blot analysis of ileal lysates from two representative mice ( b ; mice are shown in a ) or from one representative P1 mouse neonate ( c ; mice are shown in Fig. 2c ) detecting cleaved caspase-1 and ASC. Lanes, individual mice. d , Western blot analysis of CASP8 −/− HEK293T cells (clone 1, Extended Data Fig. 5e ) transfected with DsRed-tagged human ASC (DsRed–ASC) either with human wild-type caspase-8 or CASP8(C360S) and treated with Nec-1 and/or IDN-6556 for 14 h, as indicated. Results representative of two individual experiments. e , ASC-speck-positive BMDMs ( n = 100, one representative experiment) (top) and IL-1β measurement in supernatants of BMDMs (bottom) after 24 h HTNCre treatment in biologically independent replicates ( n = 6, representative of two individual experiments). Dots represent individual biological replicates. Data are mean ± s.e.m. One-way ANOVA followed by Sidak’s post-analysis compared to the corresponding values without HTNCre. To further characterize the mechanisms that underlie the necroptosis-independent intestinal pathology found in Casp8 C362S/fl Villin cre Mlkl −/− mice, we examined the expression of a panel of secreted and soluble cytokines in ileal protein extracts (Extended Data Fig. 5b ). This analysis revealed the pronounced secretion of IL-1β and TNF in the ileum of Casp8 C362S/fl Villin cre Mlkl −/− mice. In particular, increased levels of IL-1β were associated with proteolytic activation of the inflammatory caspase, caspase-1, which was detected in the soluble fraction of ileal lysates (Fig. 3b and Extended Data Fig. 5c ). 
Caspase-1 is activated by canonical inflammasomes that involve members of the Nod-like receptor protein family, such as NLRP3, and the inflammasome adaptor ASC, which aggregates to form macromolecular ASC specks and serves as an activation platform for caspase-1 20 . Expression of CASP8(C362S) resulted in the aggregation of ASC in the insoluble fraction of ileal lysates from Casp8 C362S/fl Villin cre mice, and the aggregation was further exacerbated upon inhibition of necroptosis in Casp8 C362S/fl Villin cre Mlkl −/− mice (Fig. 3b ). Furthermore, IL-1β secretion, caspase-1 activation and ASC aggregation were also detected in intestinal lysates that were derived from Casp8 C362S/C362S Mlkl −/− neonates (Fig. 3c and Extended Data Fig. 5b, d ). Immunostaining revealed the formation of ASC specks in IECs that express CASP8(C362S), in particular, when necroptosis was blocked in Casp8 C362S/fl Villin cre Mlkl −/− mice (Fig. 3a ). These results suggest that expression of catalytically inactive caspase-8 causes activation of the inflammasome in both neonatal and adult mouse intestines. To mechanistically characterize the ability of catalytically inactive caspase-8 to induce the formation of ASC specks, we established two independent human cell lines that lacked caspase-8 expression (Extended Data Fig. 5e ) and ectopically expressed human caspase-8 and human ASC after transient transfection. ASC aggregates accumulated in the Triton X-100-insoluble cellular fractions only upon ectopic expression of enzymatically inactive human CASP8(C360S), not wild-type caspase-8 (Fig. 3d and Extended Data Fig. 5f ). Wild-type caspase-8 only induced ASC aggregation when transfected cells were additionally treated with the pan-caspase inhibitor IDN-6556 (emricasan). 
Immunofluorescence staining of caspase-8 in transfected cells revealed that most of the CASP8(C360S) and wild-type caspase-8 protein co-localized with cytoplasmic ASC aggregates in IDN-6556-treated cells (Extended Data Fig. 6a ). Notably, CASP8(C360S) was detected in the soluble and insoluble cellular fractions independently of IDN-6556 treatment, whereas full-length wild-type caspase-8 was expressed at low levels in untreated cells, the expression of which markedly increased in the presence of IDN-6556 (Fig. 3d and Extended Data Fig. 4f ). Immunoprecipitation of the overexpressed caspase-8 variants in total cell lysates from CASP8 −/− HEK293T cells indicated that the human caspase-8 mutant CASP8(C360S) interacts with human ASC (Extended Data Fig. 6b ). Notably, overexpression of wild-type caspase-8 in cells resulted in its autocleavage and a reduction in the protein level of caspase-8. Comparable amounts of caspase-8 were detected only when HEK293T cells were treated with IDN-6556; the stabilized protein, in turn, induced ASC aggregation and co-immunoprecipitated with ASC. Thus, overexpression studies suggest that the expression of inactive caspase-8 alone is sufficient for the formation of ASC specks. In addition to transfection studies, isolated bone-marrow-derived macrophages (BMDMs) from Casp8 C362S/fl and Casp8 C362S/fl Mlkl −/− mice were exposed to HTNCre to induce the deletion of the Casp8 floxed alleles in vitro. The deletion of the floxed Casp8 allele in Casp8 C362S/fl and Casp8 C362S/fl Mlkl −/− macrophages ( Casp8 C362S/− and Casp8 C362S/− Mlkl −/− ) resulted in the formation of ASC specks and the increased secretion of IL-1β (Fig. 3e ). By contrast, ablation of caspase-8 in BMDMs derived from Casp8 fl/fl mice did not result in the release of IL-1β or the formation of ASC specks (Extended Data Fig. 6c ). 
Together, these data suggest that the expression of catalytically inactive caspase-8 is both required and sufficient to induce inflammasome formation when the active wild-type Casp8 gene is ablated. Notably, inhibition of caspase activity in BMDMs using IDN-6556 did not recapitulate the data obtained by the expression of CASP8(C362S). Consistent with previous findings 11 , IDN-6556 was only able to induce the release of IL-1β and formation of ASC specks when BMDMs were co-treated with lipopolysaccharide (LPS) (Extended Data Fig. 6d ). In contrast to CASP8(C362S) expression, the release of IL-1β induced by IDN-6556 and LPS treatment was completely abolished in cells that lacked MLKL. To assess whether caspase-1 or ASC contributes to the pathology that caused the lethality found in Casp8 C362S/C362S Mlkl −/− mice, we bred these mice into caspase-1 knockout 21 or ASC knockout 22 genetic backgrounds (Extended Data Fig. 7a ). Indeed, Casp8 C362S/C362S Mlkl −/− Casp1 −/− and Casp8 C362S/C362S Mlkl −/− Asc −/− mice survived beyond parturition (beyond 20 weeks of age) and developed normally without any major macroscopic alterations (Fig. 4a ). Both genotypes led to abnormal haematopoiesis, which was characterized by a strong increase in spleen size and weight (Fig. 4a and Extended Data Fig. 7b ), as observed in Casp8 −/− Mlkl −/− mice 6 . Histological analyses did not reveal overwhelming tissue damage or loss of Paneth cells in the intestines of Casp8 C362S/C362S Mlkl −/− Asc −/− or Casp8 C362S/C362S Mlkl −/− Casp1 −/− mice (Fig. 4b, c and Extended Data Fig. 5a ), suggesting that the lethality of mice that express catalytically inactive caspase-8 is mainly caused by caspase-1 and ASC. Notably, we still detected the formation of ASC specks and aggregation of caspase-8 in intestinal tissues of Casp8 C362S/C362S Mlkl −/− Casp1 −/− mice (Fig. 4b, d and Extended Data Fig. 7d ). Fig. 4: ASC or caspase-1 deficiency rescues embryonic lethality of mice expressing CASP8(C362S). 
a , Body and spleen weight of 8-week-old mice. Dots, individual mice. Data are mean ± s.e.m. One-way ANOVA followed by Sidak’s post-analysis compared to the corresponding Casp8 WT/WT values. b , Representative ileal sections from 5-week-old Casp8 C362S/C362S Mlkl −/− Asc −/− ( n = 4) and Casp8 C362S/C362S Mlkl −/− Casp1 −/− ( n = 3) mice with H&E, lysozyme, PAS, ASC and caspase-8 staining. Scale bars, 100 µm (H&E and lysozyme) and 50 µm (PAS, ASC and caspase-8). Arrows, ASC or caspase-8 aggregates. c , IL-1β enzyme-linked immunosorbent assay (ELISA) in ileal lysates. Dots, individual mice. Data are mean ± s.e.m. One-way ANOVA followed by Tukey’s post-analysis. d , Western blot analysis of ileal lysates from two representative mice (shown in b ) detecting cleaved caspase-1, ASC and caspase-8. Lanes, individual mice. Our data collectively demonstrate that catalytically inactive caspase-8 serves as a nucleation signal for the formation of ASC specks and activation of caspase-1, which ultimately leads to the premature death of mice when necroptosis is blocked. These results reveal a previously unknown and unexpected role for the enzymatic activity and scaffold function of caspase-8, which involves the activation of the inflammasome and induction of pyroptosis under circumstances in which apoptosis and necroptosis are compromised. Notably, we found that the inhibition of necroptosis alone was sufficient to prevent embryonic lethality when CASP8(C362S) was specifically expressed in the endothelial or skin compartment (Fig. 2 ). In contrast to IECs or macrophages, endothelial cells and skin epithelium that expressed CASP8(C362S) did not undergo pyroptosis (Fig. 2 and Extended Data Fig. 7c ), indicating that the capability of the caspase-8 scaffold to induce pyroptosis is restricted to specific cell types—such as myeloid cells and IECs—that need to respond regularly to invading microbial pathogens. 
Caspase-8 has frequently been reported to interact with the caspase-1–ASC adaptor complex and to promote ASC self-assembly, particularly during bacterial infection 23 , 24 , 25 , 26 , 27 . Viruses are heavily reliant on the fate of infected cells and have evolved to encode suppressors of apoptosis that inhibit caspase-8 and necroptosis suppressors that inhibit RHIM-containing proteins, such as RIPK1 and RIPK3 28 . We therefore hypothesize that the abundance of such viral inhibitors may have driven the counteradaptation of pyroptosis as a host defence. Thus, the caspase-8-mediated switch between different modes of cell death adds a critical layer to the plasticity of specific pathogen-tailored immune responses. Questions that remain to be addressed include how the different modes of inflammatory and/or lytic cell death after inhibition of the enzymatic activity of caspase-8 influence anti-microbial immunity and coordinate adaptive immune responses. Methods Mice Casp8 C362S mice were generated by pronuclear injection of C57BL/6N zygotes with 20 ng µl −1 Cas9 mRNA (TriLink Biotechnologies), 10 ng µl −1 sgRNA and 20 ng µl −1 template DNA (Eurofins). The sequences were as follows: single-guide RNA, 5′-CACCGTTTCATTCAGGCTTGCCA-3′; template DNA, 5′-CACTGGTTCAAAGTGCCCTTCCCTGTCTGGGAAACCCAAGATCTTTTTCATTCAGGCTAGCCAAGGAAGTAACTTCCAGAAAGGAGTGCCTGATGAGGCAGGCTTCGAGCAACAGAAC-3′. Ear cuts were genotyped by Sanger sequencing (Microsynth SEQLAB) with PCR primers (5′-TGCAAATGAAATCCACGAGA-3′ and 5′-CCAGGTTCCATTCACAGGAT-3′). Founders carrying the intended C362S mutation were backcrossed to a C57BL/6N genetic background for five generations. All mouse studies were performed after approval by local government authorities (LANUV, NRW, Germany) in accordance with the German animal protection law. Animals were housed in the animal care facility of the University of Cologne under standard pathogen-free conditions with a 12-h light/dark schedule and provided with food and water ad libitum. 
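The genotyping described above can be sketched as a motif check on a Sanger read: the repair template replaces the wild-type Cys362 codon (TGC) with a serine codon (AGC) within the sgRNA target site. In the sketch below, the knock-in motif is taken from the template sequence in the Methods, while the wild-type motif is inferred from the sgRNA target and is an assumption; `genotype_read` is a hypothetical helper, not part of the study's pipeline:

```python
# Motifs flanking codon 362 of murine Casp8. The knock-in motif comes from
# the repair-template sequence in the Methods; the wild-type motif is
# inferred from the sgRNA target site (an assumption).
WT_MOTIF = "TTTCATTCAGGCTTGCCA"       # wild-type allele, Cys362 codon TGC
KNOCKIN_MOTIF = "TTTCATTCAGGCTAGCCA"  # C362S allele, Ser codon AGC

def genotype_read(read: str) -> str:
    """Classify a single Sanger read by which allele motif it contains."""
    read = read.upper()
    has_wt = WT_MOTIF in read
    has_ki = KNOCKIN_MOTIF in read
    if has_wt and has_ki:
        return "mixed"
    if has_ki:
        return "C362S"
    if has_wt:
        return "WT"
    return "undetermined"
```

A caveat on the design: a heterozygous animal yields overlapping chromatogram peaks rather than two clean motifs in one read, so real genotyping inspects the trace itself; this sketch only illustrates the motif logic.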
Calculations to determine sample size, randomization and blinding were not performed. Mice were grouped according to their phenotype in mixed sexes. Embryological studies For timed mating experiments a male mouse was paired with a single female mouse. Embryonic day count started at E0.5 with the day at which a positive plug was found. For staining of vascularization of the yolk sac, the yolk sac was fixed in 4% PFA in PBS for 1 h at 4 °C, washed in PBS with 0.5% Tween-20 followed by blocking with blocking buffer (PBS with 0.5% Tween-20, 0.2% BSA and 2% normal goat serum) for 2 h. Yolk sacs were then incubated with Alexa Fluor 647 anti-mouse CD31 antibody (BioLegend) overnight at 4 °C, washed with PBS and mounted with Mowiol. Imaging was conducted on a motorized inverted Olympus IX81 microscope (Cell Imaging Software) 29 . Immunohistochemistry Intestinal tissues and skin were fixed in Roti-HistoFix 4% (Carl Roth), embedded in paraffin and cut in 5-µm sections 30 . After rehydration and heat-induced antigen retrieval in 10 mM citrate buffer or by proteinase K treatment, sections were stained with antibodies against ASC (Santa Cruz), caspase-8 (Enzo) and lysozyme (DAKO). As secondary antibodies, biotinylated goat anti-rabbit IgG (Perkin Elmer) and biotin-SP-AffiniPure goat anti-rat IgG (Jackson Immunoresearch) were used. Staining was visualized using the ABC Kit Vectastain Elite (Vector Laboratories) and DAB staining kit (DAKO) and counterstained with haematoxylin (Carl Roth). Incubation time of the substrate for DAB staining was equal for all tissue sections. Tissue sections were stained with H&E. Goblet cells were stained using PAS staining (Sigma Aldrich) according to the manufacturer’s instructions. Stained sections were scanned with an SCN4000 Slide Scanner (Leica) and analysed with the imaging software Aperio ImageScope v.12.2.2.5015 (Leica). For Paneth cell counts, we analysed 30 crypts per mouse and for dead-cell counts we analysed 15 villi per mouse. 
Immunofluorescence microscopy Immunofluorescence microscopy analysis was carried out as described previously 30 . In brief, cells were seeded on glass coverslips, treated as indicated and fixed with 3% PFA in PBS for 20 min at room temperature. Subsequently, cells were washed twice with PBS and incubated with blocking buffer (0.1% saponin (Carl Roth), 3% BSA (Carl Roth) in PBS) for 30 min at room temperature. Coverslips were incubated with diluted primary anti-ASC antibody (Santa Cruz), or human-specific anti-caspase-8 antibody (Cell Signaling) in blocking buffer in a humid chamber overnight at 4 °C. After incubation, coverslips were washed with washing buffer (0.1% saponin in PBS) three times and incubated with secondary antibody goat anti-rabbit Alexa Fluor 568 (Thermo Fisher Scientific) or goat anti-mouse Alexa Fluor 488 (Thermo Fisher Scientific) for 1 h at room temperature. Subsequently, cells were stained with 300 nM DAPI (Molecular Probes) for 10 min and washed three times, before being embedded with Mowiol overnight. Imaging was performed on an UltraView Vox Spinning Disk confocal microscope (Perkin Elmer and Nikon) and analysed using Volocity v.5.4.2 (PerkinElmer). Endothelial cell culture Mouse endothelial cells were isolated from the lungs of Casp8 WT/fl and Casp8 C362S/fl mice. Organs were resected and briefly washed in PBS, before being minced and enzymatically (0.5% collagenase) digested. The cell solution was then squeezed through a cell strainer (70 µm) and processed for magnetic bead separation (mouse CD31, Miltenyi Biotec) according to the manufacturer’s protocol. 
CD31 + endothelial cells were seeded on gelatine-coated wells and cultured in a 1:1 mixture of EGM2 (PromoCell) and fully supplemented Dulbecco’s modified Eagle medium (DMEM) (Merck) (containing 20% FCS, 4 g l −1 glucose, 2 mM glutamine, 1% penicillin–streptomycin (100 U ml −1 penicillin, 100 µg ml −1 streptomycin), sodium pyruvate 1% (1 mM), HEPES (20 mM) and 1% non-essential amino acids). After the first passage, cells were resorted using the same magnetic beads. Isolated primary endothelial cells were analysed using the anti-CD31 antibody to distinguish endothelial cells from other cell types and routinely tested negative for mycoplasma contamination by PCR. DNA fragments of Casp8 WT/fl and Casp8 C362S/fl alleles were excised by treatment with recombinant HTNCre (5 µM) purified from Escherichia coli for 24 h in a mixture of DMEM:PBS 1:1 twice. Complete knockout was confirmed by PCR and western blot analysis. Casp8 −/− cells were used as controls. Endothelial cells were treated with mouse TNF (R&D), cycloheximide (Sigma) and Nec-1 (Enzo) as indicated. Purification of HTNCre For site-specific recombination of floxed Casp8 alleles, HTNCre from transformed E. coli was purified. In brief, bacteria were grown from a diluted overnight culture until an optical density at 600 nm of 0.6–1.1 was reached. Expression of the HTNCre construct was induced by the addition of IPTG (1 mM). The culture was incubated for an additional 4 h at 37 °C and then centrifuged for 25 min at 8,500 rpm at 4 °C. The pellet (1 g) was resuspended in 10 ml PBS, 1 mg lysozyme per ml suspension, 1:1,000 benzonase (Novagen) and protease inhibitor (Roche). The lysate was homogenized through a high-pressure homogenizer and HTNCre was purified on a HisTrap FF crude column (GE Healthcare). Macrophage differentiation Mouse BMDMs were differentiated from the bone marrow of mice with the indicated genotypes.
BMDMs were generated by culturing mouse bone marrow in RPMI supplemented with 15% L929-conditioned medium, 10% FCS (Sigma), 10 mM HEPES (Biochrom), 1 mM sodium pyruvate (Biochrom), 2 mM l -glutamine (Biochrom), 100 U ml −1 penicillin, 100 µg ml −1 streptomycin for 7 days. DNA fragments of Casp8 WT/fl , Casp8 C362S/fl and Casp8 fl/fl alleles were excised by treatment with recombinant HTNCre (2.5 µM) (Excellgene, purity of >98%, endotoxin levels of <0.1 endotoxin units µg −1 ) for 24 h in a mixture of RPMI:PBS 1:1 (v/v) or LPS (200 ng ml −1 ) (Invivogen) and IDN-6556 (20 µM). Viability assay Viability was detected using the neutral red assay. In brief, 20,000 endothelial cells per well were seeded on gelatine-coated 96-well plates and cultured in the appropriate medium overnight. Cells were then exposed to the specific conditions and neutral red assay was performed 6 h after treatment. Cell death assay LDH release was measured to analyse cell death using the cytotoxicity detection kit (Roche) according to the manufacturer’s instructions. Tissue homogenates Tissue sections of the small intestine from mice with the indicated genotypes were homogenized in RIPA buffer containing protease and phosphatase inhibitors (Roche) (20 ml buffer per 1 g tissue) using gentleMACS C tubes (Miltenyi Biotec). Supernatants of tissue homogenates were used for the determination of cytokine levels. Blood parameter analysis After cervical dislocation of mice, blood was collected from the heart with an EDTA-coated syringe and immediately diluted with Cellpack (Sysmex) in a ratio of 1:5. Blood was analysed at the University Hospital Cologne, Institute for Clinical Chemistry. Cytokine ELISA Cytokine levels were determined using IL-1β ELISA (R&D Systems) according to the manufacturer’s instructions. Cytokine array The LUNARIS Mouse Cytokine Kit (AYOXXA Biosystems) was used to determine cytokine levels in tissue homogenates according to the manufacturer’s instructions. 
Cultivation and transfection of human cells HCT-116 and HEK293T cells were purchased from ATCC. All cell lines routinely tested negative for Mycoplasma contamination by PCR. HCT-116 cells were cultured in McCoy’s 5A modified medium (Merck) supplemented with 10% heat-inactivated FCS (Biowest) and transfected with Lipofectamine 2000 (Invitrogen). HEK293T cells were cultured in DMEM (Merck) supplemented with 10% heat-inactivated FCS (Biowest) and were transfected with polyethylenimine (Polysciences Europe GmbH). HEK293T and HCT-116 cells were transfected for 14 h with the respective constructs and subsequently treated with IDN-6556 (MedChemExpress) and Nec-1 (Enzo) as indicated. The coding sequence of CASP8, with or without the site-directed CASP8 C360S mutation (mutagenesis primers: forward, 5′-ATTCAGGCTTCTCAGGGGGAT-3′; reverse, 5′-ATCCCCCTGAGAAGCCTGAAT-3′) (Eurofins), was cloned into pcDNA3.1+. The coding sequence of human ASC was cloned into pDsRed2. Generation of CRISPR–Cas9 Casp8 knockout cells Oligonucleotide sgRNAs (1, GCTCTTCCGAATTAATAGAC; 2, CTACCTAAACACTAGAAAGG) (Eurofins) targeting the Casp8 locus were cloned into the pSpCas9(BB)-2A-GFP (PX458) vector, which was a gift from F. Zhang (Addgene plasmid 48138), and transfected into HCT-116 and HEK293T cells. After transfection for 24 h, cells were plated onto 96-well plates (1 cell per well) and caspase-8 deficiency was checked after 3–4 weeks in single-cell clones by western blot analysis. Immunoprecipitation HEK293T cells were transfected for 14 h with the respective constructs and subsequently treated with IDN-6556 as indicated. Cells were trypsinized, washed twice with chilled PBS and centrifuged at 700 g for 3 min at 4 °C. The cell pellet was resuspended in RIPA buffer (1% Triton X-100, 150 mM NaCl, 50 mM Tris, 0.1% SDS, 0.5% deoxycholate, 10% glycerol and 1 mM EGTA), incubated for 30 min on ice and centrifuged for 20 min at 20,800 g at 4 °C.
Before antibody incubation, immunoprecipitation incubation buffer (20 mM Tris, pH 8.0, 137 mM NaCl, 1% NP-40 and 2 mM EDTA) was added in a ratio of 1× RIPA lysate:2× incubation buffer. Lysates were incubated with an anti-caspase-8 human-specific antibody (Cell Signaling) for 1 h at 4 °C before addition of µMACS Protein G MicroBeads (Miltenyi Biotec). Normal mouse IgG (Santa Cruz) served as a control. Immunoprecipitation was performed according to the manufacturer’s instructions. Western blot analysis Cells and tissue were lysed in 20 mM Tris-HCl pH 7.5, 135 mM NaCl, 1.5 mM MgCl 2 , 1 mM EGTA, 1% Triton X-100, 10% glycerol, protease inhibitor (Roche) and phosphatase inhibitor (Roche). After 20 min on ice, cells were centrifuged at 20,000 g for 20 min at 4 °C, the soluble fraction was collected and the insoluble fraction was mechanically disrupted in 6 M urea, 3% SDS, 10% glycerine and 50 mM Tris pH 6.8. Cell lysates of endothelial, HCT-116 and HEK293T cells were prepared in CHAPS lysis buffer (10 mM HEPES pH 7.4, 150 mM NaCl, 1% CHAPS, protease inhibitor (Roche) and phosphatase inhibitor (Roche)) 31 . Protein concentrations of cell lysates and tissue lysates were determined using a Pierce BCA Protein Assay Kit (ThermoFisher Scientific) or DC protein assay (Bio-Rad) according to the manufacturer’s instructions. Proteins were separated by SDS–PAGE and transferred to a nitrocellulose or PVDF membrane.
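The CASP8(C360S) mutagenesis primer pair listed in the cloning section above can be sanity-checked programmatically: in site-directed mutagenesis, the reverse primer is the reverse complement of the forward primer. The helper below is a generic illustration, not part of the authors' workflow.

```python
# Verify that the reverse mutagenesis primer is the reverse complement of the
# forward primer (both sequences are taken from the Methods above).
COMPLEMENT = str.maketrans("ACGT", "TGCA")

def reverse_complement(seq: str) -> str:
    """Return the reverse complement of a DNA sequence."""
    return seq.translate(COMPLEMENT)[::-1]

forward = "ATTCAGGCTTCTCAGGGGGAT"   # C360S forward primer
reverse = "ATCCCCCTGAGAAGCCTGAAT"   # C360S reverse primer
print(reverse_complement(forward) == reverse)  # True: a valid mutagenesis pair
```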
Proteins were stained with antibodies against ASC (Santa Cruz), caspase-1 p10 (Santa Cruz), caspase-1 p20 (Adipogen), caspase-1 mouse-specific (Biolegend), caspase-8 (Enzo), caspase-8 mouse-specific (Cell Signaling), caspase-8 human-specific (Cell Signaling), cleaved caspase-8 Asp387 (Cell Signaling), caspase-3 (Cell Signaling), cleaved caspase-3 (Cell Signaling), phosphorylated MLKL(S345) (Abcam), DsRed (BD Biosciences), human-specific caspase-7 (Cell Signaling), human-specific caspase-9 (Cell Signaling), FADD (BD Biosciences), RIPK1 (BD Biosciences), RIPK3 (Enzo), cFLIP (Sigma) and β-actin HRP-conjugated (Santa Cruz). Secondary antibodies included goat anti-rabbit IgG conjugated to horseradish peroxidase (HRP, Cell Signaling), goat anti-mouse IgG HRP (Sigma), goat anti-mouse IgG light chain HRP (Jackson Immuno Research) and goat anti-rat IgG (H+L) HRP (ThermoFisher Scientific); blots were then developed using a ChemiDoc MP Imaging System (Bio-Rad) 30 . Statistics Data are mean ± s.e.m. Sample sizes (replicates, animals) are traceable as individual data points in each figure. In vitro experiments were repeated at least twice. Data involving animals depict pooled data of at least two independent experiments. All statistical tests used to examine statistical significance were two-sided. Exact P values and the respective tests or analyses are listed in the figure legends; * P < 0.05, ** P < 0.01, *** P < 0.001; NS, not significant. LUNARIS Analysis Suite 1.3, GraphPad Prism 7.0 and Excel were used to analyse data in this study. Reporting summary Further information on research design is available in the Nature Research Reporting Summary linked to this paper. Data availability The data supporting the findings of this study are available within the paper and its Supplementary Information . Source Data for Figs. 1 – 4 and Extended Data Figs. 3 , 5 – 7 are provided with the paper.
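The group comparisons described in the Statistics section (one-way ANOVA followed by Sidak's or Tukey's post hoc test) can be sketched in pure Python. The F statistic below follows the standard between/within sum-of-squares decomposition; the body-weight values are made-up placeholders, not data from the paper.

```python
# Minimal one-way ANOVA sketch: compare a measurement across genotype groups.
from statistics import mean

def one_way_anova_f(groups):
    """F statistic for a one-way ANOVA over a list of samples (one per group)."""
    all_vals = [x for g in groups for x in g]
    grand = mean(all_vals)
    k, n = len(groups), len(all_vals)
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

body_weight_g = [                # hypothetical body weights, one list per genotype
    [21.0, 22.5, 20.8, 23.1],    # e.g. Casp8 WT/WT
    [18.2, 17.9, 19.0, 18.5],    # e.g. a mutant genotype (placeholder values)
    [21.5, 22.0, 21.1, 22.8],
]
print(f"F = {one_way_anova_f(body_weight_g):.2f}")
# A post hoc step (e.g. Tukey's HSD) would then compare individual genotype pairs.
```

In practice this is what GraphPad Prism computes from the per-mouse data points shown as dots in the figures.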
The enzyme caspase-8 can trigger a molecular cell death programme called pyroptosis without using its enzymatic activity, a new study led by Hamid Kashkar and published in Nature shows. To safeguard healthy, functioning tissues, organisms use different cell death mechanisms to dispose of unwanted cells (e.g. infected or aged cells). Apoptosis is a 'cellular suicide programme' that does not cause tissue injury and is induced by caspase-8. Necroptosis is another mode of regulated cell death, which causes cellular damage and is normally engaged when caspase-8 is inhibited. Pyroptosis is an inflammatory mode of regulated cell death that is normally activated in response to microbial pathogens and is central to mounting anti-microbial immunity. Hamid Kashkar and his team have now shown that caspase-8 controls not only apoptosis and necroptosis but also pyroptosis. The study "Caspase-8 is the molecular switch for apoptosis, necroptosis and pyroptosis" was published in Nature. The research team studied the biological roles of caspase-8 in cell cultures and mice. Kashkar's group showed that the enzymatic activity of caspase-8 is required to inhibit pyroptosis. "We found out that the expression of inactive caspase-8 causes embryonic lethality and inflammatory tissue destruction. This could only be restored when necroptosis and pyroptosis were simultaneously blocked," Hamid Kashkar explains. The lack of caspase-8 enzymatic activity primarily causes necroptotic cell death. Interestingly, when necroptosis is blocked, the inactive caspase-8 serves as a protein scaffold for the formation of a signalling protein complex called the inflammasome, which ultimately induces pyroptosis. "Microbial pathogens are heavily reliant on the fate of infected cells and have evolved a number of strategies to inhibit apoptosis and necroptosis," Hamid Kashkar adds.
The current study hypothesises that these strategies may have driven the counter-evolution of pyroptosis to secure cellular death as a host defence mechanism. The caspase-8-mediated switch between different modes of cell death adds a critical layer to the plasticity of cell death-induced immunity, which is increasingly involved in aging-associated disorders.
DOI: 10.1038/s41586-019-1770-6

Topic: Biology
News_Title: New screening method could lead to microbe-based replacements for chemical pesticides
Citation: Mari Kurokawa et al, An efficient direct screening system for microorganisms that activate plant immune responses based on plant–microbe interactions using cultured plant cells, Scientific Reports (2021). DOI: 10.1038/s41598-021-86560-0
Journal information: Scientific Reports
Paper_URL: http://dx.doi.org/10.1038/s41598-021-86560-0
News_URL: https://phys.org/news/2021-05-screening-method-microbe-based-chemical-pesticides.html
Abstract Microorganisms that activate plant immune responses have attracted considerable attention as potential biocontrol agents in agriculture because they could reduce agrochemical use. However, conventional methods to screen for such microorganisms using whole plants and pathogens are generally laborious and time consuming. Here, we describe a general strategy using cultured plant cells to identify microorganisms that activate plant defense responses based on plant–microbe interactions. Microbial cells were incubated with tobacco BY-2 cells, followed by treatment with cryptogein, a proteinaceous elicitor of tobacco immune responses secreted by an oomycete. Cryptogein-induced production of reactive oxygen species (ROS) in BY-2 cells served as a marker to evaluate the potential of microorganisms to activate plant defense responses. Twenty-nine bacterial strains isolated from the interior of Brassica rapa var. perviridis plants were screened, and 8 strains that enhanced cryptogein-induced ROS production in BY-2 cells were selected. Following application of these strains to the root tip of Arabidopsis seedlings, two strains, Delftia sp. BR1R-2 and Arthrobacter sp. BR2S-6, were found to induce whole-plant resistance to bacterial pathogens ( Pseudomonas syringae pv. tomato DC3000 and Pectobacterium carotovorum subsp. carotovorum NBRC 14082). Pathogen-induced expression of plant defense-related genes ( PR-1 , PR-5 , and PDF1.2 ) was enhanced by pretreatment with strain BR1R-2. This cell–cell interaction-based platform is readily applicable to large-scale screening for microorganisms that enhance plant defense responses under various environmental conditions. Introduction Plants have evolved unique immune responses to protect against a variety of pathogens 1 , 2 .
Plants perceive pathogen invasion via interactions between pattern recognition receptors on the cell surface and conserved molecular signature molecules known as pathogen/microbe-associated molecular patterns (PAMPs/MAMPs). Following pathogen recognition, a series of defense responses is induced, collectively known as PAMP-triggered immunity (PTI). Over time, however, specific pathogens have acquired the ability to suppress PTI in plants. These pathogens secrete various PTI-interfering effectors into the host plants. However, if the host plants acquire the ability to recognize these effectors via R (resistance) proteins, effector-triggered immunity (ETI) is induced, which involves stronger and longer-lasting responses than PTI. Early defense responses common to PTI and ETI include an increase in cytosolic Ca 2+ concentration, production of reactive oxygen species (ROS), activation of the mitogen-activated protein kinases (MAPKs), expression of various defense-related genes, and increased biosynthesis of phytoalexins and defense hormones, such as salicylic acid (SA) and jasmonic acid (JA) 3 , 4 , 5 . In addition to these local defense responses, plants exhibit systemically induced defense responses collectively known as systemic acquired resistance (SAR) 6 . In SAR, when pathogen-stimulated plants are subsequently challenged by pathogens, they exhibit more rapid and/or stronger activation of defense responses that enable them to resist a wide range of pathogens. For example, inoculation of Arabidopsis thaliana with the pathogenic bacterium Pseudomonas syringae pv. tomato triggers SAR, which in turn induces resistance to the pathogenic oomycete Peronospora parasitica 7 . In addition to pathogen-induced SAR, non-pathogenic microorganisms can induce systemic defense responses known as induced systemic resistance (ISR) 8 .
For example, the rhizobacterium Pseudomonas fluorescens WCS417r can trigger ISR in several plant species, including Arabidopsis and carnation 9 , 10 . Pretreatment of Arabidopsis with Streptomyces sp. strain EN27, an endophytic actinobacterium isolated from wheat, enhances resistance to the bacterial pathogen Pectobacterium carotovorum subsp. carotovorum and the fungal pathogen Fusarium oxysporum 11 . The well-known beneficial bacterium Paraburkholderia phytofirmans PsJN can induce Arabidopsis resistance to P. syringae pv. tomato through ISR 12 , 13 . Microorganisms that activate plant immune responses have a high potential for application as biocontrol agents in agriculture because they could reduce the demand for pesticides 14 , 15 . Such microorganisms have attracted considerable attention because they function like vaccines without producing undesirable effects (e.g., growth inhibition) in plants 8 , 15 . Endophytes are particularly well suited for use as biocontrol agents due to their inherent ability to stably colonize the interior of plants. However, conventional methods to screen for such microorganisms use whole plants and pathogens and thus tend to be laborious and time consuming 9 , 10 , 11 , resulting in the identification of few microorganisms that activate plant defense responses. Moreover, microorganisms that produce antimicrobial compounds or exclude pathogens via niche competition can also be selected using conventional methods based on observation of disease symptoms. Therefore, these conventional methods are not suitable for direct screening for microorganisms that activate plant defense responses. Cultured plant cells are useful as a simplified experimental system for studying plant immunity 16 . Tobacco BY-2 cells, which exhibit rapid and stable growth, are typical cultured plant cells 17 .
Cryptogein, a proteinaceous elicitor of plant immune responses produced by the pathogenic oomycete Phytophthora cryptogea , is a well-studied model useful for elucidating the mechanisms of plant defense responses in BY-2 cells 18 , 19 , 20 , 21 , 22 , 23 . Upon recognizing cryptogein, BY-2 cells exhibit plasma membrane ion fluxes, an increase in cytosolic Ca 2+ concentration, and NADPH oxidase–dependent ROS production 19 . These initial responses are accompanied by activation of MAPKs and accumulation of defense-related gene transcripts 21 , 22 . Cryptogein-induced ROS production is closely correlated with the expression of defense-related genes and hypersensitive cell death; thus, it could be a suitable marker for evaluating defense responses in BY-2 cells 18 , 21 , 22 . These findings suggest that monitoring cryptogein-induced ROS production in BY-2 cells is a suitable experimental system for screening for microorganisms or chemicals that activate plant immune responses. In this study, we describe a system based on plant–microbe interactions through physical and chemical signals for exploring the potential of microorganisms to activate plant immune responses. We monitored cryptogein-induced ROS production in BY-2 cells as an efficient marker to identify microorganisms capable of activating plant defense responses. This system involves incubation of a microorganism with tobacco BY-2 cells, followed by treatment with cryptogein and quantitative detection of ROS production via chemiluminescence. Our system streamlines the process of screening for microorganisms that “prime” and potentiate plant immune responses, thus helping plants resist pathogens. We first isolated bacteria from Brassica rapa var. perviridis as model microorganisms. A total of 31 bacterial strains isolated from the plant interior were assayed using the screening system, and strains that enhanced cryptogein-induced ROS production in BY-2 cells were selected. 
We identified two novel endophytes that induce bacterial pathogen resistance in whole Arabidopsis plants. This cell–cell interaction–based platform could facilitate the discovery of plant immunity–activating microorganisms from a variety of sources. Results Isolation of bacteria from the interior of B. rapa var. perviridis We isolated bacteria from the interior of B. rapa var. perviridis grown by organic farming without the use of pesticides. Roots, stems, and leaves were cut into small pieces and surface-sterilized using appropriate concentrations of sodium hypochlorite and ethanol 24 , as described in the Materials and Methods. After surface sterilization, each tissue sample was rinsed with sterile water and placed on NBRC802 and ISP2 agar plates. The water used for rinsing was also spread onto each medium as a control. When no microorganisms appeared on the control medium (that is, when surface sterilization was complete), colonies that formed around the tissues were selected as putative endophytes. Using this isolation procedure, a total of 31 bacterial strains were isolated, of which 10 and 20 strains were derived from roots and stems, respectively, and 1 strain was derived from a leaf (Table 1 ). Taxonomic identification of these strains was performed based on 16S rDNA sequencing, and a phylogenetic tree of the sequences was constructed (Fig. 1 ). We found a variety of cultivable bacteria in the microbiome. These bacteria belonged to 9 different genera: Bacillus , Brevibacterium , Glutamicibacter , Arthrobacter , Paenarthrobacter , Agrobacterium , Delftia , Pseudomonas , and Stenotrophomonas (Table 1 and Fig. 1 ). Four strains (BR2L-1, BR3S-2, BR3S-8, and BR3S-10) exhibited low identity (< 96%) to previously reported sequences of type strains, indicating that these strains might constitute new genera or species. The isolated bacteria were divided into 3 phyla, Firmicutes , Actinobacteria , and Proteobacteria (Fig. 1 ).
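The 16S rDNA identity check used to flag candidate novel taxa reduces to a percent-identity calculation over aligned sequences, with the < 96% threshold mentioned above. A minimal sketch with toy sequences (the functions are illustrative, not the authors' analysis pipeline):

```python
# Percent identity between two aligned 16S rDNA sequences, with the <96%
# novelty threshold from the text. Gap characters ('-') are skipped.
def percent_identity(seq_a: str, seq_b: str) -> float:
    """Percent identity over aligned, equal-length sequences."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to equal length")
    pairs = [(a, b) for a, b in zip(seq_a, seq_b) if a != "-" and b != "-"]
    matches = sum(a == b for a, b in pairs)
    return 100.0 * matches / len(pairs)

def maybe_novel(identity: float, threshold: float = 96.0) -> bool:
    """Flag an isolate below the identity threshold as potentially novel."""
    return identity < threshold

ident = percent_identity("ACGTACGTAC", "ACGTACGTAA")  # 9/10 positions match
print(ident, maybe_novel(ident))
```

Real analyses would compute identity against the 16S sequences of described type strains after a proper pairwise alignment.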
Excluding potential human pathogenic bacteria (two Stenotrophomonas strains), the isolated bacteria were analyzed further. Table 1 Bacterial strains recovered from the interior of B. rapa var. perviridis . Figure 1 Phylogenetic relationships of bacterial strains recovered from the interior of B. rapa var. perviridis based on the 16S rDNA sequence. The bootstrap values from 1000 replications are shown at each of the branch points on the tree. Strain BR2R-1 is not included in the phylogenetic tree, because the 16S rDNA sequence contains an insertion (ca. 300 bp). ROS production induced by interaction between bacteria and cultured plant cells We first examined interactions between the isolated bacteria and cultured plant cells by monitoring ROS production. Although microbial components such as lipopolysaccharides and several metabolites reportedly induce ROS production in cultured plant cells 25 , 26 , few reports have examined ROS production induced by intact microbial cells 27 . Tobacco BY-2 cells were incubated with each strain of isolated bacteria, and ROS production was monitored using a chemiluminescence assay with luminol. Most of the bacteria (19 strains) had no effect on BY-2 cells during co-incubation, based on ROS production (Fig. S1 a). Interestingly, however, 10 strains (BR1R-2, BR1R-5, BR2R-4, BR2S-3, BR2S-6, BR3S-3, BR3S-7, BR3S-8, BR3S-10, and BR3S-11) induced ROS production after approximately 80 min of co-incubation (Fig. S1 b). These results suggest that intact bacteria can induce ROS production by plant cells via interactions. In order to determine whether the ROS was produced by the bacteria or cultured plant cells, the assays were repeated using cells killed by autoclave treatment. Incubation of autoclaved plant cells with intact Delftia sp. BR1R-2 cells resulted in no detectable ROS production. In contrast, incubation of intact plant cells with autoclaved bacteria resulted in a biphasic increase in ROS production.
The first peak in ROS generation occurred after 40 min and was followed by a second peak that reached a maximum at approximately 160 min (Fig. 2 ), resembling the temporal pattern of cryptogein-triggered ROS production in tobacco BY-2 cells 18 , 19 . These results clearly demonstrate that bacteria act on BY-2 cells to induce ROS production. It is interesting to note that co-incubation with intact bacteria resulted in only one peak in ROS production, whereas co-incubation with autoclaved bacteria resulted in a biphasic increase in ROS. During co-incubation with intact strain BR1R-2 cells, some factor(s) derived from the bacteria might have scavenged ROS produced by the BY-2 cells. Considered collectively, these data indicate that this experimental system is useful for evaluating interactions between bacteria and cultured plant cells. Figure 2 Time course of ROS production in BY-2 cells co-incubated with BR1R-2 cells. Intact BY-2 cells were co-incubated with intact BR1R-2 cells (∆) or autoclaved BR1R-2 cells (□). In another experiment, autoclaved BY-2 cells were co-incubated with intact BR1R-2 cells ( ◊ ) or autoclaved BR1R-2 cells ( ○ ). ROS production was monitored by chemiluminescence. The average value of the autoclaved BY-2/autoclaved BR1R-2 ( ○ ) sample was expressed as 1.0. Average values ± SE from three independent experiments are presented. Screening for microorganisms that prime plant immune responses based on plant–microbe interactions using cultured plant cells We established an experimental system using intact bacteria and cultured plant cells to screen for microorganisms that prime plant immune responses. Cryptogein-induced ROS production in tobacco BY-2 cells was employed as a marker for the screening (Fig. S2 ). Buffer containing BY-2 cells was inoculated with culture solution of each isolated bacterial strain and incubated for 4 h.
After the co-incubation, the cells were collected and suspended in fresh buffer to remove ROS scavengers and other bacteria-derived metabolites. Cryptogein, as an elicitor of plant immune responses, was then added to the buffer, and ROS production was monitored by chemiluminescence. We used Delftia sp. BR1R-2 to validate the screening system (Fig. 3 ). Incubation of only plant cells or bacteria resulted in low ROS production after cryptogein addition. In contrast, pre-incubation of BY-2 cells with BR1R-2 cells resulted in greatly enhanced cryptogein-induced ROS production. The amount of ROS produced by BY-2 cells after BR1R-2 treatment was three times that produced by BY-2 cells after mock treatment. These results indicate that strain BR1R-2 is suitable for priming the immune responses of BY-2 cells. Figure 3 Time course of cryptogein-induced ROS production in BY-2 cells co-incubated with BR1R-2 cells. BY-2 cells were co-incubated with BR1R-2 cells (∆) or mock treatment (only a mixture of the medium and the buffer, ○ ), and then cryptogein was added. In another experiment, BY-2 cells were co-incubated with BR1R-2 cells (□) or mock treatment (only the mixture, ◊ ), and then mock elicitor (only the buffer) was added instead of cryptogein. ROS was monitored by chemiluminescence. The maximum value of the mock/cryptogein ( ○ ) sample was expressed as 1.0. Average values ± SE from three independent experiments are presented. This system was then used to screen for the plant immunity–activating potential of the other bacteria. Although most of the bacteria (21 strains) had no effect on BY-2 cells with co-incubation (Fig. S3 a), 7 strains in addition to Delftia sp. BR1R-2 enhanced the cryptogein-induced ROS production (Fig. S3 b): Pseudomonas sp. BR1R-3, Pseudomonas sp. BR1R-5, Bacillus sp. BR2R-4, Bacillus sp. BR2S-4, Arthrobacter sp. BR2S-6, Agrobacterium sp. BR3S-1, and Paenarthrobacter sp. BR3S-9.
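The ROS read-out described for Fig. 3 boils down to normalizing each chemiluminescence time course to the maximum of the mock/cryptogein control (set to 1.0) and scoring priming as the fold increase over that control. A minimal sketch; the trace values are illustrative placeholders, chosen so the fold increase matches the roughly three-fold enhancement reported for strain BR1R-2:

```python
# Normalize luminescence traces to the mock/cryptogein maximum (set to 1.0)
# and score priming as the fold increase over the mock control.
def normalise(trace, reference_max):
    """Express a luminescence trace relative to a reference maximum."""
    return [v / reference_max for v in trace]

mock_cryptogein = [0.1, 0.6, 1.0, 0.8, 0.5]    # arbitrary units (placeholder)
br1r2_cryptogein = [0.2, 1.9, 3.0, 2.6, 1.7]   # placeholder primed trace

ref = max(mock_cryptogein)
fold_increase = max(br1r2_cryptogein) / ref
print(normalise(br1r2_cryptogein, ref))
print(f"fold increase over mock: {fold_increase:.1f}")
```

With this normalization, any strain whose peak substantially exceeds 1.0 relative units is a priming candidate.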
Interestingly, these immunity-inducing bacteria formed distinct phylogenetic clusters (Fig. 1 ). In addition, P. phytofirmans PsJN, a well-known biocontrol bacterium 12 , 13 , enhanced the cryptogein-induced ROS production (Fig. S4 ), although the amount of ROS produced by BY-2 cells after PsJN treatment was lower than that produced by BY-2 cells after BR1R-2 treatment (Fig. 3 ). These results suggest that this system is useful as a general assay for screening bacteria for the plant immunity–activating potential. We also confirmed that these strains (with the exception of Bacillus sp. BR2S-4) enhanced ROS production in Arabidopsis T87 cells triggered by the plant immune response elicitor flg22, a 22–amino acid peptide derived from flagellin that is known to induce ROS production 28 (Figs. S5 and S6 ). These 7 strains were selected as candidate microorganisms for priming plant immune responses and then subjected to the second screening using whole plants. Biocontrol activity of selected microorganisms We examined the ability of the selected bacteria to enhance disease resistance using whole Arabidopsis plants. Plants were inoculated with each strain of selected bacteria by immersing the root tip of 7-day-old seedlings in the bacterial cell culture solution. After cultivation for an additional 7 days, we observed that plants inoculated with each of the bacterial strains were able to grow (Figs. S7 and S8 ). Plating extracts of surface-sterilized bacteria-inoculated plants on NBRC802 or ISP2 agar medium revealed that the bacteria colonized the interior of the Arabidopsis plants (Fig. 4 ). The number of bacteria ranged from 10 5 to 10 9 colony forming units (CFU) per gram of Arabidopsis , depending on the bacterial strain. Inoculation with Delftia sp. BR1R-2 or Arthrobacter sp. BR2S-6 did not affect plant growth, but inoculation with the other 5 strains resulted in a significant reduction in plant growth (Figs. S7 and S8 ). 
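The colonization read-out above reports bacterial load as CFU per gram of plant tissue (10^5 to 10^9 CFU g^-1, depending on strain). This follows the standard plate-count calculation; the dilution scheme and counts below are hypothetical, not values from the paper:

```python
# Standard plate-count arithmetic: scale colony counts by the dilution factor
# and plated volume, then express per gram of starting tissue.
def cfu_per_gram(colonies: int, dilution_factor: float, volume_plated_ml: float,
                 extract_volume_ml: float, tissue_mass_g: float) -> float:
    """CFU/g = colonies * dilution * (extract volume / plated volume) / mass."""
    cfu_total = colonies * dilution_factor * (extract_volume_ml / volume_plated_ml)
    return cfu_total / tissue_mass_g

# Hypothetical example: 150 colonies on a 10^4-fold dilution plate, 0.1 ml
# plated from a 1 ml extract of a 0.05 g seedling.
print(f"{cfu_per_gram(150, 1e4, 0.1, 1.0, 0.05):.1e} CFU/g")  # 3.0e+08 CFU/g
```

The example lands at 3 x 10^8 CFU/g, within the 10^5 to 10^9 range reported for the different strains.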
We also confirmed that strains BR1R-2 and BR2S-6 colonized the stems and leaves (Fig. S9 ), indicating that these strains spread from the roots to the aerial tissues of Arabidopsis as endophytes. Figure 4 Colonization of the selected bacteria in Arabidopsis . Plants were inoculated with each strain of selected bacteria by immersing the root tip of 7-day-old seedlings in the bacterial cell culture solution, followed by cultivation for 7 days. After plating extracts of surface-sterilized plants on medium, colonies formed on the plate were counted. No colonies were formed for plants that received mock treatment (only the medium) instead of the bacterial cell culture solution. Average values ± SE from three independent experiments are presented. Full size image The promising endophytes Delftia sp. BR1R-2 and Arthrobacter sp. BR2S-6 were then tested for their ability to enhance disease resistance. Pseudomonas syringae pv . tomato DC3000 and Pectobacterium carotovorum subsp. carotovorum NBRC 14082 were used as hemibiotrophic and necrotrophic bacterial pathogens, respectively. Arabidopsis seedlings treated with each endophyte were cultivated for 7 days, and the plants were then challenged with P. syringae pv . tomato DC3000. After cultivation for an additional 3 days, we observed that mock-treated plants exhibited severe disease symptoms of chlorosis (Fig. 5 ). In contrast, plants treated with strains BR1R-2 and BR2S-6 exhibited significantly less-severe disease symptoms compared with mock-treated plants (Fig. 5 ). We also found that the density of strain DC3000 in Arabidopsis decreased to 0.9% and 7.4% in plants treated with strains BR1R-2 and BR2S-6, respectively, compared with mock-treated plants (Fig. S10 ). Similarly, although plants challenged with P. carotovorum subsp. carotovorum NBRC 14082 exhibited soft rot, pretreatment with strains BR1R-2 and BR2S-6 enhanced the disease resistance of Arabidopsis plants (Fig. 5 ). 
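The disease-severity score used in Fig. 5 (percentage of damaged leaves per plant) is simple enough to state as code. This is our own sketch with invented leaf counts:

```python
def disease_severity(damaged_leaves, total_leaves):
    """Percentage of damaged leaves per plant (as in the Fig. 5 legend)."""
    if total_leaves <= 0:
        raise ValueError("total_leaves must be positive")
    return 100.0 * damaged_leaves / total_leaves

# Illustrative counts for a single scored plant
print(disease_severity(3, 8))   # 37.5
```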
These results confirm that the microorganisms selected using the present screening system enhance the resistance of Arabidopsis plants to two different pathogens. The biocontrol effects of strain BR1R-2 were more pronounced than those of strain BR2S-6 under the experimental conditions used in this study (Fig. 5 ), and this strain was therefore characterized further. Figure 5 Enhancement of pathogen resistance of Arabidopsis by pretreatment with strains BR1R-2 and BR2S-6. BR1R-2–, BR2S-6–, or mock (only the medium)–treated Arabidopsis seedlings were cultivated for 7 days, and the plants were then challenged with P. syringae pv . tomato DC3000 or P. carotovorum subsp. carotovorum NBRC 14082 and cultivated for 3 days. ( a ), representative photographs; ( b ), disease severity. Disease severity is the percentage of damaged leaves relative to the total number of leaves on each plant. Average values ± SE from three independent experiments are presented. Asterisks indicate a significant difference from the mock control based on Student’s t-test (**, P < 0.01; ***, P < 0.001). Full size image

Effects of colonization by Delftia sp. BR1R-2 on the expression of defense-related genes

To examine the mechanism by which Delftia sp. BR1R-2 enhances disease resistance in Arabidopsis , the expression patterns of various defense-related genes ( PR-1 , PR-5 , and PDF1.2 ) in the aerial tissues of Arabidopsis plants were analyzed using reverse transcription–quantitative polymerase chain reaction (RT-qPCR). Activation of defense responses via the SA signaling pathway is accompanied by expression of PR-1 and PR-5 , whereas PDF1.2 is a marker of the JA/ET signaling pathway 29 , 30 . We first evaluated the effects of BR1R-2 colonization on gene expression (Fig. 6 , gray and yellow bars).
RT-qPCR analysis revealed that colonization by strain BR1R-2 induced the expression of PR-1 , PR-5 , and PDF1.2 , although the expression of PR-5 was induced at a lower level than the other two genes. These results suggest that strain BR1R-2 simultaneously activates both the SA and JA/ET signaling pathways in Arabidopsis . Figure 6 Fold-increase in PR-1 , PR-5 , and PDF1.2 transcripts in Arabidopsis induced by pretreatment with strain BR1R-2 and pathogen challenge. BR1R-2–treated A. thaliana seedlings were cultivated for 7 days, and the plants were then challenged with P. syringae pv . tomato DC3000 or P. carotovorum subsp. carotovorum NBRC 14082 and cultivated for 3 days. Arabidopsis plants were pretreated with strain BR1R-2 (yellow bar) or mock treatment (only the medium, gray bar) and challenged with mock inoculum (only sterile water containing 0.025% Silwet L-77) instead of pathogen. In another experiment, Arabidopsis plants were pretreated with strain BR1R-2 (red bar) or mock treatment (only the medium, blue bar) and challenged with pathogen. ( a ), challenge with strain DC3000; ( b ), challenge with strain NBRC 14082. Average values ± SE from three independent experiments are presented. Asterisks indicate a significant difference from the mock control based on Student’s t -test (*, P < 0.05). Full size image We then evaluated the effects of BR1R-2 colonization on pathogen-induced gene expression in Arabidopsis . Plants grown from BR1R-2– and mock-treated seedlings were challenged with P. syringae pv . tomato DC3000 (Fig. 6 a, blue and red bars). BR1R-2–treated plants exhibited higher expression of PR-1 compared with mock-treated plants at 9 and 24 h after infection. The expression of PR-5 was higher in BR1R-2–treated plants than mock-treated plants at 24 h after infection, whereas PDF1.2 expression in BR1R-2–treated plants was upregulated at 3 h post-infection. Similarly, the expression of PR-1 , PR-5 , and PDF1.2 induced by P. carotovorum subsp. 
carotovorum NBRC 14082 in Arabidopsis was enhanced by pretreatment with strain BR1R-2 (Fig. 6 b, blue and red bars). The expression of PDF1.2 was upregulated at 24 h after infection with strain NBRC 14082, whereas its expression was high at 3 h after infection with strain DC3000. These results indicate that strain BR1R-2 primes and potentiates the expression of plant defense-related genes induced by pathogen infection.

Discussion

Reasoning that pesticide-free vegetable plants may remain viable because they are colonized by beneficial microorganisms, we isolated bacteria from B. rapa var. perviridis grown by organic farming without the use of pesticides. A total of 31 bacterial strains were recovered from the interior of the plant, and these strains belonged to 3 phyla, Firmicutes , Actinobacteria , and Proteobacteria . It has been reported that bacterial endophytes are typically dominated by 4 phyla, Proteobacteria , Actinobacteria , Firmicutes , and Bacteroidetes 31 , 32 . The present results indicate that the bacterial strains isolated from B. rapa var. perviridis belong to the same groups at the phylum level as bacteria isolated from other plant species. Although endophytes that colonize Brassicaceae plants such as A. thaliana , B. napus , and B. campestris have been studied 33 , 34 , to our knowledge, this is the first report describing the diversity of bacteria isolated from the interior parts of B. rapa var. perviridis . We isolated 9 Bacillus , 5 Pseudomonas , and 2 Stenotrophomonas strains. Bacteria of these genera have frequently been recovered from Brassicaceae plants. In addition, we isolated 2 Brevibacterium , 3 Glutamicibacter , 3 Arthrobacter , and 2 Paenarthrobacter strains, as well as 1 Delftia strain. Interestingly, few reports have described bacteria of these genera in the microbiome of Brassicaceae . The number of bacterial strains isolated from roots and leaves was relatively small.
This might be attributed to the harsh conditions used for surface sterilization. In this study, we developed a novel system for screening for microorganisms that activate plant immune responses based on plant–microbe interactions using cultured plant cells. The bacteria isolated from the interior of B. rapa var. perviridis plants were examined using the screening system with cryptogein-induced ROS production in tobacco BY-2 cells as a marker. A total of 8 bacterial strains were selected using this screening system (Figs. 3 and S3b ). Interestingly, although 4 of these 8 strains (BR1R-3, BR2S-4, BR3S-1, and BR3S-9) did not induce BY-2 cells to produce ROS in the absence of cryptogein (Fig. S1 ), the 4 strains did enhance cryptogein-induced ROS production in the cultured plant cells. We also found that 7 of the 8 bacterial strains enhanced flg22-induced ROS production in Arabidopsis T87 cells (Fig. S6 ). Thus, using this screening system, bacteria belonging to a variety of genera within 3 phyla were selected as candidate microorganisms for priming plant immune responses. To screen many bacterial strains rapidly, we subjected bacteria to the assays after 24 h of cultivation. However, because cultivation time affects the growth phase, this parameter would need to be examined in detail to maximize the plant immunity–activating potential of each strain. It should also be noted that the developed method cannot select microorganisms that activate plant immune responses without enhancing elicitor-induced ROS production. Endophytes are generally preferable as biocontrol agents due to their inherent ability to stably colonize the interior of plants. Characteristics such as motility, adhesion, and cell-wall degradation activity are reportedly required for such colonization 35 , 36 . We confirmed that 7 bacterial strains selected using the proposed screening system were capable of colonizing the interior of Arabidopsis plants (Fig.
4 ). The number of bacteria colonizing plants was relatively high (Fig. 4 ). The medium used here contained 10 g/l sucrose as a carbon source for the plant, which is likely a good carbon source for the bacteria as well. However, 5 of these 7 strains caused a significant reduction in plant growth (Figs. S7 and S8 ). This growth inhibition was not correlated with the number of bacteria colonizing the plants (Figs. 4 and S8 ). One possible explanation is that these strains induce defense responses too strongly. Strong induction of defense responses in plants is often accompanied by cell cycle arrest or growth inhibition 21 , 37 , 38 . In contrast, 2 of the 7 bacterial strains, Delftia sp. BR1R-2 and Arthrobacter sp. BR2S-6, colonized the interior of Arabidopsis plants without inhibiting their growth (Figs. S7 and S8 ). These two endophytes endowed Arabidopsis with resistance to both hemibiotrophic and necrotrophic bacterial pathogens (Fig. 5 ). Therefore, strains BR1R-2 and BR2S-6 could be useful biocontrol agents. Strain BR1R-2 is the first bacterium of the genus Delftia shown to function as a biocontrol agent and exhibited more pronounced biocontrol effects on Arabidopsis than strain BR2S-6 (Fig. 5 ). Delftia sp. BR1R-2 was further examined in order to elucidate the mechanism by which it enhances pathogen resistance in Arabidopsis . Nonpathogenic bacteria reportedly enhance disease resistance by stimulating plant defense-related genes, as described above. Here, we investigated the expression of PR-1 and PR-5 , which are generally involved in the SA signaling pathway, and the expression of PDF1.2 , which is involved in the JA/ET signaling pathway. Colonization by strain BR1R-2 induced the expression of all three genes (Fig. 6 ). These results suggest that strain BR1R-2 simultaneously activates the SA and JA/ET signaling pathways in Arabidopsis and that the resulting expression of defense-related genes provides resistance to two different pathogens. 
The biocontrol activity of most nonpathogenic bacteria involves stimulation of either pathway (primarily the JA/ET signaling pathway), whereas the number of bacteria that activate both pathways is limited 39 . For example, defense responses mediated by the rhizobacterium Bacillus cereus AR156 are dependent on both pathways 39 . Furthermore, the expression of PR-1 , PR-5 , and PDF1.2 induced by the pathogens in the present study was enhanced by pretreatment with strain BR1R-2 (Fig. 6 ). These results indicate that strain BR1R-2 enhances the pathogen resistance of Arabidopsis by priming its immune responses. In conclusion, we described a general strategy for exploring the potential of microorganisms to activate plant immune responses based on plant–microbe interactions using cultured plant cells. The value of this strategy was demonstrated by identifying novel plant immunity–activating bacteria, Delftia sp. BR1R-2 and Arthrobacter sp. BR2S-6. The developed method using cultured plant cells enables rapid direct screening of microorganisms for plant immunity–activating potential, thus reducing the number of samples subjected to laborious assays using whole plants (Fig. 7 ). Therefore, this approach should be readily applicable to large-scale screening for plant immunity–activating microorganisms from a variety of environments. Figure 7 Schematic illustration of the developed method compared to the conventional method. Full size image

Materials and methods

Isolation and identification of bacteria from the interior of B. rapa var. perviridis

Brassica rapa var. perviridis plants were grown by organic farming without the use of pesticides at the Suzuki Farm (Tachikawa, Tokyo, Japan) and collected between May and July 2017. The plants were separated into roots, stems, and leaves. The plant tissues were then washed with running tap water and aseptically sectioned into 1-cm fragments.
These fragments were surface-sterilized by dipping in 5% sodium hypochlorite for 3 min, followed by 70% ethanol for 2 min, after which they were rinsed with sterile water for a few minutes, according to a previous report 24 , with some modifications. Each fragment was further cut and placed onto NBRC802 or ISP2 agar medium and incubated at 30 °C for approximately 1 month. The final rinse water was also plated onto each medium to confirm the effectiveness of the surface sterilization. After incubation, single-colony isolation was repeated for colonies formed around the tissues. NBRC802 medium contained (per liter) Hipolypepton (10 g), Bacto yeast extract (2 g), and MgSO 4 ·7H 2 O (1 g) (pH 7.0). ISP2 medium contained (per liter) Bacto yeast extract (4 g), Bacto malt extract (10 g), and glucose (4 g) (pH 7.3). Taxonomic identification of isolated bacteria was performed based on the 16S rDNA sequence. The DNA was amplified from colonies by polymerase chain reaction (PCR) using two oligonucleotide primers, 9F 5′-GAGTTTGATCCTGGCTCAG-3′ and 1541R 5′-AAGGAGGTGATCCAGCC-3′. PCR was performed using KOD FX Neo polymerase (Toyobo, Osaka, Japan) according to the manufacturer’s recommendations under the following conditions: 94 °C for 2 min; 40 cycles of 98 °C for 10 s and 68 °C for 2 min; and a final extension at 72 °C for 10 min. After purification, the amplified DNAs were sequenced by Eurofins (Tokyo, Japan). The sequences of the 5′-terminal region (ca. 500 bp) were determined for all strains except BR2R-1, for which the 3′-terminal region (ca. 500 bp) sequence was determined because the 16S rDNA sequence contains an insertion (ca. 300 bp) in the 5′-terminal region. The sequences were compared to those in the GenBank database using BLASTN. MEGA software was used to align the sequences and construct a neighbor-joining phylogenetic tree. P. phytofirmans PsJN (DSM 17436) was purchased from the German Collection of Microorganisms and Cell Cultures.
Strain PsJN was cultivated in trypticase soy broth (BD, Sparks, MD, USA) (pH 7.3) at 30 °C for 24 h. Plant materials and growth conditions Suspensions of tobacco BY-2 ( Nicotiana tabacum L. cv. Bright Yellow 2) cells were maintained by weekly dilution (1/100) with fresh Linsmaier and Skoog (LS) medium, modified according to previous reports 18 , 19 . The cells were maintained in the dark at 28 °C with aeration (shaking at 120 rpm). Suspensions of A. thaliana T87 cells were maintained by weekly dilution (2/100) with fresh Jouanneau and Péaud-Lenoël (JPL) medium 40 . The cells were maintained at 22 °C with aeration (shaking at 120 rpm) under a light intensity of 60–100 µE m −2 s −1 . Although we cannot provide the plant cell lines used in our laboratory, BY-2 and T87 cells are available from RIKEN BioResource Research Center in Japan. Arabidopsis thaliana Columbia-0 was employed for whole-plant experiments. Seeds were surface-sterilized by dipping in 20% sodium hypochlorite for 10 min and then washed repeatedly with sterile water. After treatment at 4 °C in the dark for 2 days, sterilized seeds were sown in 1/2 Murashige and Skoog (MS) medium (Sigma-Aldrich, St. Louis, MO, USA) supplemented with 10 g/l sucrose and solidified with 3 g/l Phytagel (Sigma-Aldrich) in Petri dishes 41 , 42 . The plates were then transferred to a plant growth chamber with a light intensity of 150–200 µE m −2 s −1 (16 h light/8 h dark) and temperature of 22 °C. Incubation of bacteria with tobacco BY-2 cells and measurement of ROS production After cultivation in modified LS medium for 3 days, tobacco BY-2 cells were collected by centrifugation and suspended in ROS assay buffer (5 mM MES, 175 mM mannitol, 0.5 mM CaCl 2 , and 0.5 mM K 2 SO 4 [pH 5.8]). The plant cell suspension (60 g wet cell weight/l) was incubated at room temperature on a rotary shaker (120 rpm) for 3 h. 
Cells of each isolated bacterial strain were cultivated in liquid NBRC802 or ISP2 medium at 30 °C for 24 h and then added to the plant cell suspension. In this process, the bacterial cell culture solution was adjusted to an optical density at 600 nm (OD 600 ) of 0.8 using NBRC802 or ISP2 medium, and the solution was further diluted by a factor of 2 using ROS assay buffer. Then, 0.1 mL of the diluted solution (cells and extracellular components in a mixture of the medium and the buffer at a ratio of 50:50) was mixed with 1.9 mL of the plant cell suspension (60 g wet cell weight/l) in a well (3 mL) of a 6-well plate. In this experimental system, both cells and extracellular components produced by cells were subjected to the assays to evaluate plant–microbe interactions based on physical and chemical signals. After addition of the diluted solution of bacterial cell culture, the mixture was incubated at room temperature on a rotary shaker (120 rpm), and production of ROS was monitored using a chemiluminescence assay with luminol. The mixture was filtered, and the filtrate (10 μL) was added to Tris–HCl buffer (50 mM [pH 8.0], 150 µL), followed by the addition of luminol (Wako, Osaka, Japan; 1 mM, 25 μl) and potassium ferricyanide (6 mM, 25 µL). ROS-associated chemiluminescence was measured for 15 s using a luminometer (Centro LB 960, Berthold, Germany). Chemiluminescence was integrated and expressed as relative intensity 18 , 19 . Samples that exhibited relative chemiluminescence intensity more than twice as high as mock treatment were selected as positives (Fig. S1 ). Measurement of cryptogein-induced ROS production in BY-2 cells after co-incubation with bacteria After cultivation in modified LS medium for 3 days, tobacco BY-2 cells were collected by centrifugation and suspended in ROS assay buffer. 
The bacterial cell culture solution was adjusted to OD 600 of 0.8 using NBRC802 or ISP2 medium (trypticase soy broth was used for strain PsJN), and the solution was further diluted by a factor of 10 using ROS assay buffer. Then, 0.1 mL of the diluted solution (cells and extracellular components in a mixture of the medium and the buffer at a ratio of 10:90) was added to the plant cell suspension (60 g wet cell weight/l, 1.8 mL) (Fig. S2 ). After co-incubation at room temperature on a rotary shaker (120 rpm) for 4 h, the cells were collected by centrifugation (1000 rpm, 3 min) and suspended in fresh buffer to remove ROS scavengers and other bacteria-derived metabolites. Cryptogein (6 µM, 0.1 mL), as a plant immune response elicitor, was then added to the solution. The mixture was incubated at room temperature on a rotary shaker (120 rpm), and production of ROS was monitored using a chemiluminescence assay with luminol, as described above. Samples that exhibited relative chemiluminescence intensity more than twice as high as mock treatment were selected as positives (Fig. S3 ). Measurement of flg22-induced ROS production in T87 cells after co-incubation Arabidopsis T87 cells were cultivated in JPL medium for 3 days and then collected by centrifugation and suspended in ROS assay buffer. The plant cells were co-incubated with each strain of isolated bacteria as described for tobacco BY-2 cells. The plant immune response elicitor flg22 (final concentration, 1 µM) was then added to the buffer instead of cryptogein (Fig. S5 ). The luminol derivative L-012 (Wako; final concentration, 50 µM) was added to the buffer simultaneously. The mixture was incubated at room temperature on a rotary shaker (120 rpm), and ROS-associated chemiluminescence was measured for 0.5 s using a luminometer. 
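Both chemiluminescence screens call a sample positive when its relative intensity exceeds twice that of the mock treatment. A minimal sketch of that cut-off, with illustrative strain names and intensities (not measured values):

```python
def select_positives(samples, mock_intensity, fold=2.0):
    """Return the strains whose integrated relative chemiluminescence
    exceeds `fold` times the mock treatment (the paper's cut-off is 2x)."""
    return [name for name, intensity in samples.items()
            if intensity > fold * mock_intensity]

# Illustrative relative intensities (mock normalized to 1.0)
strains = {"BR1R-2": 3.1, "BR1R-3": 1.4, "BR2S-6": 2.5, "BR2R-1": 0.9}
print(select_positives(strains, mock_intensity=1.0))  # ['BR1R-2', 'BR2S-6']
```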
Treatment of whole Arabidopsis plants with isolated bacteria Whole Arabidopsis Col-0 plants were inoculated with each strain of isolated bacteria by immersing the root tip of 7-day-old seedlings in the diluted solution of bacterial cell culture (OD 600 , 0.002) for 1 s. This solution was prepared by diluting the bacterial cell culture solution after cultivation for 24 h using NBRC802 or ISP2 medium beforehand. After inoculation, the plants were transferred to fresh 1/2 MS agar medium and further cultivated on the plate at 22 °C with a light intensity of 150–200 µE m −2 s −1 (16 h light/8 h dark) for 7 days. To evaluate internal colonization of the plants by the isolated bacteria, inoculated plants were surface-sterilized by dipping in 5% H 2 O 2 for 2 min 41 , 42 . After washing three times with sterile water, a pooled sample of 6 seedlings was homogenized in 5 mL of sterile water using a mortar and pestle. Subsequently, appropriately diluted samples were plated onto NBRC802 or ISP2 agar medium. After incubation at 30 °C for a few days, colonies formed on the plates were counted, and bacterial density was expressed as CFU per gram of plant fresh weight. Evaluation of resistance of Arabidopsis to hemibiotrophic and necrotrophic bacterial pathogens P. syringae pv . tomato DC3000 and P. carotovorum subsp. carotovorum NBRC 14082 were used as bacterial strains pathogenic to Arabidopsis 42 , 43 . Strain DC3000 was cultivated on mannitol-glutamate (MG) agar medium containing rifampicin (50 µg mL −1 ) at 28 °C for 24 h. Strain NBRC 14082 was cultivated on NBRC802 medium at 30 °C for 24 h. MG medium contained (per liter) mannitol (10 g), L-glutamic acid (2 g), KH 2 PO 4 (0.5 g), NaCl (0.2 g), and MgSO 4 ·7H 2 O (0.2 g) (pH 7.0). 
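The colonization read-out above converts plate counts back to CFU per gram of plant fresh weight. A sketch of that arithmetic, with illustrative numbers chosen to land inside the 10^5–10^9 CFU/g range reported in the Results (the function name and sample values are ours):

```python
def cfu_per_gram(colonies, dilution_factor, plated_volume_ml,
                 homogenate_volume_ml, fresh_weight_g):
    """Convert a plate count to CFU per gram of plant fresh weight.
    `colonies` were counted on a plate spread with `plated_volume_ml`
    of a `dilution_factor`-fold dilution of the homogenate."""
    cfu_per_ml = colonies * dilution_factor / plated_volume_ml
    total_cfu = cfu_per_ml * homogenate_volume_ml
    return total_cfu / fresh_weight_g

# Illustrative: 100 colonies from 0.1 mL of a 10^3 dilution of a 5-mL
# homogenate of pooled seedlings weighing 0.05 g in total
print(f"{cfu_per_gram(100, 1e3, 0.1, 5.0, 0.05):.1e}")  # 1.0e+08
```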
Pathogenic bacterial cell suspension (4 × 10 5 CFU mL −1 ; 40 mL) prepared in sterile water containing 0.025% Silwet L-77 (Biomedical Science, Tokyo, Japan) was dispensed into 1/2 MS agar medium containing 14-day-old Arabidopsis seedlings, and the plates were incubated at room temperature for 2 min 41 , 42 . After the pathogen cell suspension was removed by decantation, the seedlings on the plates were rinsed twice with sterile water. The plates were then sealed with 3 M Micropore 2.5-cm surgical tape (3 M, St. Paul, MN, USA) and incubated at 22 °C with a light intensity of 150–200 µE m −2 s −1 (16 h light/8 h dark). Symptom development was observed at 3 days after infection. To determine the growth of strain DC3000 in Arabidopsis , the aerial tissues of infected plants were sampled. The tissues were surface-sterilized by dipping in 5% H 2 O 2 for 2 min 41 , 42 . After washing twice with sterile water, a pooled sample of 5 seedlings was homogenized in 5 mL of sterile water using a mortar and pestle. Subsequently, appropriately diluted samples were plated onto MG agar medium containing rifampicin. After incubation at 28 °C for 2 days, colonies formed on the plates were counted, and bacterial density was expressed as CFU per gram of plant fresh weight. Gene expression analysis The aerial tissues of Arabidopsis plants were sampled at 3, 9, and 24 h after pathogen infection and ground in liquid nitrogen using a mortar and pestle. Total RNA was isolated using an RNA extraction kit (NucleoSpin RNA Plus, Takara Bio, Shiga, Japan). Reverse transcription was performed using reverse transcriptase (ReverTra Ace qPCR RT Master Mix with gDNA Remover, Toyobo). The expression levels of defense-related genes were determined by quantitative PCR using Thunderbird SYBR qPCR Mix (Toyobo) and specific primer sets. 
The following primers were used: EF-1α , forward 5′-TGAGCACGCTCTTCTTGCTTTCA-3′ and reverse 5′-GGTGGTGGCATCCATCTTGTTA-3′; PR-1 , forward 5′-GTGGGTTAGCGAGAAGGCTA-3′ and reverse 5′-ACTTTGGCACATCCGAGTCT-3′; PR-5 , forward 5′-TCGGCGATGGAGGATTTGAA-3′ and reverse 5′-AGCCAGAGTGACGGGAGGAAC-3′; PDF1.2 , forward 5′-TCATGGCTAAGTTTGCTTCC-3′ and reverse 5′-AATACACACGATTTAGCACC-3′. Quantitative PCR was performed using a CFX Connect real-time system (BIO-RAD, Tokyo, Japan) according to the manufacturer’s recommendations under the following conditions: 95 °C for 1 min, followed by 40 cycles of 95 °C for 15 s, 60 °C for 1 min, and 65 °C for 15 s. The specificity of the amplifications was verified by melting curve analysis of the PCR products at the end of each experiment. The relative expression level of each gene was normalized against the expression level of EF1α , and calculated using the ΔΔCt method 44 .
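Relative expression was normalized to EF-1α and calculated by the ΔΔCt method; the standard 2^(−ΔΔCt) calculation can be sketched as follows. The Ct values below are illustrative, not measured data:

```python
def ddct_fold_change(ct_target_treated, ct_ref_treated,
                     ct_target_control, ct_ref_control):
    """Fold-change by the 2^-ddCt method: target-gene Ct values are
    normalized to a reference gene (EF-1a here), treated vs control."""
    dct_treated = ct_target_treated - ct_ref_treated
    dct_control = ct_target_control - ct_ref_control
    return 2.0 ** -(dct_treated - dct_control)

# Illustrative Ct values: PR-1 vs EF-1a, BR1R-2-treated vs mock-treated
print(ddct_fold_change(24.0, 20.0, 27.0, 20.0))  # 8.0
```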
Plants have evolved unique immunity mechanisms that they can activate upon detecting the presence of a pathogen. Interestingly, the presence of some nonpathogenic microorganisms can also prompt a plant to activate its systemic immunity mechanisms, and some studies have shown that pretreating agricultural crops with such "immunity-activating" nonpathogenic microorganisms can leave the crops better prepared to fight off infections from pathogenic microorganisms. In effect, this means that immunity-activating nonpathogenic microorganisms can function like vaccines for plants, providing a low-risk stimulus for the plant's immune system that prepares it for dealing with genuine threats. These are exciting findings for crop scientists because they suggest the possibility of using such pretreatment as a form of biological pest control that would reduce the need for agricultural pesticides. However, before pretreatment with nonpathogenic microorganisms can become a standard agricultural technology, scientists need a way to screen microorganisms for the ability to stimulate plant immune systems without harming the plants. There is currently no simple method for evaluating the ability of microorganisms to activate plant immune systems. Conventional methods involve the use of whole plants and microorganisms, and this inevitably makes conventional screening a time-consuming and expensive affair. To address this problem, Associate Professor Toshiki Furuya and Professor Kazuyuki Kuchitsu of Tokyo University of Science and their colleagues decided to develop a screening strategy involving cultured plant cells. A description of their method appears in a paper recently published in Scientific Reports. The first step in this screening strategy involves incubating the candidate microorganism together with BY-2 cells, which are tobacco plant cells known for their rapid and stable growth rates. 
The next step is to treat the BY-2 cells with cryptogein, which is a protein secreted by fungus-like pathogenic microorganisms that can elicit immune responses from tobacco plants. A key part of the cryptogein-induced immune responses is the production of a class of chemicals called reactive oxygen species (ROS), and scientists can easily measure cryptogein-induced ROS production and use it as a metric for evaluating the effects of the nonpathogenic microorganisms. To put it simply, an effective pretreatment agent will increase the BY-2 cells' ROS production levels (i.e., cause the cells to exhibit stronger immune system activation) in response to cryptogein exposure.

Microbe-based replacements for chemical pesticides: A team of scientists from Tokyo University of Science has developed a screening method based on cultured plant cells that makes such testing easier. This may lead to microorganism-based crop protection methods that reduce the need for chemical pesticides. Credit: Tokyo University of Science

To test the practicability of their screening strategy, Dr. Furuya and his colleagues used the strategy on 29 bacterial strains isolated from the interior of the Japanese mustard spinach plant (Brassica rapa var. perviridis), and they found that 8 strains boosted cryptogein-induced ROS production. They then further tested those 8 strains by applying them to the root tips of seedlings from the Arabidopsis genus, which contains species commonly used as model organisms for studies of plant biology. Interestingly, 2 of the 8 tested strains induced whole-plant resistance to bacterial pathogens. Based on the proof-of-concept findings concerning those 2 bacterial strains, Dr. Furuya proudly notes that his team's screening method "can streamline the acquisition of microorganisms that activate the immune system of plants."
When asked how he envisions the screening method affecting agricultural practices, he explains that he expects his team's screening system "to be a technology that contributes to the practical application and spread of microbial alternatives to chemical pesticides." In time, the novel screening method developed by Dr. Furuya and his team may make it significantly easier for crop scientists to create greener agricultural methods that rely on the defense mechanisms that plants themselves have evolved over millions of years.
10.1038/s41598-021-86560-0
Medicine
New protein complex structure reveals possible ways to target key cancer pathway
Jason J. Kwon et al, Structure–function analysis of the SHOC2–MRAS–PP1C holophosphatase complex, Nature (2022). DOI: 10.1038/s41586-022-04928-2 Rita Sulahian et al, Synthetic Lethal Interaction of SHOC2 Depletion with MEK Inhibition in RAS-Driven Cancers, Cell Reports (2019). DOI: 10.1016/j.celrep.2019.08.090 Journal information: Cell Reports , Nature
https://dx.doi.org/10.1038/s41586-022-04928-2
https://medicalxpress.com/news/2022-07-protein-complex-reveals-ways-key.html
Abstract

Receptor tyrosine kinase (RTK)–RAS signalling through the downstream mitogen-activated protein kinase (MAPK) cascade regulates cell proliferation and survival. The SHOC2–MRAS–PP1C holophosphatase complex functions as a key regulator of RTK–RAS signalling by removing an inhibitory phosphorylation event on the RAF family of proteins to potentiate MAPK signalling 1 . SHOC2 forms a ternary complex with MRAS and PP1C, and human germline gain-of-function mutations in this complex result in congenital RASopathy syndromes 2 , 3 , 4 , 5 . However, the structure and assembly of this complex are poorly understood. Here we use cryo-electron microscopy to resolve the structure of the SHOC2–MRAS–PP1C complex. We define the biophysical principles of holoenzyme interactions, elucidate the assembly order of the complex, and systematically interrogate the functional consequence of nearly all of the possible missense variants of SHOC2 through deep mutational scanning. We show that SHOC2 binds PP1C and MRAS through the concave surface of the leucine-rich repeat region and further engages PP1C through the N-terminal disordered region that contains a cryptic RVXF motif. Complex formation is initially mediated by interactions between SHOC2 and PP1C and is stabilized by the binding of GTP-loaded MRAS. These observations explain how mutant versions of SHOC2 in RASopathies and cancer stabilize the interactions of complex members to enhance holophosphatase activity. Together, this integrative structure–function model comprehensively defines key binding interactions within the SHOC2–MRAS–PP1C holophosphatase complex and will inform therapeutic development.

Main

SHOC2 is a scaffold protein composed of leucine-rich repeats (LRRs) that bind directly to the catalytic subunit of PP1 (PP1C).
Activation of MRAS leads to membrane localization of the SHOC2–MRAS–PP1C (SMP) complex and to the dephosphorylation of proteins of the RAF family at a key inhibitory phosphorylation site, including S259 on CRAF (RAF1), S365 on BRAF and S214 on ARAF (hereafter collectively referred to as 'S259'); this results in the release of autoinhibition and the potentiation of RAF activation 1 , 6 . The SMP holophosphatase has a critical role in the RAS–MAPK pathway signalling that underlies normal developmental processes as well as oncogenic signalling in cancer. Activating mutations in SHOC2, PP1C, and MRAS are found in Noonan-like syndrome, a 'RASopathy' syndrome that is characterized by congenital cardiac, skeletal, and cognitive deficits 2 , 6 , 7 , 8 . SHOC2 is essential for the proliferation and survival of several cancers, including RAS-driven leukaemia, melanoma and models of non-small cell lung cancer 9 , 10 . Moreover, depletion of SHOC2 sensitizes RAS-driven cancers to MEK inhibition through the disruption of RTK-mediated feedback signalling and through MEK-inhibitor-induced RAF dimerization 11 , 12 . However, gaps in our knowledge of the biophysical basis of complex assembly and function limit our understanding of the mechanisms of disease and opportunities for therapeutic targeting of this ternary complex.

Structure of the SMP complex

To understand the role of SHOC2 in RAS–MAPK pathway signalling, we solved the SHOC2 structure alone and in complex with PP1C and MRAS. Specifically, we developed an optimized expression system to produce wild-type human SHOC2 and PP1C, as well as constitutively active GTP-bound MRAS(Q71L) (Fig. 1a and Methods ). We first used X-ray crystallography to solve the structure of the wild-type SHOC2 apoprotein at 1.8-Å resolution (Fig. 1b and Extended Data Table 1 ). The apo-SHOC2 structure is a canonical LRR protein that consists of 20 tandem LRR domains.
These LRR domains concatenate to form a conventional solenoid structure that has an average LRR corkscrew rotation of ~4° (Fig. 1b ) and is stabilized by an N-terminal flanking α-helix and a C-terminal helix-turn-helix. Each LRR is composed of 22–24 amino acids containing a β-strand followed by a descending loop, an α-helix and an ascending loop. The LRR β-strands assemble in parallel to generate the concave surface of the solenoid, whereas the α-helices form most of the convex surface. Internal to the structure, the conserved leucine residues of the LRR motif, LXXLXLXN(X) 1–2 L, condense to form the hydrophobic core of the protein, whereas the conserved asparagine residue participates in a highly stabilizing 'asparagine ladder' motif that has been shown to be critical to the LRR fold 13 . The SHOC2 structure does not show the presence of the flexible hinge within the medial LRRs that was predicted in previous computational models of SHOC2 (refs. 14 , 15 ). Fig. 1: Structure of apo-SHOC2 and the SMP holophosphatase complex. a , Schematic diagram of members of the SMP complex. Truncation of constructs is indicated with dashed lines. MRAS switch 1 (SI), switch 2 (SII) and the hypervariable region (HVR) are annotated. * indicates the 2–63-deletion construct that was used for cryo-EM; ** indicates the 2–88-deletion SHOC2 construct that was used for X-ray crystallography. b , Overview of the crystal structure of apo-SHOC2 along with a cross-sectional representation of the LRR domain. c , d , Side views of the cryo-EM structure of the SHOC2 complex with SHOC2 in teal, MRAS in maroon, and PP1C in yellow. A ribbon representation and view of MRAS ( c ) and PP1C ( d ) are shown with relevant structural features annotated. d , Manganese ions (red), hydrophobic (H), C-terminal (C) and acidic (A) grooves are shown.
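The LRR consensus described above, LXXLXLXN(X)1–2L, can be scanned for directly with a regular expression. A minimal sketch follows; only the conserved leucine (L) and asparagine (N) positions are matched literally, 'X' is any residue, and the example sequence is an invented fragment, not from SHOC2.

```python
import re

# Consensus LxxLxLxN(x)1-2L: literal L/N at conserved positions, '.' for any residue.
LRR_CONSENSUS = re.compile(r"L..L.L.N.{1,2}L")

def find_lrr_motifs(seq: str):
    """Return (start_index, matched_segment) pairs for each consensus hit."""
    return [(m.start(), m.group()) for m in LRR_CONSENSUS.finditer(seq)]

# Artificial sequence containing one consensus-matching segment (illustrative only).
example = "MKTAYLKELNLSNNKLGDAG"
print(find_lrr_motifs(example))
```

In practice many LRR positions tolerate other hydrophobic residues in place of leucine, so a profile- or HMM-based scan would be more sensitive than this strict pattern.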
As the conformation and binding interfaces of SHOC2 that are involved in complex formation with PP1C and MRAS cannot be inferred from the structure of the apoprotein alone, we used single-particle cryo-electron microscopy (cryo-EM) to resolve the holophosphatase complex of an N-terminally truncated SHOC2, wild-type PP1C and GTP-loaded MRAS(Q71L) (Fig. 1c,d , Extended Data Fig. 1 and Extended Data Table 2 ). The resulting structure at 2.9-Å resolution unambiguously defined a ternary complex in which SHOC2 engages both PP1C and MRAS through its concave surface (Fig. 1d ). The overall structure of SHOC2 observed in the holoenzyme complex is similar to the isolated apoprotein. Through protein–protein interactions, the SHOC2 complex buries a total of 5,934 Å 2 of solvent-accessible surface area, with SHOC2–PP1C, SHOC2–MRAS and MRAS–PP1C covering 3,019 Å 2 , 1,729 Å 2 and 1,186 Å 2 , respectively (Extended Data Fig. 1e ). SHOC2 interacts with PP1C through two broad surfaces within the concave region of the LRR domains as well as a short stretch of its flexible N-terminal arm (residues 65–77). In this structure, the interaction between SHOC2 and PP1C does not lead to an appreciable change in the conformation of PP1C—in contrast to the interactions of other holoenzyme partners with PP1C, such as MYPT1 and SDS22 (refs. 16 , 17 ). The most extensive SHOC2–PP1C interaction surface is found within the ascending loops of LRR2–LRR5 and LRR7–LRR11; this area interfaces with a binding region of PP1C between the αG helix loop and the αA helix that resides proximal to—but does not directly interact with—the SILK-binding region, as previously suggested 18 . An additional contact surface on SHOC2 extends along the ascending loops of LRR13–LRR16 and LRR18 with the positively charged surface of the PP1C αF helix.
These primary binding interactions between PP1C and SHOC2 occur on the face opposite to the PP1C catalytic site, which leaves all three conventional PP1C binding grooves poised for substrate engagement. The SHOC2—PP1C interaction surface consists of 30 SHOC2 and 36 PP1C residues that primarily engage in a mixture of polar and pi-cation interactions with 1–2 residues per SHOC2 LRR (Fig. 2a ). However, interactions through hydrogen and ionic bonds (SHOC2–PP1C: R203–E167; R182–E56; R203–E54; and E155–R188) are predicted to create stabilizing interactions, on the basis of calculated residue interaction energies (Fig. 2a,b , Extended Data Fig. 2 and Supplementary Table 1 ). We observed potential steric hindrance between the non-polar methyl group of SHOC2 T411 and the amine group of PP1C K147, which is anticipated to induce a modest destabilization of the interaction interface (Fig. 2b and Supplementary Table 1 ). The N-terminal region of SHOC2 exhibited low overall complexity and is generally predicted to be unstructured; however, this region has been shown to be necessary for MRAS and PP1C binding 8 . Within the holophosphatase structure, we observed that the N-terminal region of SHOC2 contains a cryptic RVXF motif ( 63 GVAF 66 ) that binds the RVXF interaction site of PP1C in a conventional manner 19 , 20 . In forming this interaction, SHOC2 residues 63–74 form a tight β-hairpin that extends the β-sandwich core of PP1C by engaging the RVXF binding motif composed primarily of β10 (Fig. 2a and Extended Data Fig. 2f ). SHOC2 residues V64 and F66 embed within the hydrophobic RVXF binding pocket to enhance the affinity of SHOC2 for PP1C (Fig. 2a,b and Supplementary Table 1 ). Fig. 2: Detailed contacts between ternary SHOC2 complex members provide insight into the mechanism of assembly. 
a , Enlarged images show surface-contacting residues between SHOC2 LRR domains and PP1C (top left and middle) or MRAS (bottom), and residues of the unstructured N terminus of SHOC2 contacting PP1C (top right). b , Energy contribution of key contact residues between complex members (bars) and cumulative energy of interaction interface by Amber10 force-field-based energy calculation (red line). c , Comparison of the MRAS switch I ‘open’ and ‘closed’ conformations. d , Sedimentation velocity analytical ultracentrifugation (SV-AUC) analysis of SMP holoenzyme formation in the presence of MRAS–GDP (red line) or MRAS–GppCp (blue line). Line trace represents n = 1 technical replicate and is representative of 3 biological replicates. e , f , BLI analysis of the SHOC2 complex order of assembly for apo-SHOC2 with MRAS–GTP and PP1C ( e ) and SHOC2–PP1C-activated engagement of MRAS–GTP binding ( f ). Line trace represents n = 1 technical replicate and is representative of 2 biological replicates. g , Schematic diagram of the proposed model for SMP holophosphatase complex assembly, in which SHOC2 (teal) and PP1C (yellow) first engage in binding followed by MRAS–GTP (maroon) to stabilize and slow down the dissociation of the complex. AUC and BLI experiments were repeated two or more times and a representative example is shown.
In contrast to the multiple interaction surfaces of SHOC2–PP1C, SHOC2 binds MRAS exclusively through the concave surface formed by the descending loop and β-strands of SHOC2 LRR1–LRR11. A total of 29 residues of SHOC2 engage in hydrophobic and polar contacts with 19 residues on MRAS, primarily between the switch I and II regions, which are found in its closed conformation ( Supplementary Table 1 ). An additional minor hydrophobic contact surface occurs between the ascending loop of SHOC2 LRR13–LRR14 and the interstrand region of the MRAS G-domain, between the β5 strand and the α4 helix (Fig. 2a and Extended Data Fig. 1 ).
The N-terminal loop, switch I region and α1 helix of MRAS also interact with PP1C on the acidic face that is proximal to the hydrophobic groove (Extended Data Fig. 1 ). Although the most stabilizing interactions between SHOC2 and MRAS involve selected ionic and hydrogen bonds (SHOC2–MRAS: R223–D43; R177–E47; K109–D64; and R292–D41), most of the surface involves 20 hydrophobic interactions that further stabilize the complex (Fig. 2b and Supplementary Table 1 ). The M173I mutation in SHOC2 has been previously reported to be a gain-of-function (GOF) mutation that is found in Noonan-like syndrome 21 . Consistent with this observation, M173 resides within the MRAS-binding region of SHOC2, and comprehensive in silico structural analysis reveals a marked increase in overall energy stabilization when hydrophobic amino acid residues are substituted at the M173 position (Extended Data Fig. 3 ). MRAS and PP1C interactions are primarily mediated by four hydrogen bonds (MRAS–PP1C: D48–R188; H53–D179; Q35–M190; and K36–Q198) between the MRAS effector domain and a region near the hydrophobic groove of PP1C (residues 178–225) (Fig. 2a,b and Supplementary Table 1 ). Given the polar nature of this interaction and an inferred aggregate binding energy of only around −63 kcal mol −1 , this interaction was not anticipated to be stable in isolation. Using analytical ultracentrifugation (AUC) experiments with isolated components of the complex, we found that MRAS does not bind PP1C in the absence of SHOC2 (Extended Data Fig. 2g ). Previous studies have indicated that the SMP holophosphatase complex requires GTP-bound MRAS for complex formation and activation 1 . 
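The per-residue-pair energies discussed above can be aggregated into an interface total and ranked by contribution. The sketch below mirrors the MRAS–PP1C hydrogen-bond pairs named in the text, but the individual energy values are illustrative placeholders chosen only so that they sum to the quoted aggregate of around −63 kcal mol−1; they are not taken from the paper's Supplementary Table 1.

```python
# Illustrative per-pair interaction energies (kcal/mol) for the MRAS-PP1C interface.
# Pairs follow the text; the numeric split between pairs is invented.
pair_energies_kcal = {
    ("MRAS:D48", "PP1C:R188"): -20.0,
    ("MRAS:H53", "PP1C:D179"): -15.0,
    ("MRAS:Q35", "PP1C:M190"): -14.0,
    ("MRAS:K36", "PP1C:Q198"): -14.0,
}

# Aggregate interface energy and ranking with the most stabilizing contact first.
total = sum(pair_energies_kcal.values())
ranked = sorted(pair_energies_kcal.items(), key=lambda kv: kv[1])

print(f"aggregate interface energy: {total:.1f} kcal/mol")
for (a, b), e in ranked:
    print(f"{a} -- {b}: {e:.1f} kcal/mol")
```

A real calculation would derive each pair's contribution from a force-field decomposition (the paper cites Amber10-based energies) rather than from a hand-written table.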
When complexed with GTP, RAS proteins are known to adopt an activated 'closed' conformation in which residues of their switch I and switch II domains (MRAS residues 42–48 and 70–85, respectively) undergo an essential conformational change to interact with the gamma phosphate of GTP, and in so doing generate a permissive RAS effector-binding region 6 . Within the cryo-EM structure, we observed that the same activating switch mechanism and RAS effector-binding site occur in the assembly of the SMP holoenzyme complex. Indeed, assembly requires MRAS to be in the activated GTP-bound conformation to avoid extensive steric clashes with switch I and PP1C; this conformation allows interactions between the MRAS effector-binding region and PP1C and SHOC2 (Fig. 2c ). We confirmed this requirement through AUC, in which MRAS bound to the non-hydrolysable GTP analogue GppCp drove the assembly of the ternary complex, as seen by a 52.1% shift in distinct sedimentation species to a Svedberg coefficient of 6.3, along with depletion of SHOC2 and PP1C monomers and complex, whereas MRAS–GDP was unable to form a ternary complex (Fig. 2d ). Previous reports have suggested a potential flexible ‘hinge’ within the medial LRRs of SHOC2 14 , 15 , due in part to non-conserved intervening residues at key leucine positions of the medial SHOC2 LRR domains 11 and 12 (Extended Data Fig. 4 ). Comparing our apo-SHOC2 and holoenzyme structures, we observed compression between LRR1 and LRR20 as well as an increased net corkscrew rotation of the LRR region when SHOC2 is bound to MRAS and PP1C compared to the unbound structure (Extended Data Fig. 4 ). Molecular dynamics (MD) simulation studies also revealed a high degree of flexibility and dynamic motion of LRR11 and LRR12 when in complex compared to in the apo form (Extended Data Fig. 4c ).
Finally, the SMP holophosphatase complex dephosphorylates RAF at the 'S259' position, which induces the release of RAF from its autoinhibited state when bound to the 14-3-3 protein 1 . SHOC2-dependent PP1C dephosphorylation of RAF substrate is predicted to occur through proximal interactions at the plasma membrane for RAF-bound RAS proteins 1 , but the mechanism of this interaction has not been described. We systematically modelled the interaction between the SHOC2 ternary complex and RAF by merging the SHOC2 complex cryo-EM structure and a previously reported model of the RAS signalosome 22 ( Methods and Extended Data Fig. 5 ). This structural model of the multimeric complex reveals a spatially feasible and energetically favourable arrangement of the membrane-bound SHOC2 ternary complex with the RAS signalosome, in which S259 of RAF is engaged with the PP1C catalytic site. Furthermore, this model proposes that interactions between the SMP holophosphatase complex and RAF1 are mediated through PP1C, with no additional broad surfaces of SHOC2 in contact with any other member of the RAS–RAF complex.

Mechanism of SHOC2 complex assembly

To define the mechanism of SMP holophosphatase complex assembly, we calculated the interface contacting energy and observed that the SHOC2–PP1C interaction (−238 kcal mol −1 ) shows the greatest stability, primarily owing to a greater number of electrostatic interactions compared to the other protein interfaces within the complex (Fig. 2b ). To examine ternary complex assembly and dissociation, we performed bio-layer interferometry (BLI), in which SHOC2 was immobilized and sequentially exposed to either PP1C or activated MRAS bound to GppCp.
These findings showed the clear engagement of SHOC2–PP1C in a fast on/off binding reaction with weaker affinity (association rate constant ( k a ) = 2.7 × 10 4 M −1 s −1 ; dissociation rate constant ( k d ) = 0.39 s −1 ) followed by kinetic locking through MRAS–GppCp binding (k a = 7.0 × 10 4 M −1 s −1 ; k d = 0.18 s −1 ) (Fig. 2e,f ). By contrast, SHOC2 sequentially exposed to MRAS–GTP followed by PP1C did not form a complex. Consistent with observations from the cryo-EM structure, we confirmed the requirement of MRAS in the GTP-bound state to stably form the complete holoenzyme complex (Fig. 2g ). To further validate the cryo-EM model, we used BLI to assess the binding and kinetics of well-known GOF mutations in each complex member that have been reported in clinical cases of Noonan-like syndrome (PP1C(P50R), MRAS(Q71L) and SHOC2(M173I); Supplementary Table 1 and Extended Data Fig. 6 ). PP1C(P50R) resulted in an additional ionic interaction with E224 and a hydrogen bond with N202 on the convex surface of SHOC2 LRR5, and an increased residue interaction energy of −9.4 kcal mol −1 . BLI studies showed that there was a 2-fold increase in k a and a 14.7-fold decrease in the steady-state dissociation constant ( K D ) compared to wild-type PP1C, with minimal difference in MRAS engagement (Extended Data Table 3 ). By contrast, we predicted that the SHOC2(M173I) GOF mutation would further stabilize the hydrophobic interaction with M77 found within the switch II domain of MRAS (Extended Data Table 3 ). Indeed, SHOC2(M173I) induces relatively nominal changes in PP1C binding kinetics, but notably increases the k a value associated with MRAS (Extended Data Table 3 ). Finally, MRAS(Q71L) is a mutation that leads to reduced intrinsic GTP hydrolysis and a marked decrease in k d with the ternary complex. Combining all three mutant complex members, we observed a decrease of more than 33-fold in K D , which further confirms the proposed model of complex assembly (Extended Data Table 3 ).
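The steady-state affinities implied by these BLI rate constants follow from the standard relation K_D = k_off / k_on (with k_on in M−1 s−1 and k_off in s−1). In the sketch below the rate-constant magnitudes follow those quoted above, but the assignment of each number to the on- versus off-rate is inferred from its units, so the derived K_D values should be treated as illustrative.

```python
def kd_from_rates(k_on: float, k_off: float) -> float:
    """Steady-state dissociation constant K_D in molar, from BLI/SPR rate constants."""
    return k_off / k_on

# SHOC2-PP1C alone: fast on/off, weaker affinity.
kd_shoc2_pp1c = kd_from_rates(k_on=2.7e4, k_off=0.39)
# After kinetic locking by MRAS-GppCp: slower dissociation, tighter complex.
kd_plus_mras = kd_from_rates(k_on=7.0e4, k_off=0.18)

print(f"SHOC2-PP1C alone: K_D ~ {kd_shoc2_pp1c * 1e6:.1f} uM")
print(f"with MRAS-GppCp:  K_D ~ {kd_plus_mras * 1e6:.1f} uM")
print(f"fold tightening:  {kd_shoc2_pp1c / kd_plus_mras:.1f}x")
```

This makes the "kinetic locking" interpretation concrete: MRAS binding lowers k_off more than it changes k_on, so the equilibrium K_D drops severalfold.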
SHOC2 is also known to engage various isoforms of PP1C (PP1Cα, PP1Cβ and PP1Cγ) and MRAS in an MRAS–GTP-dependent manner 18 . We found no substantial difference when we compared the binding kinetics between SHOC2–PP1C isoforms and MRAS–GppCp (Extended Data Fig. 6e and Extended Data Table 3 ). Together, these studies suggest that the GTP-bound, closed state of MRAS is a key requisite for stable complex formation. Although MRAS has been reported to be the sole RAS isoform that binds the SHOC2 ternary complex, some studies 23 , 24 , 25 suggest that additional RAS isoforms also bind SHOC2. Sequence alignment among RAS isoforms reveals conservation between MRAS and KRAS, HRAS or NRAS within several residue positions of switch I and II regions that engage in SHOC2–PP1C binding (Extended Data Fig. 7a ). Modelling additional activated RAS isoforms into the complex, we observed a marked similarity in the overall orientation, and MD simulation revealed broadly stabilizing interactions between SHOC2 or PP1C and the KRAS, NRAS, and HRAS isoforms (Extended Data Fig. 7b ). To experimentally evaluate these in silico findings, we tested the ability of KRAS–GppCp to form a stable complex with the SHOC2 ternary complex. We observed a modestly higher kinetic on rate ( k a ) for active KRAS compared to MRAS binding to SHOC2–PP1C, but the interaction between SHOC2–KRAS–PP1C was transient owing to a high dissociation rate constant, in contrast to the more stable engagement that was observed with MRAS (Extended Data Fig. 7c ). To further complement these findings, we confirmed that SHOC2–PP1C interacts with RAS isoforms that have Q61 mutations (HRAS, KRAS and NRAS), as assessed by immunoprecipitation studies in cells that exogenously overexpress RAS isoforms and SHOC2 (Extended Data Fig. 7d ). These observations suggest that additional RAS isoforms beyond MRAS form SHOC2 ternary complexes. 
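The isoform comparison above rests on per-position conservation across aligned switch-region sequences. A minimal sketch of such a comparison follows; the two aligned fragments are illustrative placeholders with a single engineered mismatch, not the true MRAS and KRAS sequences.

```python
def percent_identity(a: str, b: str) -> float:
    """Percent identity between two pre-aligned sequences ('-' = gap, never a match)."""
    assert len(a) == len(b), "sequences must be pre-aligned"
    matches = sum(x == y and x != "-" for x, y in zip(a, b))
    return 100.0 * matches / len(a)

# Illustrative aligned switch-region fragments (placeholders, one mismatch).
mras_sw1 = "YDPTIEDSY"
kras_sw1 = "YDPTIEDFY"
print(f"{percent_identity(mras_sw1, kras_sw1):.1f}% identity")
```

For the actual analysis one would align full RAS G-domains (for example with a standard multiple-sequence aligner) and then restrict the identity calculation to the SHOC2- and PP1C-contacting positions identified in the structure.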
Systematic structure–function analysis of SHOC2

To complement the cryo-EM data and to define structure–function relationships within the SMP complex, we performed a deep mutational scanning (DMS) screen of the SHOC2 protein. Specifically, we created around 12,000 single-residue variants of SHOC2, assessed their function in KRAS -mutant cancer cells under SHOC2-dependent conditions and determined the scaled log 2 -transformed fold change (LFC) of SHOC2 variants, with the statistical threshold for GOF and loss-of-function (LOF) set at >0.6 and <−0.6, respectively (Fig. 3a , Extended Data Fig. 8 , Supplementary Table 2 , and Methods ). We noted that at several residues within SHOC2, more than half of the missense mutations scored as negatively selected LOF variants in the screen, and we refer to these positions as mutationally intolerant SHOC2 residue positions. These mutationally intolerant positions occurred with a distinct periodicity and resided structurally within interaction surfaces between ternary complex members, with SHOC2 residues that engaged in polar contacts exhibiting the greatest functional effect (Fig. 3b,c and Extended Data Fig. 9a ). Indeed, we found that several SHOC2 residues that are predicted to form stabilizing interactions between SHOC2 and MRAS or PP1C were negatively selected in the screen upon mutation. Residues along the concave surface of SHOC2 that engage MRAS (E127, Y131, R177, R200, R223 and R288) and PP1C (N156, H178, R203, Y293 and N316) were found to be intolerant to mutation within the DMS screen (Fig. 3a,c ). We validated representative LOF variants that scored in the DMS screen with mutations at positions that are involved in interactions with PP1C and MRAS in low-attachment growth conditions (Extended Data Fig. 9b ), an additional SHOC2-dependent functional assay 18 .
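The screen's scoring, as described (alleles centred on silent/wild-type variants and normalized to the mean of nonsense variants, with GOF and LOF thresholds at >0.6 and <−0.6), can be sketched as follows. The normalization formula is one plausible scheme consistent with that description, not necessarily the paper's exact pipeline, and the allele LFC values and screen-wide means are invented for illustration.

```python
def scale_lfc(lfc: float, mean_silent: float, mean_nonsense: float) -> float:
    """Scale a raw log2 fold change so silent alleles score ~0 and nonsense ~-1."""
    return (lfc - mean_silent) / (mean_silent - mean_nonsense)

def classify(scaled: float) -> str:
    """Apply the GOF/LOF thresholds from the text to a scaled LFC."""
    if scaled > 0.6:
        return "GOF"
    if scaled < -0.6:
        return "LOF"
    return "neutral"

# Illustrative screen-wide means for silent and nonsense alleles.
mean_silent, mean_nonsense = 0.1, -2.9
for allele, lfc in {"M173I": 2.2, "Y131E": -2.5, "A55S": 0.0}.items():
    s = scale_lfc(lfc, mean_silent, mean_nonsense)
    print(allele, round(s, 2), classify(s))
```

With this scaling, a nonsense-like allele lands at −1 by construction, which makes the ±0.6 thresholds interpretable as fractions of the full loss-of-function effect.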
Furthermore, we assessed downstream MAPK signalling and performed immunoprecipitation studies with select LOF mutants from the DMS screen (for example, Y131E, R223F and E457K), and found that these variants exhibited increased phosphorylation of the S259 residue in RAF1, diminished MAPK activity and disrupted SMP complex interactions compared to wild-type SHOC2 (Extended Data Fig. 9c–h ). Fig. 3: Systematic DMS reveals structural constraints of the function of the SHOC2 complex. a , Heat map representation of LFC allele enrichment (red) and depletion (blue) between trametinib treatment and vehicle control, centred on wild-type SHOC2 (silent) and normalized to the mean of nonsense mutations (scaled LFC). SHOC2 positional evolutionary sequence variation (Evo Score) and protein–protein interacting residues (PPI) from cryo-EM data are indicated. b , Projections of the observed DMS allele abundance on the N-terminal unstructured region (bottom left), and MRAS (top right) and PP1C interface (bottom right) onto the cryo-EM structure. Colour indicates the mean positional scaled LFC in the DMS fitness screen, and size of residue indicates the number of variants that score as GOF or LOF. c , Scatter plot showing position-level calculated, mean free-energy change upon mutation (intrinsic SHOC2 stability) and corresponding average scaled LFC for fitness in the SHOC2 DMS screen, with higher ΔΔ G values corresponding to greater instability. Positive DMS scaled LFC: positive selection, GOF. Negative DMS z -score: negative selection, LOF.
In agreement with the cryo-EM structure, SHOC2 alleles with mutations in the conserved hydrophobic residues of the cryptic N-terminal RVXF motif (V64 and F66) generally could not functionally rescue viability in the DMS screen. Conversely, G63K, G63R or G63H substitutions scored as GOF in the screen (Extended Data Figs. 8 and 9 ).
These GOF mutations change the wild-type 63 GVAF 66 sequence into a canonical RVXF motif, which is likely to result in an increased stabilizing interaction through the formation of a new hydrogen bond with D242 on PP1C, similar to conventional RVXF motifs 26 . In line with this, the G63R mutation in SHOC2 was found to enhance PP1C binding and to increase MAPK activity, whereas the V64 and F66 mutants exhibited impaired binding and MAPK signalling compared to the wild-type SHOC2 control (Extended Data Fig. 9c–h ). Positive charge substitutions at proximal positions (S57, A59 and P62) also resulted in GOF phenotypes within the DMS screen (Extended Data Fig. 9i and Supplementary Table 2 ). These mutations are anticipated to similarly enhance polar interactions between the N-terminal region of SHOC2 and the acidic surface of PP1C proximal to the hydrophobic tunnel of the RVXF binding pocket. Furthermore, substitutions replacing the hydroxyl side chain of S67—which follows the RVXF ( 63 GVAF 66 ) motif—with bulkier hydrophobic side chains (S67W, S67I, S67V or S67F) resulted in a GOF phenotype (Fig. 3a , Extended Data Fig. 9i and Supplementary Table 2 ), probably by enhancing the hydrophobic interaction with the RVXF binding pocket on PP1C. In addition, SHOC2–PP1C binding is also mediated through electrostatic interactions between PP1C and two key surfaces within the LRR domains of SHOC2 (Extended Data Fig. 10a,b ). The surface along the C-terminal LRRs of SHOC2 that binds PP1C is negatively charged and is functionally intolerant to positively charged amino acid substitutions, as exhibited by negative selection in the DMS screen (Extended Data Fig. 10e ). Conversely, the basic surface within the N-terminal LRRs of SHOC2 that binds PP1C is intolerant to negatively charged substitutions (Extended Data Fig. 10g ).
Furthermore, the substitution of uncharged SHOC2 residue N434 with negatively charged residues that match the overall local surface charge of the PP1C-binding region of SHOC2 C-terminal LRRs leads to a GOF phenotype in the DMS screen. Computational structural analysis predicts that N434D has enhanced ionic interactions, resulting in a calculated interaction with K150 in PP1C and a calculated net stabilization of less than −20 kcal mol −1 (Extended Data Fig. 10m–o ). Indeed, SHOC2 N434D shows enhanced binding to MRAS–PP1C in immunoprecipitation studies (Extended Data Fig. 9g ). SHOC2(M173I) is clinically linked to a RASopathy phenotype in humans 21 . M173I scores as a significant GOF in our screen and similar hydrophobic residue substitutions (M173V or M173L) were also found to increase fitness to a level higher than that of wild-type SHOC2 ( Supplementary Table 2 ). These findings are likely to reflect further stabilization of the hydrophobic interaction between SHOC2 and MRAS surfaces, as previously noted ( Supplementary Table 1 ). We corroborated these observations through detailed computational and structural analysis of the M173 position, which revealed that hydrophobic residue substitutions result in increased interprotein interaction energy as well as relatively lower intrinsic SHOC2 protein instability compared to other variants (Extended Data Fig. 3d–f ). As noted previously, steric hindrance at T411 has a destabilizing effect on the SHOC2–PP1C interaction interface (Fig. 2b and Supplementary Table 1 ). Mutations at T411 show strong positive selection in the DMS screen, which suggests that GOF phenotypes are conferred by these substitutions (Fig. 3a and Supplementary Table 2 ). In the case of T411A, computational modelling suggests that the alanine mutation avoids the steric hindrance between the methyl group of wild-type SHOC2 T411 and the amine on K147 of PP1C (Extended Data Fig.
10p,q and Supplementary Table 1 ), and SHOC2(T411A) was found to have enhanced interactions between complex members and to confer an increase in MAPK activity (Extended Data Fig. 9c–h ). Substitutions at T411 with positively charged residues were not enriched in the screen ( Supplementary Table 2 ), which is probably a result of enhanced electrostatic repulsion with K147 on PP1C. Collectively, integrative mapping of DMS and cryo-EM results suggests a high degree of structure–function concordance and defines variants of SHOC2 that are critical for stabilizing or destabilizing the interactions of SHOC2 with complex members to mediate holophosphatase function (Extended Data Fig. 10t ).

The SMP complex in human disease

Germline SHOC2 GOF mutations that result in increased complex assembly or plasma membrane localization have been associated with Noonan-like syndrome 2 , 8 . However, the functional consequences of only a small number of these mutations have been characterized. To investigate single-residue disease-associated mutations in SHOC2 , we cross-referenced the DMS screen data with germline RASopathy mutations in the ClinVar database and recurrent (more than two patients) mutations in human cancer in the COSMIC database ( Supplementary Table 3 ). We specifically identified those mutations that reside within or proximal to protein–protein complex interfaces that also scored as GOF within our screen (scaled LFC > 0.6) (Fig. 4a ). Previously identified GOF RASopathy mutations include P50R in PP1C and M173I and M173V in SHOC2 (refs. 2 , 3 , 4 , 5 , 21 ), which are located within interaction interfaces found in the cryo-EM structure (Fig. 4a and Extended Data Fig. 8 ). Moreover, we identified previously uncharacterized variants in ClinVar for all three complex members that also reside in the protein interaction interfaces of the cryo-EM structure ( Supplementary Table 3 ).
Several additional SHOC2 RASopathy mutants score as GOF in the DMS screen (scaled LFC > 0.6), including G63R, T411A, Q249K and Q269R (Fig. 4a–d ). In addition to finding that T411A alleviates a solvation penalty and steric hindrance (Fig. 2b ), we found through in silico modelling that this variant also enhances adjacent contact sites between SHOC2 and PP1C (SHOC2–PP1C: D388–K147 and N434–K150) (Fig. 4e ). By contrast, Q249K stabilizes the SHOC2–PP1C interface by creating a salt bridge with E116 of PP1C, further enhancing the binding energy by −22.7 kcal mol −1 (Fig. 4c,e and Extended Data Fig. 10p,r ). We also identified and confirmed an uncharacterized RASopathy-associated gain-of-function SHOC2 allele—G63R—that completes a canonical RVXF motif, enabling tighter binding to the RVXF binding pocket on PP1C (Fig. 4d,e ). Molecular modelling suggested that G63R further enhances this interaction by creating two additional hydrogen bonds involving D242 on PP1C, which releases an additional interaction energy of 18.88 kcal mol −1 . These findings are in line with previous reports that indicate that there is a level of degeneracy in RVXF motifs, in which the conservation of VXF can be sufficient for PP1C binding 20 . In cells, all three SHOC2 pathogenic variants G63R, Q249K and T411A showed enhanced interactions with PP1C and MRAS as well as enhanced MAPK signalling (Extended Data Fig. 9 ). Thus, the integrated SMP holophosphatase structure and DMS screen data provide a useful resource for the future interpretation of pathogenic mutations in SHOC2, PP1C and MRAS that are observed in human RASopathy syndromes and cancer. Fig. 4: Structure–function analysis identifies disease-associated mutations. a , Clinical missense mutations of SHOC2 complex members in Noonan-like syndrome (NL-S) (ClinVar) and cancer (COSMIC), with interface mutant alleles annotated. The lollipop size of interface mutants is proportional to DMS scaled LFC.
b , c , Dynamic change in interaction surface between SHOC2 and PP1C in wild type (WT) and Noonan-like syndrome-associated (SHOC2(T411A)) ( b ) or cancer-associated (SHOC2(Q249K)) ( c ) mutations. d , Modelling of the anticipated GOF SHOC2(G63R) mutant. e , Contact surface energy of the SHOC2 complex for the functional pathogenic variants SHOC2 T411A, Q249K, and G63R, as predicted by Amber10 force-field-based energy calculation.
The SHOC2 ternary complex has a clear functional importance in RASopathies and RAS-driven cancers and remains an appealing therapeutic target. Using engineered degron systems, we have previously demonstrated 11 a proof-of-concept for small-molecule-mediated proteasomal degradation of SHOC2; however, small-molecule ligands that bind to SHOC2 or to the holophosphatase complex have not to our knowledge yet been reported. Enabled by insights from the cryo-EM structure and by evidence of the feasibility of developing selective inhibitors of PP1C holoenzymes 27 , we used the SiteMap algorithm (Methods) and identified three distinct pockets within the complex that have favourable properties for the binding of small-molecule ligands (druggability scores of higher than 0.92; Extended Data Fig. 11a–d ). Notably, each pocket is located proximal to a critical stabilizing interface between complex members. At a combined volume of 3,961 Å 3 , these pockets present an attractive path towards the discovery of small-molecule ligands that disrupt holoenzyme formation or function.

Discussion

We have resolved the cryo-EM structure of the SMP holophosphatase, which shows that SHOC2 forms a ternary complex with PP1C and MRAS. Through integrative analysis of the SHOC2 DMS dataset, we developed a structure–function map of this complex, resolving the key functional contributions of residues within the N-terminal unstructured region and concave LRR surface to mediate complex stability and function.
In addition, we provide a biophysical model that suggests that SHOC2 and PP1C exist in a relatively high-affinity equilibrium between the bound and the unbound states (Extended Data Fig. 11e ). After activation through GTP loading, MRAS adopts an activated conformation and completes assembly of the SMP holophosphatase complex at the plasma membrane. We predict that sustained localization of the SMP complex at the membrane mediates the dephosphorylation of RAF S259 in a RAS–GTP-dependent manner. Moreover, the SHOC2–RAS–PP1C complex is also likely to localize PP1C to lipid domains, where RAS has been known to cluster 28 , increasing the local membrane concentration of PP1C and RAF1 substrate. Our finding that numerous pathogenic GOF variants of SHOC2 exhibit enhanced binding affinities for complex members further supports this model. Further studies will be necessary to test whether the SHOC2- and RAS–GTP-dependent localization of PP1C at discrete lipid microdomains is necessary for its activity to dephosphorylate RAF or whether the holophosphatase itself structurally confers substrate specificity. Furthermore, we provide structural, computational and cell-based evidence that the canonical RAS isoforms (KRAS, HRAS and NRAS) can also mediate complex assembly, although with decreased stability compared with MRAS-mediated complex formation. However, our experimental evidence for the interactions of complex members was obtained using exogenously expressed, tagged forms of SHOC2 and RAS proteins. Demonstration of endogenous interactions among these proteins will require further study. These studies reveal disease mechanisms that are mediated by mutations in the SHOC2 ternary complex in congenital RASopathy syndromes and in cancer. We found that these mutations resulted in complex stabilization and enhanced interaction energy of SHOC2 with PP1C and/or MRAS. 
Collectively, this systematic structure–function map of SHOC2 provides a resource for interpretation of the functional importance of additional germline or tumour-associated mutations in SHOC2, as well as for inference of the functional implications of observed mutations in PP1C or MRAS. Moreover, SHOC2 is an essential mediator of RAS pathway signalling, and RAS-driven cancer cells depend on SHOC2 for proliferation and survival, particularly in combination with MEK inhibition. The structural model of the SMP holophosphatase identifies key interaction interfaces that promote complex formation and function, and disruption of the SHOC2 complex represents an attractive therapeutic strategy to inhibit the activation of RAF kinase. Finally, several studies have shed light on the autoinhibited structural state of RAF kinases when 14-3-3 is bound at two critical phosphorylation sites on RAF (S259 and S621) 29 , 30 ; this research has led to a biophysical model of RAF activation through engagement of the RAS-binding domain or cysteine-rich domain with RAS and phospholipids at the plasma membrane, resulting in the release of RAF from 14-3-3 (ref. 31 ). However, the structural basis by which the SMP holophosphatase functions within this stepwise activation model is yet to be determined. Our structural data do not support a SHOC2 ‘hinge’ within the medial LRRs; however, if such flexibility were to exist, it could serve to modulate the access of RAF substrate to the PP1C hydrophobic groove and active site, which we expect are key to holoenzyme catalysis. In addition, our DMS screen revealed several mutationally sensitive SHOC2 surfaces, all of which are interpreted as important for the binding of complex members; the lack of additional broad surfaces that are mutationally intolerant suggests that SHOC2 may not be involved in direct binding or recruitment of RAF as a substrate.
Further clarification of how the SHOC2 complex engages its sequestered RAF substrate will inform and enable rational drug design. Together, these studies provide a roadmap for characterizing disease-associated mutations in the SMP holophosphatase and yield insights that may provide new avenues to therapeutically target this complex. Methods Production of SHOC2 Full-length SHOC2 was cloned into pFastBac (Thermo Fisher Scientific) along with sequence to produce an N-terminally tagged 6×His-GST-TEV SHOC2 protein. The protein was expressed in SF9 cells (Expression Systems) that had been transfected with baculovirus made from the relevant bacmid and then collected by centrifugation after 72 h. The cell pellet was resuspended in 50 mM HEPES, pH 7.5, 500 mM NaCl, 10% glycerol, 0.5 mM TCEP containing protease inhibitors and lysozyme. The cells were lysed by microfluidizer and cell debris was removed by high-speed centrifugation followed by filtration. The clarified lysate was supplemented with 20 mM imidazole and passed over a HisTrap column (Cytiva), washed with 50 mM HEPES, pH 7.5, 500 mM NaCl, 10% glycerol and 0.5 mM TCEP and eluted over a 20 mM–500 mM imidazole gradient. The appropriate fractions were pooled and the 6×His-GST tag was removed by TEV cleavage. The untagged protein was passed over an S200 size-exclusion column (GE Life Sciences) that had been equilibrated in the wash buffer to further isolate purified protein. The pooled protein was concentrated by exploiting the slight affinity of the untagged SHOC2 for a HisTrap column. The protein was loaded onto the HisTrap column, the column was washed and the protein was eluted in a single-step 50 mM imidazole elution. The final protein was dialysed into a storage buffer of 50 mM HEPES, pH 7.5, 150 mM NaCl, 10% glycerol and 0.5 mM TCEP, and stored at −80 °C.
Production of PP1C Full-length PP1C (alpha isoform) was cloned into pET21b (Thermo Fisher Scientific) along with sequence to generate an N-terminally tagged 6×His-SUMO-TEV PP1C protein. The construct was transformed into BL21DE3 cells (Thermo Fisher Scientific) and cultured in TB medium supplemented with 1 mM MnCl 2 . Protein expression was induced by isopropyl β- d -1-thiogalactopyranoside (IPTG) at an optical density at 600 nm (OD 600 nm ) of 0.8; cells were then incubated overnight at 18 °C and collected by ultracentrifugation. The cell pellet was resuspended for lysis in 50 mM Tris-HCl, pH 8.0, 500 mM NaCl, 0.5 mM MnCl 2 , 10% glycerol and 0.5 mM TCEP supplemented with protease inhibitors and lysozyme. The cells were lysed by microfluidizer and the lysate was clarified by high-speed centrifugation followed by filtration. The clarified lysate was supplemented with 20 mM imidazole and passed over a HisTrap column (Cytiva), washed with 50 mM Tris-HCl, pH 8.0, 500 mM NaCl, 0.5 mM MnCl 2 , 10% glycerol and 0.5 mM TCEP and then eluted over a 20 mM–500 mM imidazole gradient. The appropriate fractions were pooled and the 6×His-SUMO tag was removed by TEV cleavage. The protein was passed over the HisTrap column to separate the tag and collected from the fractionated flow-through. The protein was pooled and concentrated using spin filters with a molecular weight cut-off (MWCO) of 10,000 to decrease the protein volume for the final size-exclusion chromatography step. The protein was passed over an S75 column (GE Life Sciences) that was equilibrated with the final PP1C storage buffer 50 mM Tris-HCl, pH 8.0, 300 mM NaCl, 0.5 mM MnCl 2 , 10% glycerol and 0.5 mM TCEP. The purified protein was stored at −80 °C. Production of MRAS(Q71L) MRAS(Q71L) (1–182) was cloned into pET21b (Thermo Fisher Scientific) along with sequence to generate an N-terminally tagged 6×His-TEV MRAS protein. 
The construct was transformed into BL21DE3 cells (Thermo Fisher Scientific) and cultured in TB medium. Protein expression was IPTG-induced at an OD 600 nm of 0.8, the culture was incubated overnight at 18 °C, and the cells were collected by ultracentrifugation. The cell pellet was resuspended for lysis in 50 mM Tris-HCl, pH 8.0, 500 mM NaCl, 10% glycerol, 0.5 mM TCEP, protease inhibitors and lysozyme and then lysed by microfluidizer. The lysate was clarified by high-speed centrifugation followed by filtration. The clarified lysate was supplemented with 20 mM imidazole and passed over a HisTrap column (Cytiva), washed with 50 mM Tris-HCl, pH 8.0, 500 mM NaCl, 5 mM MgCl 2 , 10% glycerol and 0.5 mM TCEP and then eluted over a 20 mM–500 mM imidazole gradient. The appropriate fractions were pooled, treated with TEV and dialysed at 4 °C overnight into nucleotide-exchange buffer, 50 mM Tris-HCl, pH 8.0, 500 mM NaCl, 10% glycerol and 0.5 mM TCEP. The dialysed protein was treated with 1 mM EDTA, 20 units of alkaline phosphatase (Sigma, P0114) per mg of protein and a 4× molar excess of GppCp (Sigma, M3509), and incubated at room temperature for 2 h. The nucleotide-exchange progress was monitored by ultra-performance liquid chromatography. After exchange, 5 mM MgCl 2 was added back to the protein and additional dialysis was performed to remove the EDTA. To remove the tags, the newly exchanged protein was passed over a HisTrap column that was equilibrated in the wash buffer and collected from the fractionated flow-through. The protein eluted in the flow-through in two separate peaks, the first of which contained more contaminants and more aggregated protein than the second. The fractions for the second peak were pooled and concentrated to a volume of less than 12 ml. The protein was passed over an S75 (GE Life Sciences) sizing column that had been equilibrated with the final storage buffer, 25 mM HEPES, pH 7.4, 150 mM NaCl, 5 mM MgCl 2 , 10% glycerol and 0.5 mM TCEP.
The pooled and purified protein was stored at −80 °C. Crystallization of apo-SHOC2, data collection and structure determination All of the crystallization trials were performed by the hanging drop vapour diffusion method. Pure apo-SHOC2 protein (truncated at position 88) was exchanged into the same final buffer but without glycerol and the concentration was adjusted to 6 mg ml −1 . Drops were set up by mixing the protein solution with crystallization solution at 2 µl:2 µl and 2 µl:3 µl drop ratios and left to equilibrate against 500 µl of well solution at 4 °C. Large three-dimensional (3D) crystals grew within 1–2 weeks from solutions comprising 350–600 mM MgNO 3 , 200 mM Tris pH 8.5 and 23–30% PEG 4000. Crystals were directly flash-frozen in liquid nitrogen. X-ray data were collected at Brookhaven National Laboratory (NSLSII AMX). Data were indexed and scaled using iMOSFLM 32 . The structure was solved by molecular replacement using Phaser 33 and a homology model. The programs Coot 34 and PHENIX 35 were used for structure refinement. Data collection and refinement statistics are reported in Extended Data Table 1 . Complex formation The holoenzyme was formed by mixing individually purified proteins in a 1:2:2 stoichiometry of SHOC2:PP1C:MRAS(Q71L), and incubated overnight at 4 °C. The formed complex was isolated by passing over an S200 (GE Life Sciences) that had been equilibrated with 50 mM HEPES, pH 7.4, 150 mM NaCl, 10% glycerol and 0.5 mM TCEP. Cryo-EM sample preparation and data acquisition The complex was diluted to 2.75 mg ml −1 and supplemented with 0.025% (w/v) fluorinated octyl maltoside immediately before being applied to cryo-EM grids. Quantifoil 1.2/1.3 300 Au mesh grids (Quantifoil Micro Tools GmbH) were glow-discharged for 30 s using a Gatan Solarus plasma cleaner (Gatan) operating at 20 W and using ambient air. Grid freezing was performed using a Vitrobot Mk IV (Thermo Fisher Scientific) with the blotting chamber held at 100% humidity and 18 °C. 
A 3.5-µl droplet of sample was applied to the grid, blotted for 5 s and then plunged into liquid ethane. Data were collected at The University of Chicago Advanced Electron Microscopy Core Facility using a Titan Krios G3i electron microscope (Thermo Fisher Scientific) equipped with a BioQuantum K3 camera and energy filter. The camera was operated in CDS mode, and exposure movie data were recorded in super-resolution mode. A total of 4,721 movies were collected. Data acquisition parameters are provided in Extended Data Table 2. Cryo-EM data processing All data processing steps were performed using RELION 4.0-beta2 (ref. 36 ) unless otherwise noted. Micrograph movies were summed and dose-weighted, and contrast transfer function (CTF) parameters were estimated using CTFFind 4.1.14 (ref. 37 ) on movie-frame-averaged power spectra covering around 4 e − Å −2 dose. Micrographs showing extreme high outliers in corrected motion, poor power spectra Thon rings or CTF estimation fits or large non-vitreous ice regions were removed, resulting in 4,499 micrographs that were used for downstream data processing. A random subset of 1,000 micrographs were processed first. Particles from this subset were picked using the ab initio particle picker from the CisTEM 1.0.0-beta software package 38 and filtered over multiple rounds of two-dimensional (2D) classification. The resulting 85,761 particles were sufficient to generate an ab initio 3D model, and a single round of 3D classification was performed to further improve the particle stack. The resulting 40,069 particles were used to train a Topaz 39 particle picking model. This model was then used to pick particles from the entire micrograph set, using a log-likelihood score of −3 as a cut-off. 
A total of 644,140 particles remained after 2D and 3D classification, which were then used for iterative rounds of 3D refinement and CTF parameter (per-particle defocus, per-micrograph astigmatism and per-image shift position beam-tilt, trefoil and 4th-order aberrations) refinement until no further improvement in resolution was observed. Three-dimensional classification with fixed particle poses was then used to remove remaining outlier particles. The resulting 450,317 particles were then subjected to per-particle motion correction (Bayesian polishing), then further iterative 3D and CTF refinement until resolution improvements ceased. A focused 3D refinement using a mask excluding the distal/C-terminal tail of SHOC2 was used as the basis of a fixed-pose 3D classification to remove any remaining particles that were outliers specifically at the MRAS–PP1C–SHOC2 interface, but this resulted in the removal of only 933 particles. The remaining 449,384 particles were subjected to a second round of Bayesian polishing and a final round of CTF refinement (additionally estimating CTF B -factors) before the final 3D refinement that was used as the basis for model building. Cryo-EM model building The crystal structure of SHOC2 reported here and the publicly available crystal structures of PP1C and MRAS (Protein Data Bank (PDB) IDs 3E7A 40 and 3KKO 41 , respectively) were rigid-body-fitted into the map using Chimera 42 and used as the basis for model building. The model was refined by iterating between automated real-space refinement using PHENIX 35 and manual editing using Coot 34 . The map used for both manual and automated refinement was the result of sharpening by an automatically determined B -factor followed by filtering to local resolution using RELION 4.0-beta2. Model geometry and map-model agreement statistics were calculated using PHENIX and are provided in Supplementary Table 2 . The EMRinger 43 score was calculated using the unsharpened map.
BLI All BLI was performed on a ForteBio Octet Red-384 system using streptavidin sensors. C-terminally Avi-tagged full-length SHOC2 that was biotinylated in vitro was loaded onto sensors for all experiments. All experiments were performed at 30 °C at a sensor height of 4 mm and an acquisition rate of 5 Hz in the following buffer: 10 mM HEPES pH 7.5, 150 mM NaCl, 1 mM MgCl 2 , 0.5 mM TCEP and 0.05% TWEEN-20. All experiments started with 60 s of sensor equilibration, followed by loading test sensors to 3 nm. Loaded sensors were then washed with buffer for 60 s. PP1C binding was evaluated for 20 s at a maximum concentration of 10 μM with twofold serial dilutions (the maximum being limited by PP1C solubility). Formed complexes were then allowed to dissociate in buffer for 100 s. For MRAS binding, 3 nm of loaded SHOC2 was saturated with 10 μM of PP1C until signal equilibrium was reached (20 s). Sensors were then immediately dipped into wells containing up to 10 μM of MRAS and allowed to bind for 200 s, and dissociate for 1,800 s until dissociation was complete or deemed too slow to continue. Data were then fitted to single-exponential models to obtain k a and k d (when applicable), and response versus concentration was fitted to obtain K D . SV-AUC All SV-AUC was run using A280 migration as detection in a two-slit cell. All proteins were sedimented in isolation and in all possible combinations across individual cells at the following concentrations: SHOC2, 7 μM; PP1C, 10 μM; MRAS, 20 μM. All spins were performed using a Beckman Optima XLA Ultracentrifuge 8-hole rotor at 20 °C, 50,000 rpm. Data were then fitted in the program SEDFIT, fitting all scans until sedimentation was complete in c(s) mode from 0–15 Svedbergs, and the identified peaks provided final S and MW values to attribute an identity to each sedimenting species, as well as the relative per cent abundance.
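As an illustration of the single-exponential kinetic fitting described above, the sketch below recovers k a, k d and K D from synthetic sensorgram data. This is a minimal, linearized re-implementation with invented constants, not the Octet analysis software; all function names and the plateau-based linearization are my own choices.

```python
import math

def fit_slope(xs, ys):
    """Ordinary least-squares slope of y on x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return sxy / sxx

def fit_dissociation(times, responses):
    """R(t) = R0 * exp(-kd * t): kd is minus the slope of ln(R) vs t."""
    return -fit_slope(times, [math.log(r) for r in responses])

def fit_association(times, responses, conc, kd):
    """R(t) = Req * (1 - exp(-kobs * t)) with kobs = ka * conc + kd.
    Linearized via ln(Req - R); the plateau response is used as Req."""
    req = responses[-1]
    pts = [(t, r) for t, r in zip(times, responses) if req - r > 1e-6 * req]
    kobs = -fit_slope([t for t, _ in pts],
                      [math.log(req - r) for _, r in pts])
    return (kobs - kd) / conc  # ka

# Synthetic sensorgrams with known constants (invented for illustration)
kd_true, ka_true, conc = 0.05, 1.0e5, 1.0e-6   # s^-1, M^-1 s^-1, M
kobs_true = ka_true * conc + kd_true
t_assoc = [float(i) for i in range(1, 201)]
r_assoc = [2.0 * (1.0 - math.exp(-kobs_true * t)) for t in t_assoc]
t_dissoc = [float(i) for i in range(1, 101)]
r_dissoc = [2.0 * math.exp(-kd_true * t) for t in t_dissoc]

kd_fit = fit_dissociation(t_dissoc, r_dissoc)
ka_fit = fit_association(t_assoc, r_assoc, conc, kd_fit)
KD = kd_fit / ka_fit  # equilibrium dissociation constant, M
```

On noise-free data the fitted constants reproduce the inputs; real sensorgrams would be fitted with nonlinear least squares rather than this log-linearization.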
Modelling and interface energy calculations The cryo-EM structure of the SHOC2 complex was prepared before modelling and simulations. The Protein Preparation module in Schrödinger Maestro 44 , 45 , 46 was applied to cap termini and repair residues. Minimization and optimization of the protein system were performed under the Amber10:EHT force field until the root-mean-square (RMS) gradient of the potential energy fell below 0.1 kcal mol −1 Å −1 , using Molecular Operating Environment (MOE) (2019.01; Chemical Computing Group). Default tether restraints from MOE were applied to the system. Protein models after in silico mutations underwent the same preparation procedure. The interface energy calculation between contacting residue pairs was processed using the Protein Contacts module in MOE. Six types of contacts were identified: hydrogen bonds (H-bond), metal, ionic, arene, covalent and van der Waals distance interactions (distance). The proximity threshold was set to 4.5 Å. Atoms separated by more than this distance were not considered to be interacting. The energy threshold was set to −0.5 kcal mol −1 for H-bond, H-pi and ionic interactions. MD simulations The Schrödinger Desmond MD 47 engine was used for simulations. An orthorhombic water box was applied to bury the prepared protein system, with a minimum distance of 10 Å from the protein to the box edges. Water molecules were described using the SPC model. Na + ions were placed to neutralize the total net charge. The prepared system for simulation contained around 95,000 atoms. All simulations were performed following the OPLS4 force field 48 . The NPT ensemble class was selected, with the simulation temperature set to 300 K (Nosé-Hoover chain) and the pressure set to 1.01325 bar (Martyna-Tobias-Klein). A set of default minimization steps pre-defined in the Desmond protocol was adopted to relax the MD system.
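The residue-contact criterion above (atom pairs closer than the 4.5 Å proximity threshold) can be sketched as follows. This is a toy stand-in for MOE's Protein Contacts module; the chains, residue labels and coordinates are invented for illustration.

```python
import math

PROXIMITY_CUTOFF = 4.5  # Å, matching the threshold used above

# Invented atoms: (residue label, (x, y, z) in Å), grouped by chain
CHAIN_A = [("A:Arg10", (0.0, 0.0, 0.0)),
           ("A:Asp12", (10.0, 0.0, 0.0))]
CHAIN_B = [("B:Glu7", (3.0, 0.0, 0.0)),
           ("B:Lys40", (30.0, 0.0, 0.0))]

def distance(a, b):
    """Euclidean distance between two 3D points."""
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))

def contacting_pairs(chain_a, chain_b, cutoff=PROXIMITY_CUTOFF):
    """Residue pairs with any atom-atom distance at or below the cutoff;
    more distant atoms are not considered to be interacting."""
    pairs = set()
    for res_a, xyz_a in chain_a:
        for res_b, xyz_b in chain_b:
            if distance(xyz_a, xyz_b) <= cutoff:
                pairs.add((res_a, res_b))
    return sorted(pairs)

pairs = contacting_pairs(CHAIN_A, CHAIN_B)  # only A:Arg10-B:Glu7 is within 4.5 Å
```

A real implementation would iterate over all atoms of each residue and then score each geometric contact with the per-interaction energy threshold described above.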
The simulation time was set to 200 ns with 2,000 frames evenly recorded (1 frame per 200 ps) during the sampling phase. Post-simulation analysis of the root-mean-square-deviation (RMSD) on α-carbon atoms was performed using a Schrödinger simulation interaction diagram. A Python-based analysis script analyze_trajectory_ppi.py was used to monitor contacting residue pairs during the MD course. Development of the SHOC2 expression vector and mutagenesis library The lentiviral vector pMT_025 was developed by the Broad Institute Genetic Perturbation Platform (GPP). Open reading frames (ORFs) can be cloned in through restriction and ligation at a multiple cloning site (MCS). The ORF expression is driven by the EF1a promoter. A PAC gene is driven by the SV40 promoter to confer puromycin resistance. The library was designed with the same software tools and principles as previously described 49 . The full-length SHOC2 gene was mutagenized. At each codon position, except for the start and stop codons, 19 missense changes and 1 nonsense change were made, but owing to the constraint of avoiding the development of restriction enzyme sites used for cloning in the body of the gene, some intended codon changes were not possible (10 variant positions missing in the designed SHOC2 library). In addition, we incorporated 342 silent changes scattered along the region of interest. It is important to note that in the SHOC2 library variant designs, efforts were made to minimize codons that differed from the corresponding template codon by 1 nucleotide 49 . In all, the library had 11,952 variants. The cloning protocol was performed as previously described 49 . The mutagenesis library was synthesized by Twist BioScience, and the library was created as a pool of linear fragments representing the full-length SHOC2 ORF with a short flank sequence, around 35 bases, at each end.
The two flank sequences were designed to facilitate restriction and ligation cloning of the linear fragment library into the pMT_025 expression vector. The linear fragment library and the pMT_025 vector were each digested with NheI and BamHI, and the fragments were then ligated into the opened vector. The ligation products were then transformed into Stbl4 competent cells (New England BioLabs). To achieve good library representation, approximately 1,000 colonies per variant were cloned (12 million colonies for the entire SHOC2 variant library). Ultimately, around 20 million colonies were obtained for the SHOC2 library. The colonies were collected and plasmid DNA was extracted using the Maxi preparation kit (Qiagen). The resulting plasmid DNA library was sequenced using the Illumina Nextera XT platform. The distribution of variants was computationally assessed. SHOC2 variant library lentivirus production Lentivirus was created by the Broad Institute Genetic Perturbation Platform (GPP). The detailed protocol is available online. In brief, viral packaging cells (293T) were transfected with the pDNA library, a packaging plasmid containing gag, pol and rev genes (for example, psPAX2; Addgene), and a VSV-G-expressing envelope plasmid (for example, pMD2.G; Addgene), using the TransIT-LT1 transfection reagent (Mirus Bio). The medium was changed 6–8 h after transfection. Virus was collected 30 h after transfection. SHOC2 DMS viability screen Screening-scale infections were performed with virus in the 12-well format and infected wells were pooled 24 h after centrifugation. Infections were performed with 3 replicates per treatment arm, and a representation of at least 1,000 cells per SHOC2 variant was achieved after puromycin selection. Approximately 3 days after infection and selection, all cells within a replicate were pooled and split into Falcon Cell Culture Multi Flasks and treated in medium with 10 nM trametinib or dimethyl sulfoxide (DMSO) control.
Cells were passaged in fresh medium containing drugs or vehicle control (DMSO) every 3–4 days. Cells were collected 16 days after the initiation of treatment and gDNA was extracted (Genomic DNA Extraction Kit, Macherey-Nagel). Twelve PCR reactions were performed for each gDNA sample. The volume of each PCR reaction was 100 µl and contained around 3 µg of gDNA. Herculase II (Agilent) was used as the DNA polymerase. All 12 PCR reactions for each gDNA sample were pooled, concentrated with a PCR clean-up kit (Qiagen), loaded onto a 1% agarose gel and separated by gel electrophoresis. SHOC2 DMS PCR amplification and deconvolution The general screen deconvolution strategy and considerations have been described in detail previously 49 . The integrated ORF in genomic DNA was amplified by PCR. The PCR products were shotgun-sheared with transposon, index-labelled and sequenced with next-generation sequencing technology. The PCR primers were designed in such a way that there is an approximately 100-bp extra sequence at each end leading up to the mutated ORF region. We used two primers (forward: 5′-ATTCTCCTTGGAATTTGCCCTT-3′; reverse: 5′-CATAGCGTAAAAGGAGCAACA-3′). PCR reactions were performed for each gDNA sample with a reaction volume of 50 μl and with 1 μg gDNA. Q5 (New England BioLabs) was used as the DNA polymerase. One-third of the 96 PCR reactions for each gDNA sample were pooled, concentrated with the Qiagen PCR clean-up kit and then purified on a 1% agarose gel. The excised bands were purified first with QIAquick kits (Qiagen), then with the AMPure XP kit (Beckman Coulter). Following the Illumina Nextera XT protocol, for each sample we set up six Nextera reactions, each with 1 ng of purified ORF DNA. Each reaction was indexed with unique i7/i5 index pairs. After the limited-cycle PCR step, the Nextera reactions were purified with the AMPure XP kit. All samples were then pooled and sequenced on the Illumina NovaSeq 6000 S4 platform.
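As a consistency check on the variant-library design described above (19 missense plus 1 nonsense change at every codon except start and stop, 342 silent changes, and 10 designed positions lost to restriction-site constraints), the reported library size can be reproduced with simple arithmetic. The 582-residue length assumed here for the SHOC2 ORF is my addition, not stated in the Methods.

```python
# Variant arithmetic for the SHOC2 DMS library described above.
# The 582-residue ORF length is an assumption (not stated in the Methods).
SENSE_CODONS = 582                    # amino-acid codons in the ORF
positions = (SENSE_CODONS + 1) - 2    # codons including stop, minus start/stop
per_position = 19 + 1                 # 19 missense + 1 nonsense per codon
silent_variants = 342                 # silent changes along the ORF
dropped = 10                          # designs lost to restriction-site constraints

total_variants = positions * per_position + silent_variants - dropped
# 581 * 20 + 342 - 10 == 11,952, matching the reported library size
```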
NovaSeq 6000 S4 data were processed with the software AnalyzeSaturationMutagenesis (ASMv1.0 for short), which was developed by the Broad Institute as previously described 49 . Typically, the paired-end reads were aligned to the ORF reference sequence. Multiple filters were applied and some reads were trimmed. The counts of detected variants were then tallied. The output files from ASMv1.0, one for each screening sample, were then parsed, annotated and merged into a single .csv file ready for hit-calling, using software tools that are freely available 49 . DMS analysis The abundance of each variant was calculated as the fraction of reads relative to the total reads of all variants at each end-point, and the LFC was determined for day 16, 10 nM trametinib treatment compared to day 16, vehicle (DMSO) control. To better relate variant activity to wild-type SHOC2, the DMS LFC was centred to the mean of the distribution of wild-type SHOC2 (silent mutations) and normalized against the mean of SHOC2 nonsense variants. The threshold for GOF or LOF was determined on the basis of two standard deviations above and below the mean of the SHOC2 wild-type (silent mutant) distribution (GOF > 0.6 scaled LFC; LOF < −0.6 scaled LFC). The evolutionary conservation score (Evo Score) was determined by Aminode; evolutionarily constrained regions and protein–protein interacting residues (PPI) from cryo-EM data are indicated. In silico saturation mutagenesis with FoldX The in silico saturation mutagenesis studies on SHOC2 and the SHOC2 complex, which evaluate protein stability from the perspective of free-energy change (ΔΔ G ) upon mutation, were performed using FoldX 50 . MutateX 51 was used for automation. The overall process was to systematically mutate each available residue within a protein or a protein complex to all other possible residue types and to predict ΔΔ G values using the FoldX energy calculation.
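The scaled-LFC normalization used in the DMS analysis above can be sketched as follows. This is a toy re-implementation with invented numbers; the mapping chosen here (silent-variant mean to 0, nonsense-variant mean to −1) is one plausible reading of the normalization described, and the authors' exact computation may differ.

```python
def mean(values):
    return sum(values) / len(values)

def scale_lfc(lfc, silent_lfcs, nonsense_lfcs):
    """Centre an LFC on the silent-variant mean and normalize so the
    nonsense-variant mean maps to -1 (the silent mean maps to 0)."""
    mu_silent = mean(silent_lfcs)
    mu_nonsense = mean(nonsense_lfcs)
    return (lfc - mu_silent) / (mu_silent - mu_nonsense)

def classify(scaled_lfc, gof_cut=0.6, lof_cut=-0.6):
    """GOF/LOF calls at the two-standard-deviation thresholds above."""
    if scaled_lfc > gof_cut:
        return "GOF"
    if scaled_lfc < lof_cut:
        return "LOF"
    return "neutral"

# Invented day-16 trametinib-vs-DMSO log2 fold changes
silent = [0.1, -0.1, 0.0, 0.05, -0.05]   # wild-type-like behaviour
nonsense = [-2.1, -1.9, -2.0]            # depleted under MEK inhibition
```

For example, a variant with a raw LFC of 1.5 scales to 0.75 under these toy distributions and would be called GOF, whereas a variant scoring like the nonsense set scales to about −1 and would be called LOF.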
The RepairPDB function of FoldX was first applied for energy minimization to bring the protein system into reasonable conformations. The BuildModel function was then used for the computational mutagenesis and reporting of ΔΔ G values. For the SHOC2 complex, the AnalyseComplex function was additionally used to evaluate the ΔΔ G of interaction between protein chains upon mutation. Generation of expression constructs A SHOC2 ORF (SHOC2 transcript NM_007373.3) containing wobble mutations was used to allow SHOC2 protein expression in the presence of SHOC2 sgRNAs (both the NGG PAM sequence and the first amino acid in the guide sequence were mutated). SHOC2 variants were created using the Q5 Site-Directed Mutagenesis Kit (NEB E0554S) with pDONR221-SHOC2(WT-wobble mutant) as a template, and mutations were confirmed by Sanger sequencing. Variants were subsequently cloned into pLX307 using the Gateway LR Clonase II Enzyme mix (11791020), and expression was confirmed in mammalian cells. Cell lines, culturing and generation of SHOC2-knockout and stable cell lines Cells (293T, PA-TU-8902, MIA PaCa-2) were grown in DMEM supplemented with 2 mM glutamine, 50 U ml −1 penicillin, 50 U ml −1 streptomycin (Gibco) and 10% fetal bovine serum (Sigma). PA-TU-8902 and MIA PaCa-2 cells were infected with virus expressing SHOC2 sgRNA or a non-cutting control (sgCH2-2) made with the plentiCRISPR v2-Blast vector (Addgene: 83480). After blasticidin selection (2 μg ml −1 ) for three days, cells were serially diluted, single-cell clones were selected and SHOC2 protein levels were determined by western blotting. All cells were confirmed mycoplasma negative by PCR. Growth in low-attachment assays MIA PaCa-2 cells with endogenous SHOC2 knockout and stably expressing restored wild-type SHOC2 or various variants were seeded into 96-well Ultra-Low Attachment plates (Corning; 3904) at 5,000 cells per well.
Seven days after seeding, cell viability was determined by CellTiter-Glo (CTG) (Promega; G7570) using an EnVision Plate Reader (PerkinElmer). Immunoblot analysis Cells were lysed using RIPA buffer (R0278; Sigma-Aldrich), quantified using the BCA Protein Assay Kit (23227; Thermo Fisher Scientific), resolved on 4–12% Bis–Tris gel, and transferred onto nitrocellulose membrane (IB23001; Thermo Fisher Scientific) using the iBlot 2 Dry Blotting System (IB21001; Thermo Fisher Scientific). All immunoblots were incubated with the indicated primary antibodies and imaged using the Odyssey CLx infrared imager (LI-COR). Densitometry analysis was conducted using Fiji image-analysis software 52 . Immunoprecipitation studies A total of 4 × 10 6 –7 × 10 6 cells were seeded in a 10-cm dish, and either co-transfected with a Flag-K/H/N/MRAS-expressing vector (3 μg) and Myc–SHOC2 (0.5 μg) for RAS pull-down studies, or co-transfected with a SHOC2-V5-expressing vector (3 μg) and HA-MRAS (3 μg) for SHOC2 variant pull-down studies. Twenty-four hours after the addition of transfection reagent to cells, the medium was changed, and cells were collected after an additional 24 h. For RAS pull-down studies, cells were lysed in 1 ml TNT-M lysis buffer (with 1 mM DTT and protease/phosphatase inhibitors (Sigma cocktails 2+3)). After collection of the lysate, 30 μl was saved for input and 7 μl packed Flag M2 beads (Sigma) were added to the remaining lysate. Lysates with beads were rotated at 4 °C for 2 h. Beads were washed three times with TNT-M wash buffer (50 mM Tris pH 7.5, 150 mM NaCl, 1% Triton-X-100 and 5 mM MgCl 2 ) and then boiled in 1.5× LDS. Input and eluted immunoprecipitation samples were immunoblotted for Flag (Sigma F7425, 1:4,000), PP1α (Upstate 06-221) and Myc (Abcam ab9106, 1:2,000). For SHOC2 variant pull-down studies, cells were lysed in 1 ml immunoprecipitation lysis buffer (40 mM Tris, pH 7.4, 150 mM NaCl, 1% Triton-X-100, 5 mM MgCl 2 , 10 mM B-PG, 10 mM pyroPP and 40 mM HEPES).
Lysates were quantified using the Pierce BCA Protein Assay Kit, and 3.3 mg of lysate was equilibrated to a 1.1-ml volume. One hundred microlitres was saved as input, 50 μl of Anti-V5-tag mAb-Magnetic Beads (MBL, M215-11) was added to the lysate, and the mixture was incubated overnight at 4 °C with rotation. The next day, beads were washed three times, rotating at 4 °C. Beads were boiled in 2× Laemmli buffer, and input/eluted immunoprecipitation samples were immunoblotted for V5 (CST 13202S) and HA (CST 3724TS), and blotted for PP1CB (Thermo Fisher Scientific PA5-78117). In silico modelling of the interaction of the SHOC2 complex with the RAS–RAF dimeric multimer unit The model incorporated a referenced structural model of a RAS–RAF signalosome 22 . The protein–protein docking module in MOE was used for this modelling. One signalosome unit that includes 2× RAS, 2× RAF, 2× MEK and 2× 14-3-3 was extracted from the above-mentioned RAS–RAF signalosome structural model. The dephosphorylation pocket of PP1C in the holoenzyme cryo-EM structure was defined as a pocket for gridding, and S259 on RAF was pinpointed for protein–protein docking. One hundred possible docking conformations were sampled. Outcomes were ranked on the basis of their energy profiles (van der Waals, electrostatic and solvation energies). Manual structural inspection was performed to prioritize conformations that enable both RAS proteins within the RAS–RAF signalosome and the SHOC2 complex to face the cell membrane. Reporting summary Further information on research design is available in the Nature Research Reporting Summary linked to this paper. Data availability The coordinates and structure factors for the apo-SHOC2 and cryo-EM structures have been deposited in the PDB (apo-SHOC2 X-ray structure, PDB ID 7T7A ; complex cryo-EM structure, PDB ID 7UPI ). The variant information for disease-associated mutations in complex members is publicly available from ClinVar and COSMIC.
Code availability Molecular Operating Environment (MOE) and Schrödinger software are publicly available for commercial and non-commercial use. Custom code for the DMS analyses is available online.
Some of the most infamous drivers of cancer are mutations in RAS genes, which lead to tumor growth in about a quarter of all cancer patients. Scientists at the Broad Institute of MIT and Harvard and the Dana-Farber Cancer Institute have determined the molecular structure of a RAS-pathway protein called SHOC2 and two other proteins it binds to. This three-protein assembly, called the SHOC2-MRAS-PP1C ("SMP") complex, regulates the RAS signaling pathway and helps cancer cells with RAS mutations survive. The high-resolution structure of this complex, revealed through X-ray crystallography and cryogenic electron microscopy, suggests possible ways that drugs can bind to it to inhibit the RAS pathway and block cancer growth. The study, published in Nature, highlights a potential therapeutic strategy for a signaling pathway that has been difficult to target with drugs. The work is the result of a collaboration between researchers at Dana-Farber, Deerfield Discovery and Development, a subsidiary of Deerfield Management Company, and in the Broad's Cancer Program, Genetic Perturbation Platform, and the Center for the Development of Therapeutics (CDoT). "By solving the structure, we've learned a lot about how SHOC2 operates in this circuit, which points out ways in which you can therapeutically intervene," said study co-senior author and Broad Institute member William Hahn of the Cancer Program. Hahn is also the William Rosenberg Professor of Medicine, executive vice president and the chief operating officer at Dana-Farber. SHOC2 protein structure. Credit: Broad Institute and Dana-Farber Cancer Institute SHOC2 story Hahn, co-senior author Andrew Aguirre, an oncologist at Dana-Farber and a Broad associate member, and Jason Kwon, a postdoctoral fellow in Hahn's lab, first identified SHOC2 as a possible drug target in 2019. They and their colleagues with Broad's Cancer Dependency Map project were studying essential genes in cancers with mutations in RAS proteins.
At the time, cancer drugs called MEK inhibitors had been developed to target mutations in RAS proteins, but they were effective in only a fraction of cancer patients. The team wondered whether another protein was enabling mutated RAS proteins to bypass these inhibitors. In their 2019 study, the team used CRISPR-Cas9 to knock out genes one-by-one across the entire genome to see which ones might be causing RAS-mutated cancer cells to resist the effects of MEK inhibitors. The researchers found that when they deleted the SHOC2 gene or decreased its expression, the MEK inhibitors were highly effective at eliminating cancer cells, revealing SHOC2 as a culprit in resistance to MEK inhibitors. Protein puzzle The team decided to further study SHOC2 as a potential target, which included solving its structure. That's when researchers from CDoT and Deerfield Discovery and Development joined the team. "We identified SHOC2 genetically and knew that it was important, but in order to therapeutically exploit it, we have to know the structure of the protein to identify critical, functional regions," said Kwon, project lead and co-first author of the paper. The team worked with CDoT scientists, including co-first authors Behnoush Hajian, a research scientist, Yuemin Bian, a computational scientist, and co-senior author Christopher Lemke, director of protein science and structural biology. They used X-ray crystallography and cryogenic electron microscopy to solve high-resolution structures of SHOC2 and the proteins it interacts with. They found that on its own, SHOC2 is a horseshoe-shaped scaffolding protein with a smooth surface, making it challenging for drugs to bind to it. However, further analyses revealed the structural details of how SHOC2 brings together and binds to two key proteins, PP1C and MRAS, which are also part of the RAS signaling pathway. 
The team solved the structure of this three-protein SMP complex, which revealed new pockets between the component proteins where researchers might be able to fit a drug. "You look at SHOC2 by itself, and you think 'I can't do anything with that,'" said Broad Institute scientist Alex Burgin, CDoT's senior director. "But when I first saw the SMP complex, right away, I could see multiple pockets where you can imagine binding a drug. And it became really exciting to me as soon as I saw the structure because it created a new path forward." From structure to function The scientists, working with colleagues in Broad's Genetic Perturbation Platform, also analyzed the effect of mutations in each amino acid of SHOC2 on the protein's function, using a method called deep mutational scanning. The data they generated could be used to identify new links between SHOC2 mutations and other diseases. For example, previous work has shown that mutations in this protein cause Noonan-like Syndrome, a congenital condition that affects the development of bodily features. "What's really enabling is that by already having tested every possible mutation, you can rapidly learn and rationalize the structure and be confident what the consequences of those mutations are," said Hahn. "I think this paper really brings to light how powerful that information is." The researchers hope that their findings will clarify the clinical relevance of SHOC2 mutations and help drugmakers find compounds that bind to the SMP complex to shut down SHOC2. The aim would be to create new drugs targeting the SMP complex that could work alone or in combination with MEK inhibitors to suppress the effects of RAS mutations and effectively block cancer growth. 
"Through great teamwork leveraging diverse expertise in cancer cell signaling, functional genetic screening and structural biology, we've been able to rapidly go from discovery of SHOC2 as a high-priority target to a comprehensive structure-function roadmap for drug development. We hope that this work opens up promising avenues to create much needed new therapies for RAS-mutant cancers," said Aguirre.
10.1038/s41586-022-04928-2
Earth
Fifth of global food-related emissions due to transport, research finds
Arunima Malik, Global food-miles account for nearly 20% of total food-systems emissions, Nature Food (2022). DOI: 10.1038/s43016-022-00531-w. www.nature.com/articles/s43016-022-00531-w Journal information: Nature Food
https://dx.doi.org/10.1038/s43016-022-00531-w
https://phys.org/news/2022-06-global-food-related-emissions-due.html
Abstract Food trade plays a key role in achieving global food security. With a growing consumer demand for diverse food products, transportation has emerged as a key link in food supply chains. We estimate the carbon footprint of food-miles by using a global multi-region accounting framework. We calculate food-miles based on the countries and sectors of origin and the destination countries, and distinguish the relevant international and domestic transport distances and commodity masses. When the entire upstream food supply chain is considered, global food-miles correspond to about 3.0 GtCO 2 e (3.5–7.5 times higher than previously estimated), indicating that transport accounts for about 19% of total food-system emissions (stemming from transport, production and land-use change). Global freight transport associated with vegetable and fruit consumption contributes 36% of food-miles emissions—almost twice the amount of greenhouse gases released during their production. To mitigate the environmental impact of food, a shift towards plant-based foods must be coupled with more locally produced items, mainly in affluent countries. Main Since 1995, amid rapid globalization, worldwide agricultural and food trade has more than doubled, reaching US$1.5 trillion in 2018 1 . Internationally traded food provides 19% of globally consumed calories 2 and thus plays an important role in achieving global food security 3 , 4 , 5 , 6 . Reductions in transportation cost remain one of the main driving forces leading to the rapid growth of food trade 1 . The concept of ‘food-miles’ takes into account the distance travelled by food products from points of production to consumption, and is aimed at indicating the associated environmental impact (for example, energy use, emissions) 7 . 
‘Food-miles’ are generally measured as tonne-kilometres (tkm, a unit for measuring freight transport, representing the transport of 1 t of goods by a given transport mode over a distance of 1 km), that is, the distance travelled in kilometres multiplied by the mass in tonnes for each transported food item 8 . Carbon footprint assessments of food-miles have usually been limited to selected food commodities (for example, tomato ketchup in Sweden 9 , canned tomato 10 in Italy, beef 11 and wheat 12 in the United States, and finfish in Western Australia 13 ) due to the large amount of data required to model all food categories. In these existing studies, product life-cycle assessment (LCA) is extensively used to track environmental performance throughout the entire life cycle of food commodities within a predefined boundary, resulting in a wide range of environmental impact estimates, with transport representing between 0.04% and 90% of production-related emissions 9 , 10 , 11 , 12 , 13 . Although such bottom-up LCA studies include detailed information on specific food products, they have limitations: (1) LCA requires defining a system boundary, and thus suffers from truncation errors due to the omission of contributions beyond that boundary 14 and, consequently, (2) comparisons between studies are often not possible because of differences in scope, and (3) the focus of these studies on a small portion of the total food supply makes it difficult to derive more general conclusions about the overall food sector 15 . A few analyses have been carried out for food suppliers at the national (for example, the United Kingdom 16 , Canada 17 , Spain 18 and Iran 19 ), city 20 and global 21 , 22 levels. Most of these studies only contain food-miles emissions resulting from direct food suppliers without considering entire upstream supply chains 16 , 17 , 18 , 19 , 20 , 21 , 22 , attributing 0.61–3.4% (refs. 16 , 17 , 18 , 19 ) of national and 0.8–1.6% (refs. 
20 , 21 , 22 ) (0.4–0.86 GtCO2e) of global greenhouse gas (GHG) emissions to food transport (see Supplementary Notes and Supplementary Methods for a detailed literature review). To conclude, although carbon emissions associated with food production are well documented 23 , 24 , the carbon footprint of the global trade of food, accounting for the entire food supply chain, has not been comprehensively quantified. Here we provide a comprehensive global estimate of the carbon footprint of global food-miles. We first compare food-miles emissions with the total food-system emissions (stemming from transport, production and land-use change). Food-miles emissions are derived through a multi-region input–output (MRIO) model that integrates physical transportation distance, mass, modes and emissions coefficients into global supply chains ( Supplementary Notes and Supplementary Methods ). The total food-system emissions integrate food-miles emissions, food production and land-use change emissions, where the production emissions are estimated based on a standard MRIO-based carbon footprint calculation method that is widely used in high-impact footprint studies such as on biodiversity 25 , nitrogen emissions 26 and carbon emissions from global tourism 27 ( Supplementary Notes and Supplementary Methods ). We further present: (1) food-miles flow maps covering the entire global supply-chain network, and (2) domestic and international food-miles and emissions broken down by 74 countries/regions, 37 economic sectors and four transportation modes (4 × 74² × 37² ≈ 30 million direct trade connections). We compare food-miles and emissions from both destination and origin perspectives across all international trade routes to reflect the responsibility borne by importers and exporters. We also demonstrate our methodological advances in a comparison with simplified food-miles approaches.
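The food-miles accounting just described multiplies transported mass by distance for each trade link (tonne-kilometres) and converts the result to emissions with a mode-specific coefficient. A minimal sketch in Python; the trade links and exact coefficients are invented for illustration, with the coefficient magnitudes taken from the road and shipping ranges quoted in the text:

```python
# Minimal food-miles sketch: tonne-kilometres (tkm) per trade link,
# converted to emissions with a mode-specific coefficient.
# All links and numbers below are illustrative, not values from the study.

# (origin, destination, commodity, mass in tonnes, distance in km, mode)
trade_links = [
    ("BRA", "CHN", "soybeans", 5000.0, 19000.0, "ship"),
    ("ESP", "DEU", "fruit", 800.0, 1800.0, "road"),
]

# kgCO2e per tkm, within the ranges quoted in the text
# (road 0.2-0.66, shipping 0.01-0.02)
EMISSION_FACTOR = {"ship": 0.015, "road": 0.4}

def food_miles_emissions(links):
    """Return (total tonne-kilometres, total emissions in kgCO2e)."""
    total_tkm = total_kg = 0.0
    for _origin, _dest, _commodity, mass_t, dist_km, mode in links:
        tkm = mass_t * dist_km  # food-miles for this link
        total_tkm += tkm
        total_kg += tkm * EMISSION_FACTOR[mode]
    return total_tkm, total_kg

tkm, kg = food_miles_emissions(trade_links)
```

The long shipping link dominates the tkm total, while the short road link contributes disproportionately to emissions, which mirrors the modal asymmetry the paper emphasizes.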
Our conclusions for improved food systems management highlight the need for an integrated strategy for food security and environmental protection. Methods are summarized at the end, with further details in Supplementary Notes and Supplementary Methods . Results and discussion On the back of global food expenditure of around US$5 trillion in 2017, we estimate that the global food system was associated with 15.8 GtCO2e (transport of 3.0 Gt, production of 7.1 Gt and land-use change activities of 5.7 Gt) 22 (Fig. 1c ), representing about 30% of the world’s GHG emissions (excluding land use and land-use change activities; 52.5 Gt) 28 ; this is in line with previous estimates of 21–37% (ref. 23 ). Global food trade is becoming increasingly diversified and complex. Our model alone, with 74 countries/regions and 37 sectors (Supplementary Tables 2 and 3 ), involves more than 30 million direct trade connections (Fig. 2 ). Using such detailed interregional and sectoral trade information, we estimate total freight-miles (total freight-miles cover the freight mileage associated with the final demand from all 37 sectors) to be 124.5 trillion tkm (close to a previous assessment of 108 trillion tkm in 2015 (ref. 29 )), out of which 22.2 trillion tkm, or about 18%, is driven by food consumption (food-miles, which represent freight mileage related to the final demand of the 25 food sectors), with international food trade contributing 71% of that (15.7 trillion tkm) (Fig. 1a ; for details see Food-miles and emissions by region and sector). Fig. 1: Overview of domestic, international and global food-miles, food-miles emissions and food-production emissions by sectors. a – c , Food miles ( a ), food-miles emissions ( b ) and food-production emissions ( c ).
a and b are based on our food-miles approach ( Supplementary Notes and Supplementary Methods , equations (4)–(8)); c is based on the environmental-extended MRIO footprints of global food production ( Supplementary Notes and Supplementary Methods , equations (2) and (3)). Sectoral breakdown is represented by horizontal bars. Full size image Fig. 2: Top bilateral flows of international trade flows associated with global food consumption. a , Top 100 bilateral flows of international food-miles emissions. b , Top 100 bilateral flows of international food-miles emissions per capita. The arrows connect the origin and destination of supply chains, and the line thickness represents food-miles emissions (Supplementary Notes and Supplementary Methods, equation (8)) in absolute and per-capita terms. RoE, rest of Europe; RoEE, rest of Eastern Europe; RoFSU, rest of the former Soviet Union; RoEA, rest of East Africa, RoWA, rest of West Africa; RoSA, rest of South America; RoEFTA, rest of the European Free Trade Association; BEL, Belgium; CZE, Czechia; DEU, Germany; DNK, Denmark; ESP, Spain; FIN, Finland; FRA, France; GRC, Greece; IRL, Ireland; KOR, South Korea; NOR, Norway; PRT, Portugal; UKR, Ukraine. Source data Full size image Compared with food-miles, which only contribute 18% to total freight-miles, we find that the food-miles emissions accounted for 27% (or 3.0 GtCO 2 e) of total freight-miles emissions; about 18% of the 27% (= 4.5%) stem from international trade (Fig. 1b , based on Supplementary Methods, equations (4)–(8); see Methods ). Due to the inclusion of all upstream supply chains, our estimate of 3.0 GtCO 2 e is 3.5–7.5 times more than previous estimates (0.4–0.86 Gt (refs. 20 , 21 , 22 )). 
Comparing this with our estimate of 15.8 GtCO 2 e emitted from the total food system demonstrates that food-miles come at a high emission cost: about 3/(3 + 7.1) ≈ 30% of food-system emissions (transport and production) or 3/(3 + 7.1 + 5.7) ≈ 19% of total food-system emissions (transport, production and land-use change) are due to transport. This estimate by far exceeds the transport emissions overhead of other commodities: generally, freight accounts for only 7% of emissions from industry and utilities 30 . These findings reinforce the importance of assessing food-miles emissions, and the need to evaluate entire supply chains, from points of production to consumption, for a complete assessment of food-miles emissions. During food production, high emissions 31 from land-use change, enteric fermentation and manure management for cattle and poultry raised for meat production contribute around 27% of food-system emissions (Fig. 1c , (2 + 0.77)/(3 + 7.1)). In contrast, food products transported at high tonnages (for example, vegetables and fruit, cereal and flour, and sugar beet/cane) and/or products that require temperature-controlled transportation (for example, dairy) contribute the majority of total food-miles emissions (Fig. 1b ). Notably, transport of vegetables and fruit more than doubles their production-related emissions of 0.5 GtCO 2 e (Fig. 1c ) to a total of 1.1 GtCO 2 e (Fig. 1b ), mainly due to their requirement for transportation in temperature-controlled environments and their high masses (3.3 Gt; Supplementary Table 4 ). Given the dominant role of developed and emerging economies in global agricultural markets, we observe large international food-miles and related emissions embodied in supply chains flowing between these economies (Fig. 2 and Supplementary Fig. 6 ). 
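The headline shares quoted above follow directly from the component totals reported in the text; a quick check of the arithmetic:

```python
# Quick check of the headline shares from the component totals (GtCO2e)
# quoted in the text: transport 3.0, production 7.1, land-use change 5.7.
transport, production, land_use_change = 3.0, 7.1, 5.7

# Transport share of transport + production emissions (~30%).
share_excl_luc = transport / (transport + production)
# Transport share of the total food system, including land-use change (~19%).
share_incl_luc = transport / (transport + production + land_use_change)
```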
The United States, China, Japan, Germany, the United Kingdom, India, Russia and Brazil, accounting for 47% of the global population and 62% of global total economic output, collectively contribute 48% and 42% to international food-miles (Supplementary Fig. 6 ) and food-miles emissions (Fig. 2a ), respectively. In comparison, on a per-capita basis, supply chains flowing into high-income countries feature predominantly in the per-capita emissions flow map (Fig. 2b ). High-income countries (per-capita GDP > US$25,000) represent about 12.5% of the world’s population but are associated with 46% of international food-miles and food-miles emissions. Since the meat sector dominates global food-production emissions, and dietary change has led to China emerging as a significant meat importer, it is not surprising that some of the most carbon-intensive supply chains relate to red meat consumption in China (Fig. 3 ). Domestically within China, emissions related to freight transportation of livestock and manufacturing equipment for ultimately satisfying the domestic red meat consumption (Fig. 3 , red arrows: 134 kt from livestock to red meat and 326 kt from manufacturing to red meat) are 5–13% of emissions related to the production of the livestock and manufacturing equipment (1,337 and 1,130 kt, respectively). Due to the long distances involved in some international supply chains supporting China’s meat consumption (for example, soybeans from Brazil and vegetables and fruit from the United States), transport-to-production emission ratios reach 58–95%. Such high ratios, however, do not necessarily apply to all transport modes and origin–destination pairs. For example, transporting agricultural chemicals from Canada to the United States by road and to Brazil by ship, for the ultimate purpose of supplying red meat to Chinese consumers, results in ratios of 17% (0.5 kt/3.0 kt) and 4% (0.02 kt/0.05 kt), respectively (Fig. 3 ). 
This wide range of transport-to-production emission ratios is the result of the interplay of distances, modes and refrigeration needs. For example, road transport features a much higher emission intensity per tkm than shipping (0.2–0.66 kgCO2e tkm⁻¹ versus 0.01–0.02 kgCO2e tkm⁻¹). These findings demonstrate the importance of distinguishing spatial and modal features when quantifying food-miles emissions. Fig. 3: Examples of supply chains terminating in red meat consumption by households in China. Circles present food production emissions, that is, the MRIO-based carbon footprint of production activities assigned to China’s final demand for red meat ( Supplementary Notes and Supplementary Methods , equations (2) and (3)); arrows represent the food-miles emissions ( Supplementary Notes and Supplementary Methods , equations (4)–(8)). Full size image Food-miles and emissions by region and sector We apply the global multi-region accounting framework established in this study to provide new estimates of sector- and region-specific food-miles and their related emissions, resulting from domestic and international trade, from both consumption and production perspectives, respectively (Figs. 4 and 5 and Supplementary Figs. 7 and 8 ). Given that 93% of international food transportation relies on maritime shipping and 94% of domestic transportation on road haulage (Fig. 5c ), and that their modal emission coefficients vary considerably (Supplementary Table 1 ), the spatial distribution of transport tasks means that domestic food-miles emissions exceed international food-miles emissions by a factor of 1.3 (at 1.7 GtCO2e; Fig. 5d ). Fig. 4: Global food-miles emissions broken down by countries/regions. a , Destination-based food-miles emissions. b , Origin-based food-miles emissions. c , Domestic food-miles emissions. d , Food-miles emissions net trade.
Destination-based ( a ) and origin-based ( b ) food-miles emissions ( Supplementary Notes and Supplementary Methods , equations (9)–(12)) are obtained by summing the food-miles emissions of supply chains flowing into and out of a region. Domestic food-miles emissions ( c ) refers to supply chains providing domestic production and consumption. Food-miles emissions net trade ( d ) is given by the difference between destination and origin, which is <0 for net food-miles emissions importers and >0 for net food-miles emissions exporters. Source data Full size image Fig. 5: Sectoral breakdown of food-miles and the related emissions resulting from international and domestic trade. a , Food-miles by region and sector. b , Food-miles per capita by region and sector. c , Food-miles by region, sector and mode. d , Food-miles emissions by region and sector. e , Food-miles emissions per capita by region and sector. f , Food-miles emissions by sector, region and mode. For the sake of brevity we aggregated 74 regions and 37 sectors into 9 broad regions and 11 broad sectors. For c and f , the three modes from top to bottom presented for each broad region are road, water and others (that is, rail and air). Results obtained based on Supplementary Notes and Supplementary Methods , equations (4)–(8). Source data Full size image Domestic food-miles and emissions are positively correlated with countries’ areas and populations: China, India, the United States and Russia are the top four emitters, accounting for 64% of the global domestic food-miles emissions (Fig. 4c ). In contrast, international food-miles often depend on the mass and distance of imports from specific trading partners (Fig. 5b ). As a result, compared with domestic food mileage and emissions per capita (Fig. 5c ), international food-miles and emissions per capita vary markedly by region. 
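The destination- and origin-based accounting described above amounts to column and row sums of a bilateral emissions matrix. A small sketch with an invented three-region matrix (the regions and numbers are illustrative, not the study's data):

```python
import numpy as np

# Destination- vs origin-based accounting as row/column sums of a
# bilateral food-miles emissions matrix (illustrative values, kt CO2e).
# E[i, j] = emissions of supply chains flowing from origin region i
# to destination region j.
regions = ["BRA", "CHN", "USA"]
E = np.array([
    [0.0, 40.0, 10.0],   # from BRA
    [5.0,  0.0,  8.0],   # from CHN
    [12.0, 30.0, 0.0],   # from USA
])

origin_based = E.sum(axis=1)       # allocated to exporters (row sums)
destination_based = E.sum(axis=0)  # allocated to importers (column sums)

# A region whose destination-based total exceeds its origin-based total
# is a net importer of food-miles emissions, and vice versa.
net_importers = [r for r, o, d in zip(regions, origin_based, destination_based) if d > o]
```

In this toy matrix, "BRA" comes out as a net exporter and "CHN" as a net importer, echoing the qualitative pattern the paper reports for Brazil and China.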
High-income regions, including Oceania, Europe and North America, clock up per-capita food-miles and emissions 2.7–2.8 times those of other aggregated broad regions (Fig. 5b ). Counting all international food-miles and emissions by destination and origin yields destination- and origin-based food-miles and emissions, respectively, with the difference representing net imports or net exports of food-miles and emissions (Fig. 4d and Supplementary Fig. 7d ). A number of large and emerging economies play a dominant role in the world food trade: large economies such as China, Japan, the United States and Eastern Europe are large net importers of food-miles and emissions, showing that overall food demand there is noticeably higher than domestic production (Fig. 4d and Supplementary Fig. 7d ). The largest net exporter of food-miles is Brazil, followed by Australia, India and Argentina (Supplementary Fig. 7d ), which have rapidly increased their food production in recent decades due to improved technology and the rapid expansion in food trade 32 . From a sectoral perspective, food sectors such as vegetables, fruit and cereals stand out in the domestic food-miles of all regions. However, direct and indirect suppliers of food producers, such as fossil fuels, mining and manufacturing, are also prominent when accounting for the entire food supply chain (Fig. 5a and Supplementary Fig. 8a ). Since vegetables and fruit require temperature-controlled transportation (0.2–0.66 kgCO2e tkm⁻¹ for ambient and cold transport, respectively), their food-miles emissions are higher than those of commodities that are transported at ambient temperatures (Fig. 5e and Supplementary Fig. 8e ). Finally, breaking down food-miles and emissions into contributions from upstream production layers (Fig. 6 and Supplementary Fig.
9 ), we show that indirect supply-chain emissions are significant: food-miles and related emissions more than double from around 8.4 trillion tkm and 1.4 GtCO2e at the first layer (transport of food) to 22.2 trillion tkm and 3.0 GtCO2e at the tenth layer (transport of fertilizer, agricultural machinery, pesticides, etc.) as a result of capturing an increasing part of the underlying global supply-chain network. Fig. 6: Production layer decomposition (PLD) of food-miles emissions. a , PLD by regions. b , PLD by sectors. The horizontal axis represents production layers: 1, emissions resulting from direct consumption; 2, their immediate trade partners; 3, the partners of trade partners; etc. The total food-miles emissions are more than twice the direct emissions. Results obtained based on Supplementary Notes and Supplementary Methods , equations (6) and (7). Compare b with Fig. 1b . Source data Full size image Benefits of localizing food supply One approach to reducing food-miles emissions is to switch to locally produced food. However, the environmental benefits of localizing food supply could be offset when imported food is produced more sustainably 33 , 34 , 35 , 36 or can be transported without requiring refrigerated storage 37 . We therefore modelled the complete global replacement of food imports—11.6% of total food consumption—assuming self-sufficiency, that is, purely domestic supply ( Supplementary Notes and Supplementary Methods ). Naturally, this scenario is hypothetical and not entirely realistic; for example, because many regions cannot be self-sufficient in food supply, there exist annual variations in regions’ yield and production, or the local food that acts as a replacement is qualitatively different from the imported variety. Nevertheless, such a scenario offers interesting insights into the general emissions trend caused by food localization.
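The production-layer decomposition discussed above corresponds to truncating the Leontief series (I - A)^(-1) = I + A + A^2 + ... layer by layer. A toy two-sector sketch in Python (the coefficient matrix, intensities and demand are invented, not the paper's 74-region, 37-sector MRIO data):

```python
import numpy as np

# Production-layer decomposition sketch: the Leontief inverse
# (I - A)^-1 = I + A + A^2 + ... splits a footprint into supply-chain
# layers (layer 1 = direct consumption, layer 2 = immediate suppliers,
# and so on). A, q and y are illustrative toy values.
A = np.array([[0.1, 0.2],   # interindustry input coefficients
              [0.3, 0.1]])
q = np.array([0.5, 1.2])    # emissions per unit of gross output
y = np.array([10.0, 4.0])   # final demand

layers = []
x = y.copy()
for _ in range(10):          # first ten production layers
    layers.append(q @ x)     # emissions attributable to this layer
    x = A @ x                # output required one layer further upstream

total_layers = sum(layers)
# Exact footprint from the full Leontief inverse, for comparison:
exact = q @ np.linalg.solve(np.eye(2) - A, y)
```

The ten-layer sum approaches the exact footprint from below, which is why truncating the decomposition at early layers understates total food-miles emissions, as the paper's Fig. 6 illustrates.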
An entirely domestic food consumption scenario can reduce food-miles emissions by 0.27 GtCO2e and food production emissions by 0.11 GtCO2e. Localizing food in high-income countries alone would reduce emissions by 0.24 GtCO2e (transport) and 0.39 GtCO2e (production) (Supplementary Fig. 11 ). This reduction potential appears relatively limited but can be understood by considering that emissions are the result of the interplay of distances, modes and refrigeration needs. Switching to local supply reduces long-distance food transportation, which relies heavily on maritime shipping (0.01–0.02 kgCO2e tkm⁻¹), but increases domestic food supply dominated by road transport, which in turn features a much higher emission intensity per tkm than shipping (0.2–0.66 kgCO2e tkm⁻¹). Method comparison We compared our results with simplified food-miles approaches that focus only on first-order supply chain connections 16 , 17 , 18 , 19 , 38 (‘direct-only’ approach) or on international trade to one destination 39 (‘international only’ approach), or that attribute the same amount of tkm per US dollar to commodities, irrespective of their origin/destination and mass 40 (‘distance-mass-ignorant’ approach). This comparison demonstrates that our introduction of interregional and interindustry travel mass, distances and transportation modes leads to substantial differences, and thus considerably improves the accuracy of food-miles calculations. In particular, the differences between methods range between −50% and −18% for global food-miles and between −56% and 23% for global food-miles emissions (Supplementary Figs. 4 and 5 ). Conclusions A growing global population and accelerating pace of climate change pose many challenges that require governments, corporations and members of civil society to work together to ensure sustainable production and consumption of food.
With a growing global population demanding increasingly diverse food, and an accelerating rate of globalization, the contribution of food and its transportation to GHG emissions has emerged as an important focus, encapsulated in the concept of ‘food-miles’. Although ‘food-miles’ are not and should not be considered as the only indicator of the environmental impact of food, they are a characteristic of every food commodity, and a better understanding is required to establish national and international policies fit for managing food-related emissions in an era of ever-increasing urgency to avoid the worst impacts of climate change. In this article, we advance the understanding of food-miles and their associated global GHG emissions through a global analysis of unprecedented spatial and sectoral detail, and in terms of contributions of immediate producers and suppliers of food, and their upstream supply-chain network. The contribution of this study to the literature is twofold. First, we report that global food-miles emissions (transport) are about 3.0 GtCO 2 e, equivalent to nearly 30% of food-system emissions (transport and production) or 19% of total food-system emissions (transport, production and land-use change). Using an innovative accounting framework, we find that food-miles emissions are 3.5–7.5 times higher than previous estimates, a finding that requires reconsideration of policies governing global food trade and consumption. In particular, vegetable and fruit consumption make up more than a third of global food-miles emissions, and almost double their production-related emissions. These new findings call for a reconsideration of the trade-off between more sustainable localized food supply versus international food trade, supporting food security. Second, we show that food-miles emissions are driven by the affluent world. On one hand, as we have shown, localizing food supply leads to emissions reductions. 
On the other hand, international trade plays an important role in providing access to nutritious food 3 , 4 , 5 , 6 and mitigating food insecurity for vulnerable populations 41 in low-income countries 42 . In this respect, however, we show that high-income countries represent only about 12.5% of the world’s population, but are associated with 52% and 46% of international food-miles and emissions, respectively. In contrast, with about half of the global population, low-income countries (per-capita GDP < US$3,000) cause only 12% and 20% of international food-miles and emissions, respectively. This means that reducing food-miles would not necessarily compromise food security, especially when most of such reductions occur in the affluent world, for example, through fiscal measures such as carbon pricing and import duties, with revenues being recycled to protect vulnerable communities from potential food price increases 43 , 44 . To mitigate food system environmental impact, we conclude that the strategy of dietary change to reduce animal product consumption and promote plant-based foods must at least be coupled with switching towards more local production in high-income countries 45 , 46 . This strategy could be supported by tapping into the considerable potential of peri-urban agriculture in nourishing large numbers of urban residents 47 . Our findings thus contribute to public advocacy in providing a more nuanced argument for the notion of sourcing food more locally where appropriate 48 . A low-emissions food system requires management at different supply-chain stages and calls for engagement of different actors 49 . For example, the United Nations Environment Programme 50 concludes that “when we talk about global food systems, we are using a more holistic lens, expanding the conversation to include the entire value chain—not only production and consumption but also food processing, packaging, transport, retail and food services. 
By considering the entire system, we are better positioned to understand problems and to address them in a more integrated way.” In addition to improving land use and animal husbandry 51 , and reducing food loss at the farm 52 , there exist options for technological improvements of transport channels such as expanding the use of cleaner-energy carriers and vehicles 53 . Further downstream, wholesalers, retailers and hospitality providers need to be made aware of the environmental implications of their respective procurement, distribution and marketing strategies 52 . Both investors and governments are able to assist in creating financial and regulatory environments in which sustainable food supply can thrive 54 . Finally, consumers are key to sustainability in their dietary choices and purchasing behaviour 21 . Suppliers, including producers, processors, wholesalers, retailers and hospitality providers, have the ability to increase the shares of local food markets by improving access, quality, range, attractiveness and service convenience of local fruit and vegetables, meat and poultry, seafood and dairy products 55 , 56 . However, changing consumers’ attitudes and behaviour towards sustainable diets, and avoiding high-impact and/or remote food producers, can bring about environmental benefits on a scale that producers cannot achieve 21 . Our findings support the FAO’s 57 notion that “value-chain development emphasizes systemic analyses and integrated interventions to improve the chain’s performance”, and the advocacy of “encouraging consumption of locally produced food” highlighted in the Intergovernmental Panel on Climate Change’s Special Report on Climate Change and Land 58 . 
Global governance bodies thus call for improved communication of the environmental impacts of food-miles—such as the results obtained from the systemic analysis in this work—throughout the supply-chain actor community, using multistakeholder partnerships, thereby enabling a more integrated food system management 54 that is focused on achieving food security with minimal environmental impact. Methods Our food-production emissions and food-miles calculation is based on a multi-region input–output (MRIO) framework ( Supplementary Notes and Supplementary Methods ), where the global economy is aggregated into certain sectors and regions with interregional and interindustry transactions being captured across the entire supply chain 59 . We first calculate global food-production emissions based on the standard MRIO-based carbon footprint calculation method ( Supplementary Notes and Supplementary Methods ) that is widely used in high-impact footprint studies such as on biodiversity 25 , nitrogen emissions 26 and carbon emissions from global tourism 27 . The corresponding food production emissions obtained refer to the carbon footprint of production activities assigned to the final demand for global food consumption. We then estimate global food-miles and emissions based on the improved food-miles calculation method ( Supplementary Notes and Supplementary Methods ) by integrating physical transportation distance, mass, modes and emissions coefficients into our MRIO model, which contains nearly 30 million direct trade connections. We compare two accounting perspectives: destination- and origin-based accounting, where the former allocates food-miles and emissions to the destination region, while the latter allocates them to the country of origin. The two perspectives reflect the emission responsibility borne by importers and exporters, respectively ( Supplementary Notes and Supplementary Methods ).
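The MRIO-based footprint calculation traces final demand back through the interindustry transactions matrix via the Leontief inverse. A minimal numerical sketch of this mechanism (toy data with four region-sector pairs; the study's actual model spans 73 regions and 37 sectors, and all values below are illustrative):

```python
import numpy as np

# Toy MRIO system: 2 regions x 2 sectors flattened to 4 region-sector pairs.
# All numbers are illustrative stand-ins for the study's calibrated data.
T = np.array([[10.,  4.,  2.,  1.],     # interindustry transactions matrix
              [ 3., 12.,  1.,  2.],
              [ 2.,  1.,  8.,  3.],
              [ 1.,  2.,  4., 10.]])
x = np.array([40., 50., 30., 45.])      # gross output by region-sector
y = np.array([12., 15.,  8., 10.])      # final demand for food by region-sector
Q = np.array([20., 10., 24.,  9.])      # direct emissions by region-sector

q = Q / x                               # emissions intensity per unit of output
A = T / x                               # technical coefficients A_ij = T_ij / x_j
L = np.linalg.inv(np.eye(4) - A)        # Leontief inverse (I - A)^-1
footprint = q @ L @ y                   # emissions embodied in final food demand
```

The key step is the Leontief inverse, which sums all direct and indirect production requirements (I + A + A² + …) triggered by the final demand vector.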
In the following, we further describe data sources, limitations and uncertainties associated with our simulations. Data sources We employ a computational-cloud-based collaborative research environment—the Global MRIO Lab 60 , 61 , 62 —to compile tailored global MRIO data, including an inter-regional, inter-industry transactions matrix T and an inter-regional final demand matrix y , based on the most recent global data sources from the United Nations 63 , 64 , 65 , 66 , 67 , 68 and numerous national input–output tables 61 . In essence, the Global MRIO Lab consists of three broad components: (1) raw data repositories are addressed by (2) Matlab code 61 (called AISHA) that in turn uses constrained optimization 69 (an algorithm called KRAS 70 ) to construct an MRIO database that adheres most closely to the abovementioned global data sources. MRIO databases are then saved in (3) user-accessible processed-data repositories. To cover the global supply chain of different food commodities, we distinguish 73 regions and 37 sectors (including 25 food commodities; see Supplementary Fig. 1 and Supplementary Tables 2 and 3 ). Of the 73 regions, 55 represent individual countries/regions whose imports account for 94.3% of global food imports in 2018 (ref. 71 ). The remaining 18 regions are aggregations of countries by their geographical locations. Emissions data Q are taken from the most recent EDGAR database (v.5.0) 28 . Price and mass data specified for different regions and sectors are extracted from the FAOSTAT database 72 and the UN Comtrade Database, where the former provides prices received by farmers for primary crops, live animals and livestock primary products for 180 countries and 212 food-related products, and the latter contains detailed trading statistics reported for over 170 countries with detailed commodity categories.
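Underlying the food-miles estimates described above is a simple accounting identity: commodity mass times transport distance gives the food-miles (in tonne-kilometres), and multiplying by a mode-specific emissions coefficient gives the transport emissions. A toy sketch for one trade flow (the coefficients below are rough illustrative values, not the paper's calibrated Ecoinvent figures):

```python
# Mode-specific CO2e coefficients in kg CO2e per tonne-km (illustrative only;
# the study sources these from Ecoinvent v.3.3).
e = {"water": 0.01, "rail": 0.03, "road": 0.1, "air": 1.1}

def flow_emissions(mass_t, distance_km, mode):
    """Food-miles (tonne-km) and transport emissions (kg CO2e) for one shipment."""
    food_miles = mass_t * distance_km      # tonne-kilometres
    return food_miles, food_miles * e[mode]

# e.g. a hypothetical 1,000 t of fruit shipped 8,000 km by sea
fm, em = flow_emissions(1000, 8000, "water")
```

Summing this quantity over the study's nearly 30 million direct trade connections yields the global food-miles and emissions totals.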
International transportation distances are measured using the geographical distance between the population centroids (adapted from the world-population-centroids database developed based on global night-time lights 73 for 55 individual countries/regions) and the geographical centroids for compound regions, where these bilateral distances ( d rs , Supplementary Fig. 3b ) are extracted based on an open-source geographic information system application, QGIS 3.16. Domestic freight tasks for 56 countries are available from the International Transport Forum (ITF) transport statistics 74 . Based on the log-linear relationship between the domestic tasks of the 56 countries and the product of their population and land area, we then predict the freight-task data for the missing countries (Supplementary Fig. 2 ). By dividing the domestic freight task by commodity mass (in tonnes), domestic freight distances ( d rr ) can be obtained. One of the main problems hindering food-miles estimation is the lack of data on the mode of transport associated with each food commodity. In this study, we account for food-miles and emissions resulting from multimodal transportation that occurred throughout the supply chain of different food commodities by distinguishing international and domestic transportation characterized by different commodity-specific transportation modes: (1) Internationally, less than 0.25% of global freight is transported by air, while maritime shipping and road cover most (90%) of the movement of goods over long distances 29 . Therefore, we assume that two modes—road and water—are available when trading internationally. Based on the geographical location between two countries/regions (that is, bordering on land or separated by sea), either road or water is applied (Supplementary Fig. 3a ). (2) We consider domestic food-miles and emissions caused by both domestic and international suppliers.
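The gap-filling step described above, fitting a log-linear relationship between domestic freight task and population × land area for the 56 ITF countries, then predicting the missing countries, can be sketched as follows (synthetic data stand in for the ITF statistics, and the fitted coefficients are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the 56 ITF countries: log of (population x land area)
# versus log of domestic freight task (tonne-km), near a log-linear relation.
log_x = rng.uniform(10, 20, size=56)
log_y = 0.9 * log_x + 2.0 + rng.normal(0, 0.3, size=56)

slope, intercept = np.polyfit(log_x, log_y, 1)   # fit the log-linear model

def predict_freight_task(pop_times_area):
    """Predict domestic freight task (tonne-km) for a country missing ITF data."""
    return np.exp(slope * np.log(pop_times_area) + intercept)
```

Dividing the predicted freight task by the country's commodity mass then yields its average domestic freight distance, as in the paper.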
The latter comprises the domestic distribution of internationally traded commodities in both destination and origin regions. Domestically, because only 2% of freight was transported by inland waterway in 2015 29 , we assume that three transport modes—rail, road and air—are available in domestic transportation. We adopt the background data for transport in the Ecoinvent database to estimate the proportion of the three transportation modes for each commodity (Supplementary Fig. 2 ). As a result, four types of transport modes are distinguished: water, rail, road and air. The CO 2 e emissions coefficients ( e h ) for different transport modes are sourced from Ecoinvent v.3.3 (Supplementary Table 1 ) 21 . Considering the data availability of multiple data sources, we adopt (1) 2017 as the reference year for input–output parameters ( T , x , y , Q ), price and mass data ( p ) from FAOSTAT and UN Comtrade databases, and domestic freight distances ( d ) from ITF databases; (2) 2013 for commodity-specific transportation modes ( h ) from Ecoinvent; and (3) 2016 for transport mode-specific emissions coefficients ( e ) from Ecoinvent. Limitations We employ some assumptions about domestic and international freight travel distances and modes due to limited data availability. For domestic travel distances, we took the available domestic freight task data of 56 countries from ITF transport statistics 74 , and used the log-linear relationship fitted by these data to predict the average domestic freight distance of the missing countries. We apply this average domestic transportation distance to all food and non-food commodities, because commodity-specific travel distance data are unavailable. Internationally, we note that trade in specific commodities with different trading partners may involve multiple ports, especially for large countries, resulting in variations in international travel distances between region pairs.
In our study, we could not distinguish such differences by including subdivisions within each region, due to our study being limited to 73 world regions. Therefore, we measure average international freight travel distances among these regions using the geographic distance between their population centroids that are available for individual countries/regions. This is based on the assumption that, due to the proportionality between population, GDP and electricity consumption and land area illuminated by anthropogenic lights 75 , such population centroids 73 , 75 obtained from night-time light data are able to reflect centroids of human economic activities. In terms of domestic travel modes, due to the scarcity of such data, we apply the same commodity-specific transport mode data obtained from the Ecoinvent database to all regions. We also note that land-use emissions are excluded from food-production emissions as considered in this study. Uncertainty To assess the uncertainty of our food-miles results, we employ the Monte Carlo technique, which is extensively used for the uncertainty analysis of footprint studies 27 . We use accompanying standard deviation estimates for our MRIO data to perform 1,000 simulation runs with perturbed food-miles parameters ( Supplementary Information , section 1.2.5), then calculate 1,000 perturbed food-miles emissions, and estimate their standard deviations from the statistical distribution of the perturbations (Supplementary Fig. 10 ). The results from 1,000 Monte Carlo runs indicate that global food-miles emissions lie between 2.9 and 3.22 GtCO 2 e (95% level of confidence). Regional standard deviations of the food-miles emissions ranged between 1% and 10% of the reference values (Supplementary Table 5 ). Future work Beyond the commodity-specific GHG emissions detailed in this study, future work could further consider food-miles and emissions per traded gram of protein or other macro/micronutrients. 
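The Monte Carlo uncertainty procedure described above, perturbing inputs by their standard deviations, recomputing the result 1,000 times, and reading off a 95% interval, can be sketched as follows (a toy scalar model stands in for the full MRIO-based calculation; the nominal values and 5% relative standard deviations are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(42)

def food_miles_emissions(mass, distance, coefficient):
    # Stand-in for the full MRIO-based food-miles emissions calculation.
    return mass * distance * coefficient

# Nominal parameter values and an assumed relative standard deviation.
nominal = {"mass": 1000.0, "distance": 8000.0, "coefficient": 0.01}
rel_sd = 0.05

runs = []
for _ in range(1000):
    perturbed = {k: rng.normal(v, rel_sd * v) for k, v in nominal.items()}
    runs.append(food_miles_emissions(**perturbed))

runs = np.array(runs)
lo, hi = np.percentile(runs, [2.5, 97.5])   # 95% confidence interval
```

The spread of the 1,000 perturbed results gives the standard deviation and confidence bounds reported for each region's food-miles emissions.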
Finally, selecting GHG emissions as the currency of our analysis provides an undoubtedly important metric of environmental sustainability, but other factors (for example, biodiversity, water use) also play important roles. Assessing these is a priority for future work. Reporting summary Further information on research design is available in the Nature Research Reporting Summary linked to this article. Data availability Data supporting the findings of this study are available within the article and its Supplementary Information files, or are available from the corresponding author upon reasonable request. Source data are provided with this paper. Code availability The codes developed for the analyses and to generate results are available from the corresponding author on reasonable request.
In 2007, 'locavore'—a person who only eats food grown or produced within a 100-mile (161km) radius—was the Oxford Word of the Year. Now, 15 years later, University of Sydney researchers urge it to trend once more. They have found that 19 percent of global food system greenhouse gas emissions are caused by transportation. This is up to seven times higher than previously estimated, and far exceeds the transport emissions of other commodities. For example, transport accounts for only seven percent of industry and utilities emissions. The researchers say that especially among affluent countries, the biggest food transport emitters per capita, eating locally grown and produced food should be a priority. Dr. Mengyu Li from the University of Sydney School of Physics is the lead author of the study, to be published in Nature Food. She said: "Our study estimates global food systems, due to transport, production, and land use change, contribute about 30 percent of total human-produced greenhouse gas emissions. So, food transport—at around six percent—is a sizeable proportion of overall emissions. "Food transport emissions add up to nearly half of direct emissions from road vehicles." Nutritional ecologist and co-author, Professor David Raubenheimer, said: "Prior to our study, most of the attention in sustainable food research has been on the high emissions associated with animal-derived foods, compared with plants. "Our study shows that in addition to shifting towards a plant-based diet, eating locally is ideal, especially in affluent countries". Rich countries excessively contribute Using their own framework called FoodLab, the researchers calculated that food transport corresponds to about 3 gigatonnes of emissions annually—equivalent to 19 percent of food-related emissions. 
Their analysis incorporates 74 countries (origin and destination); 37 economic sectors (such as vegetables and fruit; livestock; coal; and manufacturing); international and domestic transport distances; and food masses. While China, the United States, India, and Russia are the top food transport emitters, overall, high-income countries are disproportionate contributors. Countries such as the United States, Germany, France, and Japan constitute 12.5 percent of the world's population yet generate nearly half (46 percent) of food transport emissions. Examples of supply chains terminating in red meat consumption by households in China. Circles represent food production emissions; arrows represent transport emissions. Credit: Mengyu Li/University of Sydney. Australia is the second largest exporter of food transport emissions, given the breadth and volume of its primary production. Transport emissions are also food type dependent. With fruit and vegetables, for example, transport generates nearly double the emissions of production. Fruit and vegetables together constitute over a third of food transport emissions. "Since vegetables and fruit require temperature-controlled transportation, their food miles emissions are higher," Dr. Li said. The locavore discount The researchers calculated the reduction in emissions if the global population ate only locally: 0.38 gigatonnes, equivalent to emissions from driving one tonne to the Sun and back, 6,000 times. Though they acknowledge this scenario is not realistic, for example, because many regions cannot be self-sufficient in food supply, it could be implemented to varying degrees. "For example, there is considerable potential for peri-urban agriculture to nourish urban residents," co-author Professor Manfred Lenzen said. This aside, richer countries can reduce their food transport emissions through various mechanisms.
These include investing in cleaner energy sources for vehicles, and incentivising food businesses to use less emissions-intensive production and distribution methods, such as natural refrigerants. "Both investors and governments can help by creating environments that foster sustainable food supply," Professor Lenzen said. Yet supply is driven by demand—meaning the consumer has the ultimate power to change this situation. "Changing consumers' attitudes and behavior towards sustainable diets can reap environmental benefits on the grandest scale," added Professor Raubenheimer. "One example is the habit of consumers in affluent countries demanding unseasonal foods year-round, which need to be transported from elsewhere. "Eating local seasonal alternatives, as we have throughout most of the history of our species, will help provide a healthy planet for future generations."
10.1038/s43016-022-00531-w
Space
Herschel finds past-prime star may be making planets
"An old disk that can still form a planetary system," by E. Bergin et al, is published in Nature, 31 January 2013. www.nature.com/nature/journal/ … ull/nature11805.html Journal information: Nature
http://www.nature.com/nature/journal/v493/n7434/full/nature11805.html
https://phys.org/news/2013-01-herschel-past-prime-star-planets.html
Abstract From the masses of the planets orbiting the Sun, and the abundance of elements relative to hydrogen, it is estimated that when the Solar System formed, the circumstellar disk must have had a minimum mass of around 0.01 solar masses within about 100 astronomical units of the star 1 , 2 , 3 , 4 . (One astronomical unit is the Earth–Sun distance.) The main constituent of the disk, gaseous molecular hydrogen, does not efficiently emit radiation from the disk mass reservoir 5 , and so the most common measure of the disk mass is dust thermal emission and lines of gaseous carbon monoxide 6 . Carbon monoxide emission generally indicates properties of the disk surface, and the conversion from dust emission to gas mass requires knowledge of the grain properties and the gas-to-dust mass ratio, which probably differ from their interstellar values 7 , 8 . As a result, mass estimates vary by orders of magnitude, as exemplified by the relatively old (3–10 million years) star TW Hydrae 9 , 10 , for which the range is 0.0005–0.06 solar masses 11 , 12 , 13 , 14 . Here we report the detection of the fundamental rotational transition of hydrogen deuteride from the direction of TW Hydrae. Hydrogen deuteride is a good tracer of disk gas because it follows the distribution of molecular hydrogen and its emission is sensitive to the total mass. The detection of hydrogen deuteride, combined with existing observations and detailed models, implies a disk mass of more than 0.05 solar masses, which is enough to form a planetary system like our own. Main Commonly used tracers of protoplanetary disk masses are thermal emission from dust grains and rotational lines of carbon monoxide (CO) gas. However, the methods by which these are detected rely on unconstrained assumptions. The dust detection method has to assume an opacity per gram of dust, and grain growth can change this value drastically 15 . 
The gas mass is then calculated by multiplying the dust mass by the gas-to-dust ratio, which is usually assumed to be ∼ 100 from measurements of the interstellar medium 16 . The gas mass thus depends on a large and uncertain correction factor. The alternative is to use rotational CO lines as gas tracers, but their emission is optically thick and therefore traces the disk surface temperature rather than the midplane mass. The use of CO as a gas tracer thus leads to large discrepancies between mass estimates for different models of TW Hya (differing by more than an order of magnitude in solar masses), even though each matches a similar set of observations 13 , 14 . Using the Herschel Space Observatory 17 Photodetector Array Camera and Spectrometer 18 , we robustly detected (9 σ ) the lowest rotational transition, J = 1 → 0, of hydrogen deuteride (HD) in the closest ( D ≈ 55 pc) and best-studied circumstellar disk around TW Hya ( Fig. 1 ). This star is older (3–10 Myr; refs 9 , 10 , 19 ) than most stars with gas-rich circumstellar disks 8 . The abundance of deuterium atoms relative to hydrogen is well characterized, via atomic electronic transitions, to be x D = (1.5 ± 0.1) × 10 −5 in objects that reside within ∼ 100 pc of the Sun 20 . Adding a hydrogen atom to each, to form H 2 and HD, which is appropriate for much of the disk mass, provides an HD abundance relative to H 2 of x HD = 3.0 × 10 −5 . We combine the HD data with existing molecular observations to set new constraints on the disk mass within 100 au , which is the most fundamental quantity that determines whether planets can form. The disk mass also governs the primary mode of giant-planet formation, either through core accretion or gravitational instability 21 . In this context, we do not know whether the Solar System formed within a typical disk, because nearly half of the present estimates of extrasolar disk masses are less than the minimum solar nebula mass 8 .
Our current census of extrasolar planetary systems furthermore suggests that even larger disk masses are necessary to form many of the exoplanetary systems seen 22 , 23 . Figure 1: Herschel detection of HD in the TW Hya protoplanetary disk. a , The fundamental J = 1 → 0 line of HD lies at ∼ 112 μm. On 20 November 2011, it was detected from the direction of the TW Hya disk at the 9 σ level. The total integrated flux is (6.3 ± 0.7) × 10 −18 W m −2 . We also report a detection of the warm disk atmosphere in CO J = 23 → 22 with a total integrated flux of (4.4 ± 0.7) × 10 −18 W m −2 . The J = 1 → 0 line of HD was previously detected by the Infrared Space Observatory in a warm gas cloud exposed to radiation from nearby stars 27 . Other transitions have also been detected in shocked regions associated with supernovae and outflows from massive stars 28 , 29 . b , Simultaneous observations of HD J = 2 → 1 are shown. For HD J = 2 → 1, we find a detection limit of <8.0 × 10 −18 W m −2 (3 σ ). We also report a detection of the OH 2 Π 1/2 9/2 → 7/2 doublet near 55.94 μm with an integrated flux of (4.93 ± 0.27) × 10 −17 W m −2 . The spectra include the observed thermal dust continuum of ∼ 3.55 Jy at both wavelengths. With smaller rotational energy spacings and a weak electric dipole moment, HD J = 1 → 0 is one million times more emissive than H 2 for a given gas mass at a gas temperature of T gas = 20 K. The HD line flux ( F l ) sets a lower limit to the H 2 gas mass at distance D through equation (1) ( Supplementary Information ). If HD is optically thick or deuterium is contained in other molecules such as polycyclic aromatic hydrocarbons or molecular ices, the conversion from deuterium mass to hydrogen mass will be higher and the mass will thus be larger, hence the lower limit. The strong temperature dependence arises from the fractional population of the J = 1 state, which has a value of f J = 1 ≈ 3exp(−128.5 K/ T gas ) for T gas < 50 K in thermal equilibrium.
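The form of equation (1) can be reconstructed from the quantities defined above: the line flux F l at distance D fixes the number of HD molecules in the J = 1 state, the fractional population f J = 1 converts this to total HD, and the abundance x HD converts HD to H 2 . Schematically (with A 10 and ν 10 the Einstein coefficient and frequency of the J = 1 → 0 transition; this is a reconstruction of the form of the relation, not the paper's exact expression):

```latex
M_{\mathrm{gas}} \;\gtrsim\;
\underbrace{\frac{4\pi D^{2} F_{l}}{A_{10}\, h\nu_{10}}}_{N(\mathrm{HD},\,J=1)}
\;\times\; \frac{1}{f_{J=1}(T_{\mathrm{gas}})}
\;\times\; \frac{2 m_{\mathrm{H}}}{x_{\mathrm{HD}}},
\qquad
f_{J=1} \approx 3\, e^{-128.5\,\mathrm{K}/T_{\mathrm{gas}}}
```

At T gas = 20 K, for example, f J = 1 ≈ 4.9 × 10 −3 , which is why the inferred mass limit is so sensitive to the assumed gas temperature.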
Owing to the low fractional population in the J = 1 state, HD does not emit appreciably from gas with T ≈ 10–15 K, which is the estimated temperature in the outer disk mass reservoir (at a radius R ≳ 20–40 au ). The HD mass derived from equation (1) provides an estimate of the mass in warm gas, and is therefore a lower limit on the total mass within 100 au . The only factor in equation (1) that could lower the mass estimate is a higher T gas . The upper limit on the J = 2 → 1 transition of HD ( Fig. 1 ) implies that T gas < 80 K in the emitting region. This T gas estimate yields a minimum possible mass, but T gas is unlikely to be this high for the bulk of the disk. CO rotational transitions are optically thick and the level populations are in equilibrium with T gas , and so they provide a measure of T gas . Atacama Large Millimeter/submillimeter Array (ALMA) observations of CO J = 3 → 2 emission in a 1.7″ × 1.5″ beam (corresponding to gas within a radius of ∼ 43 au ) ( Supplementary Information and Supplementary Fig. 1 ) yield an average T gas of 29.7 K within 43 au , and a correspondingly larger lower limit on the gas mass. This value is still likely to be too low, because the emission from optically thick CO presumably gives information about material closer to the surface than does HD, and this gas will be warmer than the HD line-emitting region. Thus, essentially all correction factors would increase the mass beyond this conservative limit, which already rules out a portion of the low end of previous mass determinations. To determine the mass more accurately, we turn to detailed models that incorporate explicit gas thermal physics providing for substantial radial and vertical thermal structure. Both published models of the TW Hya disk reproduce a range of gas-phase emission lines, but in one case with a disk mass of 0.06 solar masses (ref. 14 ) and in the other with a substantially lower mass (ref. 13 ) ( Supplementary Information and Supplementary Table 1 ). These models were both placed into detailed radiation transfer simulations.
The results from this calculation and the adopted physical structure are given in Fig. 2 for the higher-mass (0.06-solar-mass) model. Figure 2c shows the cumulative flux as a function of radius for the higher-mass model; over 80% of the emission is predicted to arise from gas within a radius of 80 au . Furthermore, Fig. 2d provides a calculation of the HD emissive mass as a function of gas temperature. This calculation suggests that gas with a temperature of 30–50 K is responsible for the majority of the HD emission. Figure 2: Model of the physical structure and HD emission of the TW Hya circumstellar disk. a , Radial ( R ) and vertical ( Z ) distribution of the H 2 volume density, n H2 , calculated in a model disk with mass 0.06 solar masses (ref. 14 ). Contours start from the top and are stepped in factors of ten. b , Gas temperature structure as derived by the thermochemical model 14 . Contours are at 10, 25, 50, 75, 100, 150, 200, 250 and 300 K. c , Radial and vertical distribution of the HD J = 1 volume density, n HD J = 1 , predicted in a model disk with the gas density and temperature structure as given in a and b , with an HD abundance relative to H 2 of 3.0 × 10 −5 . Contours start from the top at log 10 [ n HD J = 1 (cm −3 )] = −3 and are stepped in factors of ten. The red line shows the cumulative flux contribution as a function of radius in terms of fractions of the overall predicted flux, 3.1 × 10 −18 W m −2 . To predict the HD line emission, we calculate the solution of the equations of statistical equilibrium including the effects of line and dust opacity using the LIME code 30 . d , Fraction of the HD emission arising from gas with different temperatures, computed as a function of the mass of HD excited to the J = 1 state in gas at temperatures binned in units of 5 K ( M HD J = 1 ( T )) normalized to the total mass of HD with J = 1 ( M HD J = 1 ).
In particular, M HD J = 1 ( T ) is computed successively in gas temperature bins of 5 K and then normalized to the total mass of HD in the J = 1 state. The lower-mass model (ref. 13 ) predicts an HD line flux of F l = 3.8 × 10 −19 W m −2 , which is more than an order of magnitude below the detected level. For this model to reach the observed flux, the disk mass would have to be 20 times greater and so this lower mass is ruled out. The 0.06-solar-mass model predicts that F l = 3.1 × 10 −18 W m −2 , which is still a factor of two below the observed value: even the ‘high’ mass estimate is too low. On the basis of this model, we estimate that the disk gas mass within 80 au , where the majority of the HD emission arises, is more than 0.05 solar masses. Both of these models match other observations: the low-disk-mass model matches CO and 13 CO J = 3 → 2 emission, and the higher-mass model reproduces CO J = 2 → 1, J = 3 → 2 and J = 6 → 5 emission. Both models also reproduce observed emission from other species. However, they differ by a factor of ten in predicting the HD emission. This difference shows the value of HD in constraining masses. The age of TW Hya is uncertain. A canonical age has been estimated for the cluster as a whole (ref. 9 ). However, there could be an age spread in cluster members, and ages estimated for TW Hya itself range from 3 to 10 Myr (refs 10 , 19 ). Even at the low end of this range, TW Hya is older than the half-life of gaseous disks, which is inferred to be about 2 Myr (ref. 8 ). In the case of the TW Hya association, there is also little evidence for an associated molecular cloud 24 , which is an additional indicator that this system is older than most gas-rich disks. The lifetime of the gaseous disk is important because it sets the available time frame for the formation of gas giants equivalent to Jupiter or Saturn. According to our analysis, TW Hya contains a massive gas disk (more than 0.05 solar masses) that is several times the minimum mass required to make the planets in the Solar System.
Thus, this ‘old’ disk can still form a planetary system like our own. The recent detection of cold water vapour from TW Hya yielded indirect evidence for a large water-ice reservoir (equal in mass to several thousand Earth oceans) assuming a lower disk mass than derived here (ref. 25 ). Our higher mass estimate implies a larger water-ice reservoir, perhaps greater in mass by a factor of two. The mass estimate in this system lies at the upper end of previous mass measurements 8 , hinting that other disk masses are underestimated. The main uncertainty in the masses derived here is the gas temperature structure of the disk. In future, observations of optically thick molecular lines, particularly CO, can be used to trace the thermal structure of gas in the disk. Observations of rarer CO isotopologues will then provide constraints on the temperature in deeper layers 26 . With ALMA, we will readily resolve multiple gas temperature tracers within a radius of 80 au, where HD strongly emits. When these are used in tandem with HD, we will be able to derive the gas mass with much greater accuracy (our simulations suggest to within a factor of 2–3). Moreover, additional HD detections could be provided by the Herschel Space Observatory and with higher spectral resolution by the German Receiver for Astronomy at Terahertz Frequencies on board the Stratospheric Observatory for Infrared Astronomy under favourable atmospheric conditions. These data could be used alongside emission from species such as C 18 O, C 17 O or the dust to calibrate these more widely available probes to determine the disk gas mass. Thus, with the use of HD to complement other observations and constrain models, we may finally place useful constraints on one of the most important quantities that governs the process of planetary formation.
(Phys.org)—A star thought to have passed the age at which it can form planets may, in fact, be creating new worlds. The disk of material surrounding the surprising star called TW Hydrae may be massive enough to make even more planets than we have in our own solar system. The findings were made using the European Space Agency's Herschel Space Telescope, a mission in which NASA is a participant. At roughly 10 million years old and 176 light years away, TW Hydrae is relatively close to Earth by astronomical standards. Its planet-forming disk has been well studied. TW Hydrae is relatively young but, in theory, it is past the age at which giant planets may already have formed. "We didn't expect to see so much gas around this star," said Edwin Bergin of the University of Michigan in Ann Arbor. Bergin led the new study appearing in the journal Nature. "Typically stars of this age have cleared out their surrounding material, but this star still has enough mass to make the equivalent of 50 Jupiters," Bergin said. In addition to revealing the peculiar state of the star, the findings also demonstrate a new, more precise method for weighing planet-forming disks. Previous techniques for assessing the mass were indirect and uncertain. The new method can directly probe the gas that typically goes into making planets. Planets are born out of material swirling around young stars, and the mass of this material is a key factor controlling their formation. Astronomers did not know before the new study whether the disk around TW Hydrae contained enough material to form new planets similar to our own. This artist's concept illustrates the planet-forming disk around TW Hydrae, located about 175 light-years away in the Hydra, or Sea Serpent, constellation. In 2011, astronomers used the Herschel space observatory to detect copious amounts of cool water vapor, illustrated in blue, emanating from the star's planet-forming disk of dust and gas.
Credit: NASA/JPL-Caltech "Before, we had to use a proxy to guess the gas quantity in the planet-forming disks," said Paul Goldsmith, the NASA project scientist for Herschel at NASA's Jet Propulsion Laboratory in Pasadena, Calif. "This is another example of Herschel's versatility and sensitivity yielding important new results about star and planet formation." Using Herschel, scientists were able to take a fresh look at the disk with the space telescope to analyze light coming from TW Hydrae and pick out the spectral signature of a gas called hydrogen deuteride. Simple hydrogen molecules are the main gas component of planets, but they emit light at wavelengths too short to be detected by Herschel. Gas molecules containing deuterium, a heavier version of hydrogen, emit light at longer, far-infrared wavelengths that Herschel is equipped to see. This enabled astronomers to measure the levels of hydrogen deuteride and obtain the weight of the disk with the highest precision yet. "Knowing the mass of a planet-forming disk is crucial to understanding how and when planets take shape around other stars," said Glenn Wahlgren, Herschel program scientist at NASA Headquarters in Washington. Whether TW Hydrae's large disk will lead to an exotic planetary system with larger and more numerous planets than ours remains to be seen, but the new information helps define the range of possible planet scenarios. "The new results are another important step in understanding the diversity of planetary systems in our universe," said Bergin. "We are now observing systems with massive Jupiters, super-Earths, and many Neptune-like worlds. By weighing systems at their birth, we gain insight into how our own solar system formed with just one of many possible planetary configurations."
www.nature.com/nature/journal/ … ull/nature11805.html
Chemistry
Growing corn to treat rare disease
www.nature.com/ncomms/journal/ … full/ncomms2070.html Journal information: Nature Communications
http://www.nature.com/ncomms/journal/v3/n9/full/ncomms2070.html
https://phys.org/news/2012-09-corn-rare-disease.html
Abstract Lysosomal storage diseases are a class of over 70 rare genetic diseases that are amenable to enzyme replacement therapy. Towards developing a plant-based enzyme replacement therapeutic for the lysosomal storage disease mucopolysaccharidosis I, here we expressed α- L -iduronidase in the endosperm of maize seeds by a previously uncharacterized mRNA-targeting-based mechanism. Immunolocalization, cellular fractionation and in situ RT–PCR demonstrate that the α- L -iduronidase protein and mRNA are targeted to endoplasmic reticulum (ER)-derived protein bodies and to protein body–ER regions, respectively, using regulatory (5′- and 3′-UTR) and signal-peptide coding sequences from the γ-zein gene. The maize α- L -iduronidase exhibits high activity, contains high-mannose N-glycans and is amenable to in vitro phosphorylation. This mRNA-based strategy is of widespread importance as plant N-glycan maturation is controlled and the therapeutic protein is generated in a native form. For our target enzyme, the N-glycan structures are appropriate for downstream processing, a prerequisite for its potential as a therapeutic protein. Introduction Lysosomal storage diseases (LSDs) are a broad class of genetic diseases that are caused by mutations in proteins critical for lysosomal function; collectively, they represent over 70 disorders 1 , 2 , 3 , 4 , 5 . In many of these diseases, there is a deficiency of a single hydrolase enzyme within the lysosome; all are progressive in nature, as the affected individual is unable to degrade certain macromolecules, a process essential for normal growth and homeostasis of tissues. Many LSDs are amenable to enzyme replacement therapy (ERT)—a process that takes advantage of plasma membrane receptor mechanisms that mediate cellular uptake of a recombinant purified enzyme following its intravenous delivery 6 . ERT has become established for six of the LSDs. 
One of the mucopolysaccharidoses, MPS I, is an LSD characterized by the deficiency of α- L -iduronidase, an enzyme involved in the stepwise degradation of glycosaminoglycans; in severely affected humans, this genetic disease is characterized by profound skeletal, cardiac, and neurological pathology and death in early childhood 7 . The average annual costs of the ERT drug for MPS I disease (Laronidase or Aldurazyme) range from $300,000 USD (children) to $800,000 USD (adults). Transgenic plants, cultured plant cells and seeds are potentially cost-effective and safe systems for large-scale production of recombinant therapeutic proteins; they offer considerable advantages as production systems, but are not without challenges 8 , 9 . The majority of human lysosomal enzymes are glycoproteins and the nature of their N-glycan structures can influence their stability, transport and biological activity. Within the Golgi complex of both plant and animal cells, enzymes convert many of the original high-mannose N-glycans of proteins to complex N-glycans through sequential reactions that rely on accessibility of the glycan chain(s) to the Golgi-processing machinery 9 , 10 . In the mammalian Golgi complex, many of the N-glycan core structures (Man 3 GlcNAc 2 ) are extended further to contain penultimate galactose and terminal sialic acid residues 10 . In contrast, typically processed N-linked glycans of plant proteins are mostly of a Man 3 GlcNAc 2 structure with or without β-1,2-xylose and/or α-1,3-fucose 9 . For glycoprotein therapeutics destined for parenteral administration (as in ERT), the presence of plant-specific xylose and/or fucose residues is problematic, because the therapeutic is potentially highly immunogenic 9 . Different strategies have been used to modify the N-glycan processing of plant-made recombinant proteins; one strategy has been to manipulate subcellular protein targeting to avoid Golgi transport 8 , 9 . 
This has involved the addition of C-terminal HDEL/KDEL sequences or similar motifs for endoplasmic reticulum (ER) retention, or the addition of transmembrane and cytoplasmic tail sequences that potentially act as anchors for delivering recombinant proteins to vacuolar compartments independent of the Golgi complex 11 . These strategies suffer the drawbacks of requiring the addition of foreign amino acids that remain on the mature recombinant protein. The HDEL/KDEL strategy is very efficient for controlling the N-glycosylation of some recombinant proteins, especially antibodies 12 , 13 , 14 . However, the efficacy of this strategy is protein-specific 9 . For some recombinant proteins, including human α- L -iduronidase produced in Arabidopsis seeds, there is clearly an insufficient control of N-glycan maturation 15 . The use of vacuolar anchors as a means of controlling complex N-glycan formation 11 is not yet backed up by definitive evidence. The mechanisms by which proteins of eukaryotic cells are targeted intracellularly are most commonly mediated through signals on the protein itself (for example, amino acid or carbohydrate motifs). More recently, it has been recognized that signals on mRNAs can also mediate the targeting of proteins to specific intracellular regions 16 , 17 . γ-Zein storage proteins (prolamins) of maize seeds are deposited in ER-derived protein bodies and do not transit through the Golgi complex 18 . In the present study, we exploited the potential of maize seeds for hosting the production of recombinant human α- L -iduronidase by taking advantage of their capacity for ER–protein body localization of the gene product. We hypothesized that α- L -iduronidase would be targeted to ER-derived protein bodies by mRNA targeting using γ-zein cis -elements. 
If successful, the 'native' recombinant therapeutic protein (that is, possessing no foreign amino acid motifs) would stably accumulate in maize endosperm cells, but have avoided transit through the Golgi complex and the consequent undesired N-glycan maturation. We found that the regulatory (5′- and 3′-untranslated region (UTR)) and signal-peptide-encoding sequences of the γ-zein gene are sufficient for direct ER–protein body deposition of the majority of the α- L -iduronidase by an mRNA-targeting-based mechanism. Moreover, the α- L -iduronidase was enzymatically active and possessed kinetic characteristics that were comparable to those of the commercial MPS I ERT product (Aldurazyme). After immunoadsorbing the minor contaminant of Golgi-modified enzyme, the affinity-purified maize recombinant α- L -iduronidase contained exclusively high-mannose N-glycans and was amenable to in vitro phosphorylation using a recombinant soluble GlcNAc-1-phosphotransferase, which is the first step to creating the mannose-6-phosphate (M6P) lysosomal sorting motif. We discuss the widespread importance of the targeting of recombinant proteins to ER-derived protein bodies in maize by mRNA targeting for therapeutic protein production. Importantly, no topogenic information from mature-protein-coding sequences of γ-zein was required to accumulate the human recombinant protein; hence, the production mechanism avoids the requirement of engineering a proteolytic cleavage site for removal of non-native amino acids. Results Strategy for expression of human α- L -iduronidase in maize To investigate the capacity of an mRNA targeting mechanism to direct the localized accumulation of our recombinant human enzyme in maize seeds, we designed two constructs—a test construct containing the putative cis mRNA-targeting elements of the 27-kDa γ-zein gene and a control construct with substituted sequences ( Fig. 1a,b ). 
The test construct contained the 71-bp 5′-UTR, signal peptide-encoding sequences and the 3′-UTR terminator, all from the γ-zein gene. As with the test construct, the control construct contained the γ-zein promoter and 62 bp of the γ-zein gene 5′-UTR, but in contrast, contained 10 bp of 5′-UTR and signal peptide-encoding sequences of the α- L -iduronidase gene; the 3′-region was derived from the Nos ( nopaline synthase ) gene. Figure 1: Constructs and expression of recombinant human α- L -iduronidase in transgenic maize. Gene constructs for expression in maize seeds and the predicted glycosylation status and subcellular localization of the synthesized protein derived from expression of these constructs ( a , b ). Nos , nopaline synthase ( Nos ) gene 3′-region; SP, signal peptide; 3′-UTR, 3′-untranslated region. The human α- L -iduronidase ( IDUA ) gene was driven by the 1638-bp γ-zein promoter (constructs shown in a , b ). Additional γ-zein gene regulatory sequences flanking the α- L -iduronidase mature coding region included the 71-bp 5′-UTR, the signal peptide-encoding sequences (57 bp) and the 3′-UTR terminator (191 bp; a , Test). The control construct ( b ) contained in addition to the 1638-bp γ-zein promoter, 62 bp of the γ-zein gene 5′-UTR, and the 10-bp 5′-UTR and signal peptide-encoding sequences (78 bp) of the α- L -iduronidase gene. In this case, the 3′-region was from the Nos ( nopaline synthase ) gene (3′-UTR and transcription termination sequences). ( c ) Northern blot to detect human α- L -iduronidase (IDUA) transcripts in the transgenic developing maize endosperms. The ~2.2-kb IDUA transcripts were detected in different transgenic lines (top panel 1–8); ribosomal RNAs were stained by ethidium bromide to verify the quantity of RNA loaded (lower panel). UT, untransformed control; 1–7, independent transgenic lines expressing test construct; 8–10, independent transgenic lines expressing control construct. 
( d ) Western blot analysis to detect α- L -iduronidase in transgenic maize seeds. Proteins were extracted from endosperms of T2 developing seeds expressing the test construct (left panel) and from endosperms of T2 mature seeds expressing the control construct (right panel). UT, untransformed control; 1–7, indicate independent transgenic lines. Equal protein (50 μg) was loaded in each lane. Northern blot analysis showed that the α- L -iduronidase gene was transcribed in developing seeds (18–20 days after pollination, DAP) of independent transgenic lines ( Fig. 1c ), albeit to variable extents. Western blot analysis detected α- L -iduronidase protein in mature or developing T2 seeds of most of the lines ( Fig. 1d ). The highest expressing line for the test construct accumulated α- L -iduronidase in the endosperm at 0.12% total soluble protein (T2 seeds). The estimated yield of α- L -iduronidase for the highest expressing line was ~5.2 mg α- L -iduronidase per kg fresh weight (or ~9.4 mg α- L -iduronidase per kg dry weight) at 20 DAP. The estimated yield of α- L -iduronidase derived from expression of the control construct was lower than that derived from the test construct; amongst the transgenic lines analysed, the maximum was ~0.07% TSP. γ-Zein regulatory sequences target mRNA to protein body–ER To determine whether γ-zein regulatory sequences are sufficient for the localization of α- L -iduronidase mRNAs to the protein body–ER of maize endosperm cells, we examined the distribution of α- L -iduronidase mRNAs in cryosectioned developing maize endosperms using in situ RT–PCR. Confocal microscopy showed that α- L -iduronidase mRNAs of the test lines were co-localized exclusively to the protein body regions, having a distribution similar to that of γ-zein mRNAs. In contrast, α- L -iduronidase mRNAs from the control lines were localized to the protein body and non-protein body (cisternal ER) regions ( Fig. 2 ). 
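As a rough consistency check, the two yield figures quoted above (~5.2 mg per kg fresh weight versus ~9.4 mg per kg dry weight at 20 DAP) imply a seed moisture content of roughly 45% at that stage. This is a hedged back-calculation for illustration, not a value reported in the paper:

```python
# Back-calculate the moisture fraction implied by the fresh- vs dry-weight yields.
yield_fw = 5.2  # mg IDUA per kg fresh weight (reported)
yield_dw = 9.4  # mg IDUA per kg dry weight (reported)

# The same absolute amount of enzyme is spread over more fresh mass,
# so fresh weight / dry weight = yield_fw / yield_dw.
moisture_fraction = 1 - yield_fw / yield_dw
print(f"implied seed moisture at 20 DAP: {moisture_fraction:.0%}")  # ~45%

# Note mg/kg is numerically the same as g/tonne: one tonne of fresh
# seed at this stage would carry about 5.2 g of recombinant enzyme.
grams_per_tonne = yield_fw
```

The ~45% figure is plausible for developing maize endosperm at 20 DAP, which suggests the two reported yields describe the same material on different bases.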
These data suggest that mRNAs of α- L -iduronidase with 5′-UTR, signal peptide and 3′-UTR sequences from the γ-zein gene are transported to protein body–ER. Figure 2: Subcellular localization of α- L -iduronidase mRNAs in developing transgenic endosperm sections. Endosperm sections expressing the α- L -iduronidase gene (control and test constructs) were subjected to in situ RT–PCR to assess mRNA localization (middle images). ER-derived protein bodies (1 to 2-μm in diameter) were visualized by staining the sections with rhodamine hexyl ester (left images), a dye that preferentially stains the prolamin protein bodies of maize. α- L -Iduronidase mRNAs were in situ -labelled by Oregon Green-dUTP (middle images). In the control sections, α- L -iduronidase mRNAs were seen in both protein body–ER (yellow in the merged image) and in cisternal ER regions (green in the merged image). In contrast, α- L -iduronidase mRNAs derived from the test construct were seen exclusively in the protein body–ER region. Bar, 10 μm. α- L -Iduronidase is targeted to ER-derived protein bodies To investigate whether α- L -iduronidase was targeted to protein bodies within developing maize endosperm cells, immunolocalization studies using transmission electron microscopy (TEM) were conducted. In the ultrathin sections from the control line, there were few 10-nm gold particles (labelled α- L -iduronidase) in ER-derived protein bodies; 10-nm gold particles were predominantly seen in irregular-shaped organelles ( Fig. 3 ; Supplementary Table S1 ). The irregular-shaped organelles were distinguishable from ER-derived protein bodies and are similar to the legumin-type protein storage vacuoles 19 that may originate from a post Golgi process. However, in the sections from the test line, 10-nm gold particles were mainly found in ER-derived protein bodies ( Fig. 3 ; Supplementary Table S1 ). 
γ-Zein protein was preferentially localized at the periphery of protein bodies (labelled by 5-nm gold particles; arrows), consistent with the reports of others. Figure 3: Subcellular localization of human α- L -iduronidase protein in developing maize endosperm cells by transmission electron microscopy. Transmission electron microscope images of gold-immunolabelled ultra-thin sections of maize endosperm. α- L -Iduronidase (IDUA) and γ-zein were labelled with 10-nm (arrowheads) and 5-nm (arrows) gold-conjugated secondary antibodies, respectively. Shown are protein bodies (PB) in maize endosperm cells expressing the test construct (left) or control construct (right). PSV=protein storage vacuole. The inset in the left image shows an enlargement of the two sizes of gold particles. Bar=100 nm. Cell fractionation studies further confirmed the localization of the 'control' and 'test' α- L -iduronidase ( Supplementary Fig. S1 ). These data indicate that cis -localization elements in the γ-zein 5′-UTR, signal peptide-encoding and/or 3′-UTR terminator sequences effectively targeted the α- L -iduronidase RNA molecules to the protein body–ER in developing maize endosperm cells. Furthermore, there was a close relationship between α- L -iduronidase RNA localization and protein localization in the endomembrane system of maize endosperm cells. Recombinant α- L -iduronidase contains high-mannose N-glycans α- L -Iduronidase has six consensus signals for N-linked glycosylation in the ER; these are all utilized in human cells in which the enzyme is targeted to the lysosome via the Golgi complex. In Chinese hamster ovary (CHO) cells hosting production of the recombinant protein, the enzyme is secreted and all six glycosylation sites are used, but the N-glycans themselves display high intrasite heterogeneity 20 . 
Some of the N-glycans of the mature enzyme remain in a high-mannose form (Asn 372; Asn 415 is mixed high mannose and complex); at least two of the sites are modified to complex forms (Asn 110 and Asn 190); two carry M6P tags (Asn 336 and Asn 451) 20 . The N-glycan profiles of purified α- L -iduronidase of transgenic maize seeds were analysed ( Fig. 4 ; Table 1 ). SDS–polyacrylamide gel electrophoresis (SDS–PAGE) analysis of the purified α- L -iduronidase samples, and the specific activities and yield of recombinant α- L -iduronidase during the purification procedure are shown in Supplementary Fig. S2 and Supplementary Table S2 . As shown, 91.8% of the N-glycan structures detected in the 'test' α- L -iduronidase were of the oligomannosidic type (that is, those containing 1–7 hexose residues in addition to the pentasaccharide N-glycan core) with Man5 being the most abundant (41.1%). The remaining 8.2% were complex/hybrid structures, indicating that some of the human α- L -iduronidase likely transited through the Golgi complex. In contrast, 58.5% of the N-glycans identified in the 'control' α- L -iduronidase belonged to the complex/hybrid type, carrying xylose, fucose or both sugar residues attached to the chitobiose core pentasaccharide. These results further confirm that the 'test' α- L -iduronidase is predominantly localized in ER-derived protein bodies and thereby avoids N-glycan maturation associated with transit through the Golgi complex. The small amounts of recombinant α- L -iduronidase containing the complex N-glycan sugars xylose and/or fucose in the 'test' sample were eliminated by subsequent column chromatography using an anti-horseradish peroxidase affinity column ( Fig. 5 ). Figure 4: N-glycan profiles of maize-derived α- L -iduronidase. Summed MS-spectra of N-glycans from 'control' α- L -iduronidase (IDUA control: top) and 'test' α- L -iduronidase (IDUA test: bottom). 
In addition to the qualitative N-glycan structure differences, a considerable difference in the signal intensities between the two samples was detected, indicating different amounts of starting material (see Methods ). N-acetylglucosamine (GlcNAc)=blue squares, mannose (M or Man)=green circles, α-1,3-fucose (F)=red triangles, β-1,2-xylose (X)=orange stars. Table 1 N-glycans identified in maize-derived α- L -iduronidase. Figure 5: Effects of anti-horseradish peroxidase affinity column on the presence of α- L -iduronidase containing plant complex N-glycans. Shows western blot analysis using an antibody specific for plant complex N-glycans as described in the Supplementary Methods . α- L -Iduronidase was purified from transgenic maize seeds expressing the test-construct. Lane 1, 80-ng purified α- L -iduronidase; lane 2, 80 ng α- L -iduronidase after passing through an anti-horseradish peroxidase affinity column. α- L -Iduronidase has high activity and can be phosphorylated Michaelis–Menten kinetics were used to characterize the enzymatic properties of the maize 'test' α- L -iduronidase and the commercial CHO-cell-derived enzyme product, Aldurazyme ( Fig. 6a ; K m and k cat in Supplementary Table S3 ). CHO-iduronidase has a k cat of 3.9 μmoles min −1 mg −1 and a K m of 24 μM; the targeted-maize iduronidase has a k cat of 6.4 μmoles min −1 mg −1 and a K m of 78 μM. In addition, the CHO cell-derived α- L -iduronidase and targeted maize α- L -iduronidase exhibited specific activities of 4.1 and 5.8 μmoles min −1 mg −1 , respectively, when measured at a substrate concentration of 1 mM. Figure 6: Characterization of maize-derived α- L -iduronidase. ( a ) Michaelis–Menten plots using the fluorescent substrate 4-methylumbelliferyl-α- L -iduronide for CHO cell-produced α- L -iduronidase (Aldurazyme) (left panel) and maize-produced α- L -iduronidase (right panel). The data are the means of three replicate experiments ±s.e. 
( b ) Phosphorylation of maize-produced α- L -iduronidase by the GlcNAc-1-phosphotransferase (α 2 β 2 ). The graph is based on data from one typical experiment. Dephosphorylated CHO cell-produced α- L -iduronidase served as a control. The activity of the GlcNAc-1-phosphotransferase was expressed as pmoles of [ 3 H]GlcNAc-P transferred per h μg −1 of the GlcNAc-1-phosphotransferase. Most lysosomal enzymes including α- L -iduronidase require an M6P tag for efficient uptake/lysosomal delivery in human cells. We investigated whether the phosphorylation of N-glycan terminal mannose residues of the 'test' α- L -iduronidase could be achieved in vitro using recombinant soluble UDP–GlcNAc:lysosomal enzyme N -acetylglucosamine-1-phosphotransferase 21 ( Fig. 6b ). The efficiency of the phosphotransferase is reliant upon its affinity for the target lysosomal hydrolase 22 ; the maize α- L -iduronidase exhibited a K m of 0.87±0.1 μM ( Supplementary Table S4 ), about 25 times lower than that of cathepsin D 22 , suggesting that it is an effective substrate for the phosphotransferase. The extent of phosphorylation of a glycoprotein by the phosphotransferase is influenced by the position of the N-glycans relative to the binding site for the phosphotransferase; this modifying enzyme functions best on target lysosomal hydrolases with Man6–Man8 N-glycans 23 , of which the maize α- L -iduronidase has a significant proportion. The k cat value of the maize α- L -iduronidase (0.52±0.18 min −1 ) is comparable to that of the dephosphorylated CHO cell-produced α- L -iduronidase (0.32±0.12 min −1 ; Supplementary Table S4 ). These data show that in vitro phosphorylation is feasible using the plant-derived mannose-terminated recombinant enzyme. Discussion We have used a seed-based system towards developing an alternative to the current mammalian cell-based protein production systems. 
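The Michaelis–Menten parameters reported above can be cross-checked against the measured specific activities at 1 mM substrate. A minimal sketch, under the assumption that the reported k cat values (in μmol min −1 mg −1 ) can be treated as per-milligram V max :

```python
def mm_velocity(vmax, km_uM, s_uM):
    """Michaelis-Menten rate v = Vmax*[S] / (Km + [S])."""
    return vmax * s_uM / (km_uM + s_uM)

S = 1000.0  # 1 mM substrate, expressed in uM

v_cho = mm_velocity(3.9, 24.0, S)    # CHO-derived enzyme (Aldurazyme)
v_maize = mm_velocity(6.4, 78.0, S)  # maize 'test' enzyme

print(f"CHO predicted:   {v_cho:.2f} umol/min/mg (measured 4.1)")
print(f"maize predicted: {v_maize:.2f} umol/min/mg (measured 5.8)")
```

Both predictions (about 3.81 and 5.94 μmol min −1 mg −1 ) land close to the measured 4.1 and 5.8, as expected when [S] is well above K m and the enzyme is near saturation.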
A unique mRNA targeting strategy was used to control the plant-specific N-glycosylation of a complex recombinant human therapeutic enzyme—α- L -iduronidase, a protein possessing six sites for N-glycosylation. Our approach exploits the natural capacity of the endosperm storage tissues of maize seeds to stably accumulate recombinant proteins. The cis mRNA targeting elements provided by sequences of the 27-kDa γ-zein gene successfully mediated the targeting of α- L -iduronidase transcripts and proteins to protein body–ER regions and to ER-derived protein bodies, respectively. Transit of the recombinant protein through the Golgi complex and the consequent undesirable N-glycan maturation was largely avoided. From the point of view of recombinant protein production platforms, the current production systems for ERT and the resultant enzyme therapeutics have some drawbacks that justify an examination of potential alternatives. The prices set for ERT are exceedingly high, placing a considerable burden on health-care budgets and blocking access to ERT in underprivileged countries, and there is little commercial interest towards developing ERT for the ultra-rare LSDs. Safety concerns associated with product contamination included the recent and highly publicized viral contamination of the CHO cell cultures at Genzyme Corporation, which resulted in interrupted or dose-reduced treatments, or the initiation of alternative treatments 24 . Perhaps also of importance, there is considerable N-glycan heterogeneity and variability of glycoforms of a recombinant protein depending on the CHO cell culture conditions 25 of possible relevance to immunogenicity issues 26 . Plant- and seed-based systems provide a potential alternative by virtue of a minimal possibility of contamination by human pathogens or prions, and an ability to rapidly increase biomass without the need for expensive fermentation costs 8 . 
Maize as a host for recombinant protein production has considerable advantages, including the highest biomass yield among seed crops, ease of transformation/scale-up and the availability of strong promoters and other gene-regulatory sequences to facilitate high-level recombinant protein production. Seeds offer a significant advantage in relation to the provision of a stable repository of the human recombinant protein 8 . In our case, the transgenic maize seeds simply need to be placed in cool dry conditions after harvest, and the human recombinant enzyme remains stable. For example, as compared with the specific activity of the purified protein from freshly harvested mature dry seeds, that associated with the seeds stored for 14 months at 4 °C is very similar (5.8 versus 5.1 μmol min −1 mg −1 , respectively). The major costs associated with the seed-based production system are anticipated to be those associated with downstream purification and processing of the recombinant protein. The control of glycosylation is paramount to the ultimate utility of any plant-derived therapeutic; the addition of xylose and/or fucose sugar residues can elicit immunogenic responses in mammals and greatly reduce the efficacy of plant-derived recombinant proteins for pharmaceutical uses 9 . Our unique strategy may permit the synthesis of pharmaceutical and other recombinant proteins containing predominantly high-mannose N-linked glycans; an affinity column step removed the small amounts of recombinant enzyme containing xylose and/or fucose. The use of mRNA localization signals as a strategy for ER retention of plant-synthesized recombinant proteins is also attractive, as it does not require the fusion of mature protein-coding sequences onto the recombinant therapeutic protein. Other plant-based platforms have been developed to control N-glycan maturation. 
Fusion of the ER retention/retrieval motif KDEL/HDEL or its extended version (for example, SEKDEL) to the C terminus of a recombinant protein has been a widely used strategy to avoid plant-specific N-glycan maturation. In particular, this strategy has been very effective when applied to the plant-based production of antibodies. However, KDEL-tagged α- L -iduronidase produced in Arabidopsis seeds contains predominantly complex and hybrid N-glycans (88%) 15 . RNA interference technology has been employed in plants to silence the expression of the genes encoding α-1,3-fucosyltransferase and β-1,2-xylosyltransferase. Use of this strategy for production of a monoclonal antibody generates the predominant N-glycan species GnGn (GlcNAc 2 Man 3 GlcNAc 2 ), and few or no α-1,3-fucose and β-1,2-xylose residues are detectable on the protein's N-glycans 27 , 28 . Although this strategy addresses problems associated with plant-specific N-glycan maturation in relation to product immunogenicity, its efficiency in some plant hosts is not absolute 8 , 9 , and further, the predominant N-glycan structures are such that the approach is not useful for the production of therapeutics for LSDs in which mannose-terminated N-glycans or M6P tags on the recombinant protein are essential for therapeutic delivery/efficiency. For therapeutic efficacy, the parenterally administered recombinant enzyme must be competent for endocytosis by human cells, followed by intracellular targeting to the lysosome. These receptor-mediated processes are dependent on either the M6P motif or on mannose-terminated N-glycans on the protein. For those lysosomal enzymes (such as α- L -iduronidase and several others) that generally require an M6P tag for efficient targeting/lysosomal delivery to cells other than macrophages, we show that the high-mannose-terminated, plant-derived recombinant enzyme is an effective substrate for the soluble phosphotransferase as a first step. 
The plant-based platform founded on mRNA targeting could also be used to generate mannosylated therapeutic glycoproteins that specifically target to the mannose-specific cell surface receptors of macrophages and dendritic cells. This is relevant for the production of recombinant glucocerebrosidase for treatment of Gaucher disease, in which successful ERT relies primarily on mannose receptor-mediated uptake, and it would avoid the three-step downstream processing that is currently used to expose core mannoses to generate the commercial CHO cell-derived product. A carrot cell-based system developed by Protalix and Pfizer for production of recombinant glucocerebrosidase has the advantage of avoiding the need for this in vitro enzymatic processing, and the product (Taliglucerase Alfa) has received recent approval by the US Food and Drug Administration for treatment of Gaucher disease. However, the approaches used for production of the carrot glucocerebrosidase would not be suitable for α- L -iduronidase as an MPS I therapeutic in relation to N-glycan status, therapeutic efficacy requirements and potential immunogenicity. Taliglucerase Alfa has extra amino acids on both the N terminus and the C terminus, and the dominant N-glycans contain xylose and/or fucose residues 29 . Our results provide proof of concept for generating recombinant proteins with high-mannose N-glycosylation in maize seeds. They represent an advance in the field of plant-made pharmaceuticals in relation to generating non-immunogenic recombinant therapeutic proteins. To develop the platform further for production of M6P-tagged LSD therapeutics, the level of accumulation of the human recombinant protein needs to be improved (0.12% TSP in the present study) to advance efficient purification for scale-up. 
There are tenable strategies to achieve this, including screening more independent transgenic lines, applying a combination of selection and conventional breeding 30 , 31 , and subjecting the high-expressing immature zygotic maize embryos to a dedifferentiation–regeneration cycle, which acts to reset the epigenetic status of the transgene 32 . Two sequential in vitro enzymatic steps also need to be achieved to confer the M6P tag on the high-mannose-terminated recombinant protein. For this, two enzymes are required: the GlcNAc-1-phosphotransferase and the 'uncovering enzyme' GlcNAc-1-phosphodiester α-N-acetylglucosaminidase 33 , which removes the covering GlcNAc residue to expose the M6P recognition marker. We have shown that the high-mannose-terminated α- L -iduronidase is an effective substrate for the first step using soluble recombinant GlcNAc-1-phosphotransferase, and work is in progress to achieve the uncovering enzyme step. The therapeutic efficacy of the in vitro processed α- L -iduronidase also needs to be tested in cultured MPS I (deficient) cells and in the MPS I mouse model. In relation to our model and other lysosomal enzymes, the platform to create the human protein with high-mannose N-glycans is particularly advantageous, because the product is in a suitable form (Gaucher disease), or can be subjected to phosphorylation or to emerging alternative modifications for enhanced biodistribution of ERT 34 , 35 , 36 . Towards a more widespread use of the platform to generate other types of protein therapeutics (not just LSD therapeutics), downstream modification to improve the serum half-life of the protein therapeutic may be required. 
In the past few years, many efforts have been geared towards achieving 'humanized' complex N-glycan modifications in planta and thus allowing for the elaboration of terminal galactose or sialic acid residues onto the recombinant glycoprotein (for example, by transgenic expression of the appropriate mammalian enzymes) 37 , 38 , 39 , 40 . A monoclonal antibody (2G12) is sialylated in Nicotiana benthamiana by transient expression of six mammalian genes encoding various enzymes of the sialic acid biosynthetic pathway, including a cyclic monophosphate–N-acetylneuraminic acid synthetase and a CMP–sialic acid transporter 39 . Likewise, the addition of bisected, triantennary and tetraantennary complex N-glycans has been achieved by the simultaneous expression of human genes encoding various N -acetylglucosaminyltransferases (GnTIII, GnTIV and GnTV) in a glycoengineered N. benthamiana mutant lacking the machinery for plant-specific complex N-glycosylation (that is, the xylosyl and fucosyl transferases) 40 . Although some of these modifications may well improve the serum half-life of a protein therapeutic, notably there are clearly examples where it is advantageous for a therapeutic protein to have a shorter in vivo half-life to avoid cell toxicity or immune responses 41 , 42 , 43 . The present platform is flexible in the sense that it generates a product that is directly amenable to downstream modifications as appropriate. Downstream-phosphorylated human enzymes generated by this plant-based strategy may be part of future LSD therapeutics. Methods Constructs for the expression of human α- L -iduronidase To investigate the potential of targeting the recombinant α- L -iduronidase to protein bodies of maize, the human α- L -iduronidase ( IDUA ) gene 44 (GenBank accession no. M74715 ) was driven by the 1638-bp promoter of the gene encoding 27-kDa γ-zein 45 (GenBank accession no. X53514 ). 
Additional regulatory sequences flanking the α- L -iduronidase mature coding region in control and test constructs are noted in the text describing Fig. 1a,b . To clone the 5′-UTR, signal peptide-encoding sequences and 3′-UTR terminator of the γ-zein gene, genomic DNA was extracted from maize Hi-II. For cloning of the 5′-UTR and signal peptide-encoding sequences, forward primer 5′-CACAGGCATATGACTAGTGGC-3′ and reverse primer 5′-GGAGGTGGCGCTCGCAGC-3′ were used to PCR-amplify the fragment. For cloning of the 3′-UTR, the forward primer 5′-ACGCGTCGACAGAAACTATGTGCTGTAGTA-3′ and reverse primer 5′-CGGAATTCCCTATTAAAAGGTTAAAACGT-3′ were used. The DNA sequence encoding the γ-zein signal peptide was fused in-frame to the mature coding region of the α- L -iduronidase gene. Constructs were verified by DNA sequencing. Methods for Agrobacterium -mediated maize transformation are noted in the Supplementary Methods section. RNA extraction and northern blot analysis Total RNA was extracted from developing maize seeds using the Qiagen RNeasy kit (Qiagen Inc., Mississauga, ON, Canada). Ten micrograms of total RNA was loaded into each lane and RNA samples were fractionated on 1.0% agarose formaldehyde gels. Ribosomal RNA was stained with ethidium bromide to verify the quantity of RNA loaded on the gels used for northern blots. Following transfer and fixing of RNA to Hybond-XL nylon membranes (Amersham Life Science, Buckinghamshire, UK), the membranes were hybridized with a 32 P- α- L -iduronidase cDNA probe that had been labelled using the RTS RadPrime DNA Labelling System (Life Technologies, Gaithersburg, MD, USA) and [α- 32 P]-dCTP. Western blot analysis Proteins were extracted from endosperms of developing T2 seeds (20 DAP) or from mature T2 seeds.
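Primer bookkeeping of the kind implied above (length, GC content, reverse complement) is easily scripted. The sketch below is illustrative only: it applies invented helper functions to the 3′-UTR primer pair quoted in the methods text; none of this code comes from the paper.

```python
# Illustrative sketch (not from the paper): basic sanity checks for PCR
# primers such as those listed above -- length, GC fraction and reverse
# complement. Helper names are invented for this example.
COMP = str.maketrans("ACGT", "TGCA")

def revcomp(seq):
    """Reverse complement of an upper-case DNA sequence."""
    return seq.translate(COMP)[::-1]

def gc_fraction(seq):
    """Fraction of G/C bases, a rough proxy for primer annealing strength."""
    return (seq.count("G") + seq.count("C")) / len(seq)

# The 3'-UTR primer pair quoted in the methods text above.
fwd = "ACGCGTCGACAGAAACTATGTGCTGTAGTA"
rev = "CGGAATTCCCTATTAAAAGGTTAAAACGT"

for name, p in (("forward", fwd), ("reverse", rev)):
    print(name, len(p), round(gc_fraction(p), 2), revcomp(p))
```

Checks like these catch transcription errors in primer sequences before ordering oligos; palindromic restriction sites (e.g. GAATTC) are their own reverse complement, which is a quick self-test for the `revcomp` helper.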
Fifty micrograms of total soluble protein was fractionated on 10% SDS–PAGE gels and western blot analysis was carried out using the Lumigen TMA-6 detection kit as per the manufacturer's instructions (GE Healthcare UK Limited, Little Chalfont, Buckinghamshire, UK). Anti-α- L -iduronidase 46 and anti-γ-zein 47 antibodies were used at dilutions of 1:1,000. T2 seeds were produced by either self-pollination or by crossing with the wild-type maize Hi-II line in cases in which there was male or female sterility. The yield of α- L -iduronidase was estimated by scanning densitometry of western blots that had been generated with protein extracts from five seeds (at 20 DAP) per independent transgenic line. The CHO cell-produced α- L -iduronidase of known concentration was used as a standard. Some western blot analyses were conducted on the purified proteins ( Supplementary Methods ). In situ RT–PCR Developing endosperms from test and control lines were excised from 15 DAP maize seeds. Cryosectioning and fixation were performed as described by Washida et al . 48 . In situ RT–PCR was conducted on cryosections according to Washida et al . 48 with modifications. The RT–PCR reaction included 2 mM MgCl 2 , 0.5 mM MnSO 4 , 20 μM each of dATP, dCTP and dGTP, 10 μM dTTP, 10 μM Oregon Green-dUTP (Molecular Probes, Eugene, OR, USA), 5 mM dithiothreitol, 150 U ml −1 RNase inhibitor (Fermentas Corp., Ottawa, ON, Canada), PCR enhancer, 50 U ml −1 Tth DNA polymerase (Epicentre, Madison, WI, USA) and 0.5 μM of α- L -iduronidase gene-specific primers: 5′-GGCCAGGAGATACATCGGTA-3′ and 5′-CTCCCCAGTGAAGAAGTTGG-3′. Primers for the γ-zein gene were: 5′-TGAGGGTGTTGCTCGTTGCCC-3′ and 5′-CACATCGCCGTCAGTTGCTGC-3′. The sections were covered with the above solution and kept at room temperature for 30 min followed by 60 °C for 30 min.
The sections were then subjected to 20–25 amplification cycles (10 cycles for γ-zein) of 94 °C for 1.5 min, 55 °C for 1.5 min, 72 °C for 1.5 min, which was followed by 72 °C for 5 min. After the PCR cycles, the sections were washed and stained with rhodamine B hexyl ester according to Washida et al . 48 , mounted in anti-FADE medium and analysed by confocal microscopy using a Bio-Rad Radiance Plus on an inverted Zeiss Axiovert with DIC optics (Bio-Rad, Mississauga, ON, Canada). TEM immunolocalization of α- L -iduronidase in maize seeds Developing endosperms (15–17 DAP) were fixed with a high-pressure freezer (Bal-Tec HPM 010 High Pressure Freezer, Zurich, Switzerland). Ultrathin sections were acquired using a Leica UltracutT UltraMicrotome (Reichert, Austria). The sections were picked up on 200 mesh nickel grids, and non-specific binding sites were blocked by immersion in blocking buffer (2% normal goat serum in PBS, pH 7.2) for 1–2 h. The sections were then labelled with antibody against α- L -iduronidase (1:20 dilution) for 1 h and rinsed extensively, followed by incubation with 10 nm gold-conjugated goat anti-rabbit IgG (whole molecule; 1:100 dilution) for 1 h. After rinsing several times, the sections were labelled with the antibody against γ-zein (1:50) and 5 nm gold-conjugated goat anti-rabbit Fab′ fragments (1:100), sequentially. All the antibodies were diluted in 2% goat serum-blocking solution. The grids were observed under a TEM Model Hitachi-80 (Hitachi, Tokyo, Japan). Determination of N-glycan profiles of α- L -iduronidase Purified 'test' α- L -iduronidase or 'control' α- L -iduronidase (~4 μg) was resolved by 10% SDS–PAGE, and the α- L -iduronidase protein bands were recovered from the gel. N-glycans were released from tryptic peptides obtained after in-gel digestion as described by Kolarich and Altmann 49 , but graphitized carbon liquid chromatography MS/MS (carbon LC-MS/MS) was used for N-glycan analysis 50 .
Therefore, the released N-glycans were reduced and desalted before analysis as described previously 50 . An aliquot of the released and reduced glycans was analysed by carbon LC-MS/MS using an Agilent 1100 capillary LC with a Thermo Hypercarb column (180 μm ID × 100 mm) and an Agilent ion trap for detection. N-glycans were analysed in negative mode according to Wilson et al . 50 . Oligosaccharide structures were assigned based on mass, MS/MS spectra and knowledge of plant N-glycosylation 51 . The relative N-glycan distribution was calculated from the signal intensities of the monoisotopic m/z signals in the combined MS spectrum, which was summed over the entire range in which the N-glycans eluted. If both singly and doubly charged signals were detected, both were taken into account. The summed MS spectra of N-glycans showed a considerable difference in the signal intensities between the two samples, indicating different amounts of starting material. Equal amounts of test and control α- L -iduronidase were loaded onto the gels for fractionation before N-glycan purification, reduction/desalting and analysis (see above). However, for some unknown reason, some of the protein from the control α- L -iduronidase sample appeared to have precipitated and remained stacked in the wells of the SDS–PAGE gel after electrophoresis. Small N-glycans consisting of a single GlcNAc residue cannot be released by enzymatic treatment and can only be detected on the glycopeptides; this was not investigated in the present study.
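The relative-abundance calculation described above (per glycan, sum the monoisotopic signal intensities over all detected charge states, then normalise to the grand total) can be sketched in a few lines. The glycan labels and intensity values below are invented for illustration, not measurements from the study.

```python
# Sketch of the relative N-glycan distribution calculation described above:
# per glycan, sum the monoisotopic signal intensities of all detected charge
# states (singly and doubly charged), then normalise to the grand total.
# Glycan labels and intensities are invented for illustration.
def relative_distribution(intensities):
    """intensities: {glycan: [intensity per detected charge state]}.
    Returns {glycan: fraction of total summed intensity}."""
    totals = {g: sum(vals) for g, vals in intensities.items()}
    grand = sum(totals.values())
    return {g: t / grand for g, t in totals.items()}

demo = {
    "Man5": [8.0e5, 2.0e5],   # e.g. [M-H]- and [M-2H]2- signals
    "Man6": [3.0e5],          # only one charge state detected
    "Man7": [5.0e5, 2.0e5],
}
dist = relative_distribution(demo)
print({g: round(f, 2) for g, f in dist.items()})  # fractions sum to 1
```

Because the result is a ratio of summed intensities, it is insensitive to the absolute amount of material injected, which matters here given the noted difference in starting material between the two samples.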
Modification of α- L -iduronidase by GlcNAc-phosphotransferase For in vitro phosphorylation of maize-derived α- L -iduronidase, the recombinant α 2 β 2 GlcNAc-1-phosphotransferase (0.15 μg) was added to the reaction mixtures containing various concentrations of maize-derived α- L -iduronidase in 50 mM Tris-HCl, pH 7.4, 10 mM MgCl 2 , 10 mM MnCl 2 , 75 μM UDP-[ 3 H]GlcNAc (1 μCi) and 2 mg ml −1 bovine serum albumin in a final volume of 50 μl. Dephosphorylated CHO cell-produced α- L -iduronidase served as a control. The assay was carried out as described by Qian et al . 22 . Apparent K m and k cat values were generated from double-reciprocal plots using a least square approximation for the best-fit line. The K m and k cat values ( Supplementary Table S4 ) are the means of three separate determinations. The graph of the activity of GlcNAc-1-phosphotransferase (α 2 β 2 ) towards α- L -iduronidase ( Fig. 6b ) was based on data from one typical experiment. CHO cell-produced α- L -iduronidase was dephosphorylated using calf intestinal phosphatase (New England Biolabs Ltd., Pickering, ON, Canada) at 0.5 unit μg −1 Aldurazyme overnight at room temperature. Dephosphorylated CHO cell-produced α- L -iduronidase was dialyzed at 4 °C overnight in a buffer containing 20 mM Tris-HCl, pH 7.4, 10 mM MgCl 2 , 150 mM NaCl and 0.05% Triton X-100. Additional information How to cite this article : He, X. et al . Production of α- L -iduronidase in maize for the potential treatment of a human lysosomal storage disease. Nat. Commun . 3:1062 doi: 10.1038/ncomms2070 (2012). Accession codes Accessions GenBank/EMBL/DDBJ M74715 X53514
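The double-reciprocal (Lineweaver–Burk) estimation of apparent K m and k cat described in the methods above amounts to a least-squares line through 1/v versus 1/[S]: the intercept gives 1/V max and the slope gives K m /V max. The sketch below uses synthetic Michaelis–Menten data, not measurements from the paper.

```python
# Sketch of the double-reciprocal (Lineweaver-Burk) fit described above:
# regress 1/v on 1/[S]; then Vmax = 1/intercept, Km = slope/intercept and
# kcat = Vmax / [E]. Data below are synthetic, not from the paper.
def lineweaver_burk(s_conc, rates, enzyme_conc):
    xs = [1.0 / s for s in s_conc]
    ys = [1.0 / v for v in rates]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    vmax = 1.0 / intercept
    km = slope / intercept
    return km, vmax / enzyme_conc  # (apparent Km, kcat)

# Synthetic Michaelis-Menten data: Km = 2.0, Vmax = 10.0, [E] = 0.1
S = [0.5, 1.0, 2.0, 4.0, 8.0]
V = [10.0 * s / (2.0 + s) for s in S]
km, kcat = lineweaver_burk(S, V, 0.1)
print(round(km, 3), round(kcat, 3))  # -> 2.0 100.0
```

With noise-free data the fit recovers the true constants exactly; with real replicate data (as in the paper, three separate determinations), each replicate would yield its own K m and k cat and the means would be reported.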
(Phys.org)—The seeds of greenhouse-grown corn could hold the key to treating a rare, life-threatening childhood genetic disease, according to researchers from Simon Fraser University. SFU biologist Allison Kermode and her team have been carrying out multidisciplinary research toward developing enzyme therapeutics for lysosomal storage diseases, rare but devastating childhood genetic diseases, for more than a decade. In the most severe forms of these inherited diseases, untreated patients die in early childhood because of progressive damage to all organs of the body. Currently, enzyme treatments are available for only six of the more than 70 diverse types of lysosomal storage diseases. "In part because mammalian cell cultures have been the system of choice to produce these therapeutics, the enzymes are extremely costly to make, with treatments typically ranging from $300,000 to $500,000 per year for children, with even higher costs for adults," says Kermode, noting the strain on healthcare budgets in Canada and other countries is becoming an issue. Greenhouse-grown maize may become a platform for making alpha-L-iduronidase, an enzyme used to treat the lysosomal storage disease known as mucopolysaccharidosis I, according to research published in this week's Nature Communications. The findings could ultimately change how these enzyme therapeutics are made, and substantially reduce the costs of treating patients. The novel technology manipulates processes inside the maize seed that "traffic" messenger RNAs to certain parts of the cell as a means of controlling the subsequent sugar processing of the therapeutic protein. In this way, the researchers have been able to produce the enzyme drug in maize seeds. The product could ultimately be used as a disease therapeutic, although it is still "early days," says Kermode, and several research goals remain to be accomplished before this can become a reality.
Kermode says the success of the work underscores the power of multidisciplinary research that included contributions from SFU chemistry professor David Vocadlo, and from UBC Medical Genetics professor Lorne Clarke. It further underscores the importance of connections between SFU and Australia's Griffith University, through collaborative researchers Mark von Itzstein and Thomas Haselhorst. "In 2005, we had the basis of our story worked out," says Kermode. "Taking it to the next level involved their precise analyses to determine the sugar residues on the therapeutic enzyme produced by the modified maize seeds. "When we first looked at the sugar analysis data we were amazed at how well the 'mRNA-trafficking strategy' had worked, and the high fidelity of the process for controlling the sugar-processing of the therapeutic protein. This is critical as sugar processing influences the characteristics of a protein (enzyme) therapeutic, including its safety, quality, half-life in the bloodstream, and efficacy. The work could well extend to forming a platform for the production of other protein therapeutics." Kermode also credits SFU research associate Xu He, the first author of the Nature Communications paper. Her funding sources included NSERC Strategic grants and a Michael Smith Foundation for Health Research Senior Scholar Award, and in related research, a Canadian Society for Mucopolysaccharide and Related Diseases grant.
www.nature.com/ncomms/journal/ … full/ncomms2070.html
Biology
Scientists decode the genome of fall armyworm, moth pest that is invading Africa
Tingcai Cheng et al. Genomic adaptation to polyphagy and insecticides in a major East Asian noctuid pest, Nature Ecology & Evolution (2017). DOI: 10.1038/s41559-017-0314-4 Journal information: Scientific Reports, Nature Ecology & Evolution
http://dx.doi.org/10.1038/s41559-017-0314-4
https://phys.org/news/2017-09-scientists-decode-genome-fall-armyworm.html
Abstract The tobacco cutworm, Spodoptera litura , is among the most widespread and destructive agricultural pests, feeding on over 100 crops throughout tropical and subtropical Asia. By genome sequencing, physical mapping and transcriptome analysis, we found that the gene families encoding receptors for bitter or toxic substances and detoxification enzymes, such as cytochrome P450, carboxylesterase and glutathione- S -transferase, were massively expanded in this polyphagous species, enabling its extraordinary ability to detect and detoxify many plant secondary compounds. Larval exposure to insecticidal toxins induced expression of detoxification genes, and knockdown of representative genes using short interfering RNA (siRNA) reduced larval survival, consistent with their contribution to the insect’s natural pesticide tolerance. A population genetics study indicated that this species expanded throughout southeast Asia by migrating along a South India–South China–Japan axis, adapting to wide-ranging ecological conditions with diverse host plants and insecticides, surviving and adapting with the aid of its expanded detoxification systems. The findings of this study will enable the development of new pest management strategies for the control of major agricultural pests such as S . litura . Introduction The tobacco cutworm, Spodoptera litura (Lepidoptera, Noctuidae), is an important polyphagous pest; its larvae feed on over 100 crops 1 . This pest is widely distributed throughout tropical and subtropical areas of Asia including India, China and Japan. In India particularly, S . litura causes heavy yield loss varying between 10 and 30% 1 . High fecundity and a short life cycle under tropical conditions result in a high rate of population increase and subsequent population outbreaks. In addition, it has evolved high resistance to every class of pesticide used against it 2 , 3 , including the biopesticide Bt 4 . 
Few complete genome sequences have been reported for noctuids, which include many serious agricultural pests. Asian researchers launched the S . litura genome project as an international collaboration in cooperation with the Fall armyworm International Public Consortium (FAW-IPC), for which a genome project is coordinately underway 5 . By comparative genomic studies with the monophagous species Bombyx mori and other Spodoptera species such as S . frugiperda (which has a different geographical distribution), S . litura genome information can provide new insights into mechanisms of evolution, host plant specialization and ecological adaptation, which can serve as a reference for noctuids and lead to selective targets for innovative pest control. Results and discussion Genome structure and linkage map of S. litura. We sequenced and assembled a genome for S . litura comprising 438.32 Mb, which contains 15,317 predicted protein-coding genes analysed by GLEAN 6 and 31.8% repetitive elements (Supplementary Tables 1 – 4 ). Among four representative lepidopteran species with complete genome sequences 7 , 8 , 9 , S . litura harbours the smallest number of species-specific gene families (Supplementary Fig. 1a and Supplementary Table 9 ). A phylogenetic tree constructed by single-copy orthologous groups showed that S . litura separated from B . mori and Danaus plexippus about 104.7 Myr ago (Ma), and diverged approximately 147 Ma from the more basal Plutella xylostella , whereas Lepidoptera as a whole separated from Diptera about 258 Ma, consistent with reported divergence time estimates 10 (Supplementary Fig. 1b ). To construct a linkage map, a heterozygous male F1 backcross (BC1) population was established between Japanese and Indian inbred strains. The resulting genetic analysis used 6088 RAD-tags as markers to anchor 639 scaffolds covering 380.89 Mb onto 31 chromosomes, which corresponded to 87% of the genome (Supplementary Section 2). Genomic syntenies from S . 
litura to B . mori and to Heliconius melpomene revealed two modes of chromosomal fusion (Supplementary Tables 10 and 11 and Supplementary Fig. 2 ). In one, six S . litura chromosomes (haploid chromosome number N = 31) were fused to form three B . mori chromosomes ( N = 28). In the other, six sets of S . litura chromosomes were fused, corresponding to six H . melpomene chromosomes ( N = 21) 11 , and another eight S . litura chromosomes were fused, corresponding to four other H . melpomene chromosomes. These changes were consistent with previous reports on chromosome evolution among butterflies including Melitaea cinxia 12 and the moth Manduca sexta 13 ( Supplementary Section 2 ). Massive expansion of bitter gustatory receptor and detoxification-related gene families associated with polyphagy of Noctuidae To elucidate key genome changes associated with host plant specialization and adaptation in Lepidoptera, we compared chemosensory and detoxification-related gene families between the extremely polyphagous lepidopteran pest S . litura and the almost monophagous lepidopteran model organism B . mori . We found large expansions of the gustatory receptor (GR), cytochrome P450 (P450), carboxylesterase (COE) and glutathione- S -transferase (GST) gene families in S . litura (Table 1 ). Chemosensory genes play an essential role in host plant recognition of herbivores. GRs, especially, are highly variable among species, which could be a major factor for host plant adaptation. GRs are categorized into three classes—CO 2 receptors, sugar receptors and bitter receptors—among which bitter receptors are most variable, while CO 2 and sugar receptors are conserved 14 , 15 , 16 , 17 , 18 . Manual annotation identified 237 GR genes in the S . litura genome (Table 2 , Fig. 1a and Supplementary Table 13 ), whereas in the other lepidopteran species investigated to date, most of which are mono- and oligophagous, only about 45–80 GRs are reported 8 , 11 , 14 , 16 , 19 , 20 . 
Since large expansions of GR genes were also reported recently in S . frugiperda 5 and in another polyphagous noctuid, Helicoverpa armigera 21 , the expansion of GRs may be a unique adaptation mechanism for polyphagous Noctuidae to feed on a wide variety of host plants (Table 2 ). Phylogenetic analysis including GR genes of B . mori , M . sexta , H . melpomene and S . frugiperda showed clearly that greatly expanded bitter GR clades were composed of SlituGR s and SfruGR s exclusively (Supplementary Fig. 3 ), supporting a strong association of a major expansion of bitter receptor genes with the appearance of polyphagy in the Noctuidae. GR expansions mainly occurred by duplications, as many structurally similar GR genes are located in clusters on the same scaffold/chromosome (for example, Chr 12, 14 and 25; Fig. 1a – c ). Interestingly, while many H . armigera GR genes have been identified as intronless 21 , especially in the bitter GR clade, here we found that almost all S . litura GR genes possessed introns. This suggests that different mechanisms led to GR expansion in these two species. Table 1 Comparison of detoxification and chemosensory gene families between the extremely polyphagous pest S . litura and the almost monophagous B . mori Full size table Table 2 GR classification of Lepidoptera species with sequenced genomes Full size table Fig. 1: Massive expansion of S . litura bitter GR genes. a , Comparison of chemosensory and detoxification related gene families between the extremely polyphagous pest S . litura and almost monophagous B . mori . Black thick bars denote the largest bitter GR cluster on Chr 12. R represents receptor. b , There is a large expansion of bitter GR genes on S . litura Chr 14. Thirteen bitter GR genes clustered on S . litura Chr 14 were mainly expressed in moth proboscis and larval maxilla, whereas the corresponding BmorGR gene cluster on Chr 10 composed of BmorGR55-57 was expressed in larval chemoreception organs 16 . 
c , Expansion of S . litura single-exon bitter GR genes on Chr 25 mainly expressed in moth proboscis. The corresponding BmorGR53 , which is also a single-exon gene, was expressed in larval maxilla. d , Heatmap of S . litura GR expression in various tissues by RNA-Seq. L.Ant., larval antenna; L.Epi., larval epipharynx; L.Leg, larval legs; L.Max., larval maxilla; L.Mid., larval midgut; M.Ant., moth antenna; M.Leg, moth legs; M.P.G., moth pheromone glands; M.Pro., moth proboscis. The vertical red two-way arrow indicates the largest bitter GR cluster on Chr 12, which was mainly expressed in larval maxilla. Thick blue bars represent GR gene clusters on Chr 14 and Chr 25, which were mainly expressed in moth proboscis. R denotes receptor. Full size image Transcriptome and phylogenetic analyses of expanded bitter GR genes in S. litura Transcriptome analysis revealed that at least 109 of the predicted bitter GR genes were expressed, mostly in larval palps and adult proboscis, but a large number were also expressed in other chemoreception organs such as antennae, legs and the pheromone gland (Fig. 1d ). These observations are similar to GR expression patterns reported in adult tissues of H . melpomene 14 and in diverse developmental stages and tissues in H . armigera 21 . Intriguingly, four bitter GR genes on Chr 25 and 14 bitter GR genes on Chr 14 were mainly expressed in moth proboscis (Fig. 1d ), which S . litura uses to suck flower nectar to obtain energy for flying. Comparison with the silkmoth, which does not feed, showed that the expansion of these gene clusters could represent an adaptation to detect toxic plant secondary metabolites present in flower nectar (Fig. 1b , c ). From our phylogenetic analysis (Supplementary Fig. 3 ), expansion of the biggest cluster of bitter GR genes on Chr 12 was Spodoptera -specific. 
These genes were mainly expressed in larval maxilla, consistent with the idea that a large expansion of bitter GR genes supports the polyphagy of Spodoptera and an ability to detect a large number of toxic metabolites in host plants (Fig. 1d ). The mechanisms by which perception of bitter substances result in specific behaviours are complex, and those underlying bitter receptor function in Lepidoptera have not yet been elucidated. Association of major expansions of SlituP450 genes with intensified detoxification Detoxification of xenobiotics is crucial for ecological adaptation of highly polyphagous pest species to different host plants. This process usually involves several distinct detoxification pathways, from active metabolism of toxins 22 to enhanced excretion activity by ABC transporters 23 , 24 . We annotated 138 P450 genes in the S . litura genome, among which P450 clans 3 and 4 showed large expansions (Fig. 2a , Supplementary Fig. 4 and Supplementary Table 14 ). CYP9a especially was greatly expanded on S . litura Chr 29 compared to the corresponding chromosome of B . mori (Fig. 2a , upper panel). Transcriptome analysis showed that some of the expanded S . litura CYP9a genes were inducible by treatment with xanthotoxin, imidacloprid or ricin ( P450-100 , 103 and 105 ; Fig. 2a , middle panel). CYP9a is reported to be inducible by xanthotoxin in S . litura 25 and S . exigua 26 . Other P450 clan 3 expansions ( CYP337a1 and a2 , CYP6ae9 and CYP6b29 , and CYP321b1 ) were also induced by the toxin treatments (Supplementary Fig. 5a ), suggesting a link between P450 clan 3 expansions and an increase of tolerance to toxin in this pest. To test this hypothesis, we selected P450-74 , 88 , 92 and 98 as members of P450 clan 3 for knockdown experiments. We injected each siRNA of the corresponding P450 into fifth-instar larvae. 
After feeding with an artificial diet containing imidacloprid, we observed an increase in sensitivity to the insecticide in the treated larvae compared to controls (Supplementary Fig. 7a-d ). Recently, the role of SlituCYP321b1 in insecticide resistance was confirmed by showing that it is overexpressed in the midgut after induction by several pesticides, and that RNAi-mediated silencing of SlituCYP321b1 significantly increased mortality of S . litura larvae exposed to the same pesticides 27 . Fig. 2: Major expansion of the detoxification-related cytochrome P450 and COE gene families of S . litura . a , A comparison of the Cyp9a gene cluster on B . mori Chr 17 with S . litura Chr 29. Top: genomic organization. Cyp9a gene clusters contain four ADH genes in both species, while two GR genes are present only in S . litura . Cyp9a , red; ADH , yellow; GR , blue. Middle: expression heatmap of Cyp9a genes induced by toxin treatment in three tissues. Bottom: diversity of genes associated with the Cyp9a cluster domain including the ADH gene cluster among 16 local populations of S . litura (see also Fig. 4a ). b , Expanded lepidopteran esterase gene cluster on S . litura Chr 2. Top: genomic organization in S . litura and B . mori . COE , red; ACE ( acetylcholinesterase) , green. Bottom: expression heatmap of COE induced by toxin treatment. Toxins: imidacloprid (Imid), ricin and xanthotoxin (Xan). Expression was measured in fat body (fb), midgut (mg) and Malpighian tubule (mp). Full size image Major expansions of SlituGST genes enhance insecticide tolerance of this pest Expansions of SlituGST genes were derived from epsilon classes on Chr 9 and Chr 14; the expression of these genes was also induced by toxin treatment (Fig. 3a – c and Supplementary Table 16 ). We chose SlituGST07 and SlituGST20 as representatives of the expanded clusters on Chr 14 and Chr 9, respectively, for knockdown and imidacloprid pesticide binding assays. 
We injected the siRNAs into fifth-instar larvae, then fed them an artificial diet containing imidacloprid (50 µg g −1 ). This treatment resulted in lethality in siRNA-injected larvae, while controls remained alive (Fig. 3d , e ), consistent with the idea that expansion of the GSTε class conferred an increase in detoxification ability. Figure 3f , g shows the inhibitory effects of imidacloprid on SlituGST07 and SlituGST20 in a competitive binding assay ( Supplementary Section 6 ). These observations confirmed that expansion of GSTε contributes to the detoxification ability of this pest. Fig. 3: Expansions of detoxification-related GSTε in S . litura . a , Phylogenetic tree of GSTs of S . litura (magenta) and B . mori (blue). Arrows show representative GSTε genes, SlituGST07 and GST20 , for each GSTε cluster used for knockdown and binding assays. We excluded SlituGST45-49 from the phylogenetic tree, since these microsomal GSTs are very short compared with other GSTs and their amino acid sequences are fairly distant from other classes. b , c , Organization and expression of expanded S . litura GSTε genes on Chr 9 and Chr 14 with toxin treatment. Toxins and tissues are the same as Fig. 2a , b . d , e , Increased sensitivity to the pesticide imidacloprid caused by knockdown of GST07 and GST20 . d , Knockdown of fifth-instar larvae, performed by siRNA injection and confirmed by RT-qPCR. At 24 hr after siRNA injection, larvae were fed an artificial diet containing imidacloprid (50 µg g −1 ). The percentage of larvae affected by imidacloprid (dead and almost dead; see Methods) is shown. Ten larvae were used per experiment in three independent replicates and the results are presented with the standard deviation (SD). e , Knockdown reduction rates of GST07 and GST20 (31% and 57%, respectively). Control larvae were injected with siGFP (see Methods). 
The relative expression is shown as mean + SD of three independent replicates of 10 larvae each, using a Student’s t -test, * P <0.05, ** P <0.01. f , g , Binding assay of SlituGST07 (f) and SlituGST20 (g) with imidacloprid. The inhibitory effect of imidacloprid on SlituGST07 and SlituGST20 was determined using CDNB and GSH as substrates (see Methods). Enzymatic activity of SlituGST07 and SlituGST20 was measured in the presence of various concentrations of imidacloprid. The value from the assay with 1 × 10 −4 mM of imidacloprid was set to 100%. Error bars denote SEM from 3 independent experiments (10 larvae per treatment). Full size image Associating large expansions of SlituCOE genes with intensified detoxification COE genes, which play an important role in the metabolism of a wide range of xenobiotics associated with plants and insecticides 22 , 28 , 29 , 30 , also showed large expansions of lepidopteran and α classes (Table 1 , Supplementary Fig. 6a and Supplementary Table 15 ). RNA-Seq analysis showed that the expanded COE genes were inducible with toxin treatment, suggesting again that their expansion is linked to an increase in detoxification ability (Fig. 2b , lower panel). These results supported knockdown experiments for COE-57 and COE-58 whereby injected larvae fed with an artificial diet containing imidacloprid showed a 60–80% increase in sensitivity compared to controls (Supplementary Fig. 7e,f ). Taken together with our knockdown experiments, transcript induction by imidacloprid indicates that expansion of the P450 , GST and COE families is linked to tolerance of this insecticide. Roles of non-expanded detoxification gene families Although the APN and ABC gene families did not exhibit significant expansion, they were highly induced by ricin treatment (Supplementary Figs. 8 and 9 and Supplementary Tables 17 and 18 ). APN 31 , ABCC2 32 and ABCA2 33 have been shown to function as Cry protein receptors 32 , 33 (see Supplementary Sections 7 and 8 ). 
Thus, APN and ABC transport proteins may be involved in the response to different classes of xenobiotics. Altogether, our results suggest that S . litura probably achieves its impressive polyphagy by adopting a strategy of large expansions of diverse sensory and detoxification-related genes, with probable cross-talk in their regulation, to adapt to a great variety of host plants. Genetic population structure reveals extensive long-distance migration of this pest We analysed the genetic diversity and gene flow of S . litura sampled from 3 locations in India, 11 locations in China and 2 locations in Japan (Supplementary Table 21 ). This yielded a clear geographical map of the genetic diversity of the surveyed local populations and genetic population structure in these countries. We observed extremely high genetic similarity between Hyderabad (central India), Fujian (the southeast coast of mainland China) and Okinawa/Tsukuba (Japan) ( F ST < 0.01, Fig. 4a and Supplementary Table 23 ). The model-based structure analysis 34 provided a predicted population structure consistent with an F ST -based cluster analysis (Fig. 4b and Supplementary Fig. 10a,b ). By incorporating the estimated allele frequency divergence between the ancestral populations, we obtained a very stable picture of population structure relative to the assumed number of ancestral populations ( K ). Here, again, we observed extremely high genetic similarity between central India (Hyderabad and Matsyapuri), the southeast coast of mainland China (Zhejiang, Guangzhou and Fujian) and Japan (Okinawa and Tsukuba). The assignment of individual genomes to the ancestral populations provided a detailed picture of the gene flow (Fig. 4b ). These results are consistent with the study of DNA sequence variation among populations of S . litura in China and Korea 35 . An additional factor affecting population dispersal is overseas migration from southern China to western Japan driven by typhoons 36 , 37 .
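Pairwise F ST values of the kind reported above can be estimated in several ways; the minimal sketch below uses a Hudson-type estimator over per-SNP allele frequencies (our assumption for illustration, not necessarily the paper's exact method), with invented frequencies for two nearly identical populations so that the result is small, matching the reported F ST < 0.01 pattern.

```python
# Minimal sketch (assumption: Hudson-type estimator, not necessarily the
# paper's exact method) of pairwise FST from per-locus allele frequencies,
# combined as a ratio of sums across loci.
def hudson_fst(p1, p2, n1, n2):
    """p1, p2: lists of alt-allele frequencies per SNP; n1, n2: numbers of
    sampled alleles per population. Returns a genome-wide FST estimate."""
    num = den = 0.0
    for a, b in zip(p1, p2):
        # numerator: squared frequency difference, corrected for sampling
        num += (a - b) ** 2 - a * (1 - a) / (n1 - 1) - b * (1 - b) / (n2 - 1)
        # denominator: between-population heterozygosity
        den += a * (1 - b) + b * (1 - a)
    return num / den

# Invented frequencies for two nearly identical populations.
pA = [0.10, 0.50, 0.80, 0.30]
pB = [0.15, 0.45, 0.75, 0.35]
print(round(hudson_fst(pA, pB, 200, 200), 4))  # -> 0.0017
```

Combining loci as a ratio of sums (rather than averaging per-locus ratios) is the standard way to stabilise the estimate when many SNPs are nearly monomorphic.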
Geographical data on the Asian monsoon in July–August 38 support our results, suggesting that monsoon winds enable S . litura to undertake even longer journeys, from southern India to China and Japan. Fig. 4: Population structure and gene flow of S . litura . a , F ST -based cluster analysis of local populations. Structure results ( b and Supplementary Fig. 10 ) suggest that one of the individuals from the Hunan2 sample belongs to a migration population (Hunan2-4), while the other 3 individuals belong to a local population (Hunan2-1, Hunan2-2 and Hunan2-3). Here, we treated the Hunan2 samples as a mixed population with both migration and local populations. b , Assignment of the individual genomes in the samples to the ancestral populations predicted by structure. We obtained the predicted allele frequency divergences between individual genomes from the predicted allele frequency divergence between the ancestral populations and the membership coefficients of the individual genomes (see Methods). c , Two-dimensional allele frequency spectra in the paired population groups. d , Global picture of the migration route predicted by ∂a∂i. The inset shows the number of migrating chromosomes per generation. The four closed ropes represent the migrating population in India, local populations in China, migrating populations in China, and the populations in Japan. The size of the circles represents the genetic diversity (π). Full size image To understand the global pattern of migration routes, we analysed the joint allele frequency spectra (Fig. 4c ) by ∂a∂i (diffusion approximation for demographic inference) 39 . ∂a∂i fits the solution of the Fokker–Planck–Kolmogorov equation to the data of the joint allele frequency spectrum, and the estimated values of the coefficients provide direct information on the population histories and migration rates.
Based on the F ST -based population structure and the model-based assignment of the individual genomes, we constructed six population groups: two groups in India (India_local and India_migrate), three in China (China_isolate, China_local, and China_migrate), and one in Japan. By applying the isolation with migration model 40 to each of the pairs of population groups, we identified a global route from the Indian migrating population through the Chinese local population, which ranges from the south at Hainan to the north at Hubei (Fig. 4d ). This Chinese local population has a large number of migrants to and from the Chinese migrating population. We observed moderate numbers of migrants from China to Japan and from China to India. ∂a∂i also implied that the local populations in India and China have been shrinking significantly for the past 2000–3000 years. In contrast, the Japanese population has been expanding for the past 5000 years (Supplementary Figs. 11 and 12 and Supplementary Table 24 ). It would be of interest to investigate the extent to which these local populations are also pests and have insecticide resistance. Conclusion This study provides strong evidence of how this polyphagous insect has evolved to become a deleterious and powerful global pest through adaptive changes and subsequent selection of gene expansions. It also provides an explanation of the genetic basis for its high tolerance to pesticides, which involves mechanisms similar to plant allelochemical detoxification. The population genetic analysis revealed the extensive migratory ability of S . litura . Such a deeper understanding through genomics and transcriptomics will enable us to develop novel pest management strategies for the control of major agricultural pests like S . litura and its near relatives, and to design new classes of insecticide molecules. Methods Genome sequencing and assembly An inbred strain of S .
litura (the Ishihara strain) was developed by successive single-pair sib matings for 24 generations and reared on an artificial diet at 25 °C. Male moths were used to extract genomic DNA for sequencing. Shotgun libraries with insert sizes of 170, 300, 500 and 800 bp (short insert sizes) and 2, 5 and 10 kb (large insert sizes) were constructed following the manufacturer's protocol. After quality control of DNA libraries, ssDNA fragments were hybridized and amplified to form clusters on flow cells. Paired-end sequencing was performed following the standard Illumina protocol. The S . litura genome was assembled using the software program ALLPATHS-LG build 47758 41 . The assembly used default parameters with the exception of a ploidy setting of 2 (PLOIDY = 2), as recommended for a diploid organism, in the data preparation stage, and a minimum contig size set to 200 bp (MIN_CONTIG = 200) in the running stage (running the RunAllPathsLG command). Gaps within the scaffolds were filled based on the short insert size libraries, using the GapCloser tool in the SOAPdenovo package 42 . Assembled scaffolds were assigned to chromosomes by the order and orientation of a linkage map combined with a synteny analysis between S . litura and B . mori . The sequencing depth and GC content distribution of the assembled genome sequence were evaluated by mapping the short insert size reads back to the scaffolds using SOAP2 43 . Genome annotation Three methods were used for S . litura gene prediction: ab initio, homology-based and transcript-based methods; the GLEAN program 6 was used to derive consensus gene predictions. For ab initio prediction, AUGUSTUS 44 and SNAP 45 were used to predict protein-coding genes. For homology-based prediction, proteins from five insect genomes ( Anopheles gambiae , Drosophila melanogaster , B . mori , Acyrthosiphon pisum and D . plexippus ) were first mapped to the S .
litura genome using TBLASTN (E-value ≤ 0.00001), and then accurate splicing patterns were built with GeneWise (version 2.0) 46 . In the transcript-based method, the assembled transcriptome results were mapped onto the genome by BLAT with identity ≥99% and coverage ≥95%. We used TopHat to identify exon–intron splice junctions and refine the alignment of the RNA-Seq reads to the genome 47 , and Cufflinks (version 1.2.0 release) to define a final set of predicted genes 48 . Finally, we integrated the three kinds of gene predictions to produce a comprehensive and non-redundant reference gene set using GLEAN. Gene function information was assigned based on the best hits derived from the alignments to proteins annotated in the SwissProt, TrEMBL 49 and KEGG 50 databases using BLASTP 51 . Motifs and domains of proteins were annotated using InterPro 52 by searching public databases, including Pfam, PRINTS, PROSITE, ProDom and SMART. We also described gene functions using Gene Ontology (GO) 53 . Repeats and transposable element families in the S . litura genome were first detected by the RepeatModeler (version open-1.0.7) pipeline, with rmblast-2.2.28 as a search engine. With the assistance of RECON 54 and RepeatScout 55 , the pipeline employs complementary computational methods to build and classify consensus models of putative repeats. tRNAs were annotated by tRNAscan-SE with default parameters. rRNAs were annotated by RNAmmer prediction and homology-based search of published rRNA sequences in insects (deposited in the Rfam database). snRNAs and miRNAs were sought using a two-step method: after aligning with BLAST, INFERNAL was used to search for putative sequences in the Rfam database (release 9.1). Gene family clustering and phylogenetic tree construction Protein sequences longer than 30 amino acids were collected from nine sequenced arthropod species ( B . mori , P . xylostella , D . plexippus , D . melanogaster , A . 
darlingi , Apis mellifera, Harpegnathos saltator , Tribolium castaneum and Tetranychus urticae ) and S . litura for gene family clustering using Treefam 56 . We aligned all-to-all using BLASTP with an E-value cut-off of 0.0000001, and assigned a connection (edge) between two nodes (genes) if more than a third of a region was aligned in both genes. An H-score ranging from 0 to 100 was used to weigh the similarity (edge). For two genes, G 1 and G 2 , the H-score was defined as score (G 1 G 2 )/max (score(G 1 G 1 ),score(G 2 G 2 )), where ‘score’ is the raw BLAST score. The average distance was used for the hierarchical clustering algorithm, requiring the minimum edge weight (H-score) to be larger than 10 and the minimum edge density (total number of edges/theoretical number of edges) to be larger than 1/3. A set of 386 single-copy genes from the 10 species was aligned by MUSCLE 57 . We used MODELTEST 58 to select the best substitution model (GTR) and MRBAYES 59 to construct the phylogenetic tree. We then estimated the divergence time and neutral substitution rate per year (branch/divergence time) among species. The PAML mcmctree program 60 , used to estimate the species divergence time, referred to two fossil calibrations: the divergence time of D . melanogaster and Culicidae (238.5–295.4 million years ago) and the divergence time of D . melanogaster and Hymenoptera (238.5–307.2 million years ago) 61 , 62 . T . urticae (Arachnida) was used as an outgroup, and the bootstrap value was set to 1000. In addition, the evolutionary changes in protein family size (expansion or contraction) were analysed using the CAFE program 63 , which assesses protein family expansion or contraction based on the topology of the phylogenetic tree. Linkage map Two genetically contrasting strains of S .
litura , one developed at the University of Delhi, India (called the India strain) and another available at the National Institute of Agrobiological Sciences, Japan (the Ishihara strain), were employed to generate a mapping population. F1 offspring were obtained by crossing an India male and an Ishihara female. An F1 male was crossed with an Ishihara female as a backcross (BC1), and these BC1 offspring were used to develop a RAD library 64 . Genomic DNA was isolated from 116 BC1 individuals, Ishihara male, India female and F1 male, and RAD sequencing libraries were constructed following a standard protocol. Sequencing was carried out using an Illumina HiSeq2000 platform. RAD-seq reads were aligned to the reference genome sequence using Short Oligonucleotide Analysis Package 2 (SOAP2) 43 to analyse the genotypes of each individual at every genomic site. Polymorphic loci relative to the reference sequence were selected and then filtered. SNP markers were recorded if they were supported by at least 5 reads with a quality value greater than 20, and ambiguous SNPs (SNP = N ) were eliminated. Only SNP markers that were homozygous and polymorphic between parents, heterozygous in the F1 and followed a Mendelian segregation pattern were selected for linkage map construction. This resulted in the identification of a total of 87,120 RAD markers. Further filtering was done by selecting only SNP markers with a missing rate of <0.09 that were separated by at least 2000 bp. After such stringent filtering, a total of 6088 SNP markers were obtained and subsequently used to develop a linkage map using JoinMap 4.1 65 . The logarithm of odds (LOD) score, Z = log(probability of the sequence given linkage/probability of the sequence given no linkage), for the occurrence of linkage was set to 4–20 (start–end). By applying the indicated parameters, we narrowed down the map to 31 linkage groups (Supplementary Fig. 2b ). Syntenic comparison We obtained peptides and genome sequences for B .
mori 66 , Papilio xuthus 67 and H . melpomene 11 . If a gene had more than one transcript, only the first transcript in the annotation was used. To search for homology, protein-coding genes of S . litura were compared to those of B . mori , P . xuthus and H . melpomene using BLASTP 51 . For a protein sequence, the best five non-self hits in each target genome that met an E-value threshold of 0.00001 were reported. Whole-genome BLASTP results and the genome annotation file were used to compute collinear blocks for all possible pairs of chromosomes using MCScan software 68 . A region with at least 5 syntenic genes and no more than 15 gapped genes was called a syntenic block. Annotation of the gustatory receptor (GR) gene family A set of described Lepidoptera gustatory receptors (GRs) was used to search the S . litura genome by TBLASTN. Additionally, a combined approach using HMMER 69 and GeneWise 46 was used to identify additional GR sequences. Scaffolds that were found to contain candidate GR genes were aligned to protein sequences to define intron/exon boundaries using Scipio 70 and Exonerate 71 . The GR classification and the integrity of the deduced proteins were verified using BLASTP against the non-redundant GenBank database. When genes were split across different scaffolds, the protein sequences were merged for further analyses. Annotation and phylogenetic study of the cytochrome P450 (CYP) gene family Identity between two CYP proteins can be as low as 25%, but the conserved motifs distributed along the sequence allow clear identification of CYP sequences. Conserved CYP protein structure is characterized by a four-helix bundle (D, E, I and L), helices J and K, two sets of β sheets and a coil called the ‘meander’. The conserved motifs include WXXXR in the C helix, the conserved Thr of helix I, EXXR of helix K and the PERF motif followed by a haem-binding region FXXGXXXCXG around the axial Cys ligand 72 .
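The conserved motifs listed above lend themselves to a simple pattern scan. A minimal sketch, with regular expressions derived from the motif descriptions (X = any residue) and a toy sequence that is not a real CYP:

```python
import re

# Scan a candidate protein for the conserved P450 motifs described in
# the text: WXXXR (C helix), EXXR (K helix), PERF, and the haem-binding
# FXXGXXXCXG region around the axial Cys (X = any residue).
CYP_MOTIFS = {
    "WXXXR":      r"W.{3}R",
    "EXXR":       r"E.{2}R",
    "PERF":       r"PERF",
    "FXXGXXXCXG": r"F.{2}G.{3}C.G",
}

def cyp_motif_hits(seq):
    return {name: bool(re.search(pat, seq)) for name, pat in CYP_MOTIFS.items()}

# Toy sequence embedding all four motifs (not a real CYP sequence)
toy = "MAWKLLRDD" + "ETLRGG" + "PERFAA" + "FSAGPRHCLG"
print(cyp_motif_hits(toy))   # every motif found -> all values True
```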
All the scaffolds containing candidate CYPs were manually annotated to identify intron/exon boundaries. Protein CYP sequences were compared by phylogenetic analysis to the S . frugiperda CYPome 73 for name attribution. Annotation of carboxylesterase (COE), glutathione-S-transferase (GST), aminopeptidase N (APN) and ATP-binding cassette (ABC) transporter gene families Sets of lepidopteran amino acid sequences for each gene family were collected from KAIKObase and the NCBI Reference Sequence database. Each gene family was then searched in the S . litura genome assembly and predicted gene set by TBLASTN and BLASTP using each set of lepidopteran amino acid sequences. Identified genes were further examined by HMMER3 search (cutoff E-value = 0.001) using the Pfam database to confirm conserved domains in each gene family. In addition, the classification of each gene family was performed with BLASTP against the non-redundant GenBank database. Construction of a phylogenetic tree of CYP, COE, GST, APN and ABC transporter gene families Amino acid sequences of each lepidopteran gene family were automatically aligned by the Mafft program version 7, using an E-INS-i strategy 74 . When the alignment showed highly conserved and non-conserved regions, only the conserved regions were retained for further analysis. Model selection was conducted with MEGA version 6 75 , and the LG+Gamma+I model was chosen 76 , 77 , 78 . The maximum likelihood tree was inferred by RAxML version 8 79 using the LG+Gamma+I model. To evaluate the confidence of the tree topology, the bootstrap method 80 was applied with 1000 replications using the rapid bootstrap algorithm 81 . Illumina sequencing (RNA-Seq analysis) Total RNA (1 μg) was used to make cDNA libraries using a TruSeq RNA sample preparation kit (Illumina, San Diego, CA). A total of 78 individual cDNA libraries were prepared by ligating sequencing adaptors to cDNA fragments synthesized using random hexamer primers.
Raw sequencing data were generated using an Illumina HiSeq4000 system (Illumina, USA). The average length of the sequenced fragments was 260 bp. Raw reads were filtered by removal of adaptors and low-quality sequences before mapping. Reads containing sequencing adaptors, more than 5% unknown nucleotides, or more than 50% of bases with a quality value less than 10 were eliminated. This output was termed ‘clean reads’. For analysis of gene expression, clean reads of each sample were mapped to S . litura gene sets using Bowtie2 (version 2.2.5), and then RSEM (v1.2.12) was used to count the number of mapped reads and estimate the FPKM (fragments per kilobase per million mapped fragments) value of each gene. Significant differential expression of genes was determined using the criteria that the false discovery rate was <0.01 and the ratio of intensity against control was >2 for induction or <0.5 for reduction. Toxin treatment of S. litura larvae for transcriptome analysis Fifth-instar larvae of the inbred strain were each fed with 1 g of artificial diet supplemented with 1 mg g −1 xanthotoxin. Control larvae were fed an artificial diet without xanthotoxin. For the ricin and imidacloprid treatments, the artificial diet was supplemented with either ground Ricinus communis seeds at a concentration of 50 mg g −1 or imidacloprid at a concentration of 50 µg g −1 , respectively. Ten individuals were used for each treatment and three independent replicates were performed. Whole larvae were used for RNA extraction at 48 h post toxin treatment. Fat body, midgut and Malpighian tubules were dissected from the toxin-treated larvae for RNA preparation. Total RNA was extracted from the tissues using Trizol reagent according to the manufacturer's instructions (Invitrogen, USA) and contaminating DNA was digested with RNase-free DNase I (Takara, China). The integrity and quality of the mRNA samples were confirmed using an Agilent Bioanalyzer 2100 (Agilent Technologies, Santa Clara, CA).
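The read-level filtering and differential-expression criteria above can be sketched as simple predicates; the thresholds come from the text, while the function names and example inputs are illustrative:

```python
# Read-level QC from the RNA-Seq pipeline: drop reads with >5% unknown
# bases (N) or with more than 50% of bases below quality 10.
# Phred qualities are given as plain integers here for simplicity.
def is_clean(seq, quals, max_n_frac=0.05, low_q=10, max_low_q_frac=0.5):
    n_frac = seq.upper().count("N") / len(seq)
    low_q_frac = sum(q < low_q for q in quals) / len(quals)
    return n_frac <= max_n_frac and low_q_frac <= max_low_q_frac

# Differential-expression call used downstream: FDR < 0.01 and
# fold change > 2 (induced) or < 0.5 (reduced) relative to control.
def de_status(fdr, fold_change):
    if fdr >= 0.01:
        return "not significant"
    if fold_change > 2:
        return "induced"
    if fold_change < 0.5:
        return "reduced"
    return "not significant"

print(is_clean("ACGTN" * 20, [30] * 100))   # 20% N -> False
print(de_status(0.001, 3.2))                # 'induced'
```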
GR transcriptome analysis Larval antenna, thoracic legs, epipharynx, maxilla and midgut were dissected from sixth-instar larvae, while antenna, legs, pheromone glands and proboscis were taken from moths. Due to very low GR expression levels, we used 100 larvae for RNA preparation. For expression profiling, we recorded all GR genes with expression levels higher than 0.1 FPKM in any tissue (Fig. 1d ; red). Quantitative PCR with reverse transcription (RT-qPCR) Total RNA was subjected to reverse transcription using a PrimeScript™ RT Master Mix (Perfect Real Time) (TaKaRa) in 50 μl reaction volumes (2500 ng total RNA) and then diluted 5-fold. 1 μl cDNA was used per 10 μl PCR reaction volume. PCR was carried out with the following program: 94 °C for 2 min followed by 30 cycles of 94 °C for 10 sec, 50 °C for 15 sec, and 72 °C for 30 sec with rTaq DNA polymerase (TaKaRa) using pairs of gene-specific primers (Supplementary Table 19 ). RT-qPCR of each gene was repeated at least three times in two independent samples. BmActin3 was used as a control for each set of RT-qPCR reactions and for gel loading. siRNA injection for knockdown of SlituGST , SlituP450 and SlituCOE genes 4 µl of siRNA (100 pm µl −1 ) were injected into the haemolymph of each fifth-instar larva, while injection of the same amount (4 µl) of GFP siRNA was used for controls. At 24 h post injection, larvae were reared on an artificial diet supplemented with imidacloprid at 50 µg g −1 until bioassay. siRNA sequences are listed in Supplementary Table 20 . To determine the effect of imidacloprid ingestion, larval condition was scored at 2, 6, 12, 18, 24, 36 and 48 h post feeding. ‘Affected’ means that larvae rounded up when touched and did not move for a couple of hours, as if dead (suspended animation). However, several hours later, many affected larvae recovered from their suspended state, probably due to detoxification of the ingested imidacloprid. The GST knockdown experiment used 3 replicates of 10 larvae.
Post-feeding replicates were scored independently for SlituGST-7 and -20 ; the remaining knockdowns ( SlituP450-0740 , -088 , -092 and -098 , and SlituCOE-057 and -058 ) were conducted as preliminary trials without replicates using 30 larvae per gene. Overexpression and purification of recombinant SlituGST07 and SlituGST20 proteins Competent Escherichia coli Rosetta (DE3) pLysS cells (Novagen; EMD Millipore) were transformed with expression vectors harbouring SlituGST07 cDNA (pET32.M3) or SlituGST20 cDNA (pCold_SUMO) and grown at 37 °C in Luria-Bertani (LB) medium containing 100 µg ml −1 ampicillin. After cells transformed with SlituGST07 cDNA reached an OD 600 of 0.7, isopropyl 1-thio-β-D-galactoside (IPTG) was added to a final concentration of 1 mM to induce the production of recombinant protein, and the culture was continued overnight at 30 °C. Cells were then harvested by centrifugation, homogenized in 20 mM Tris-HCl buffer (pH 8.0) containing 0.5 M NaCl and 4 mg ml −1 lysozyme, and disrupted by sonication. Cells transformed with SlituGST20 cDNA were grown to an OD 600 of 0.5, and stored on ice for 30 min before addition of IPTG to a final concentration of 1 mM, followed by a further incubation overnight at 18 °C before harvesting and disruption. Unless otherwise noted, all of the operations described below were conducted at 4 °C. The lysate was clarified by centrifugation at 10,000 g for 15 min and the supernatant was subjected to Ni 2+ -affinity chromatography on a column equilibrated with 20 mM Tris-HCl buffer (pH 8.0) containing 0.2 M NaCl. After washing with the same buffer, the samples were eluted with a linear gradient of 0–0.5 M imidazole. The enzyme-containing fractions, assayed as described below, were pooled, concentrated using a centrifugal filter (Millipore, Billerica, MA, USA), and applied to a Superdex 200 column (GE Healthcare Bio-Sciences, Buckinghamshire, UK) equilibrated with the same buffer plus 0.2 M NaCl.
Each fraction was assayed and analysed by SDS-PAGE using a 15% polyacrylamide slab gel containing 0.1% SDS, according to the method of Laemmli 82 . Protein bands were visualized by Coomassie Brilliant Blue R250 staining. Measurement of GST enzyme activity GST activity was measured spectrophotometrically using 1-chloro-2,4-dinitrobenzene (CDNB) and glutathione (GSH) as standard substrates 83 . Briefly, 1 µl of a test solution was added to 0.1 ml of a citrate-phosphate-borate buffer (pH 7.0) containing 5 mM CDNB and 5 mM GSH. The increase in absorbance at 340 nm per min was monitored at 30 °C and expressed as moles of CDNB conjugated with GSH per min per mg of protein, using the molar extinction coefficient of the resultant 2,4-dinitrophenyl-glutathione: ε 340 = 9600 M −1 cm −1 . Sampling and sequencing for population genetics study S . litura was sampled from three locations in India (Delhi, Hyderabad and Matsyapuri), 11 locations in China, including Fujian, Guangxi, 2 locations in Guangzhou (Guangzhou and South China Normal University), Hainan, Hubei, Shanxi, Zhejiang, 3 locations in Hunan (Hunan1, Hunan2 and Hunan3), and 2 locations in Japan (Tsukuba and Okinawa). Four individuals were sampled from each location, except for Hunan1 (3 individuals). A total of 63 individuals were used in this study. Mapping and SNP calling First, reads from each individual were mapped to the reference genome. The proper mapping rate was about 70% for 56 of the 63 individuals (Supplementary Table 21 ). Since the proper mapping rates for four individuals from the Shanxi population and three individuals from Fujian were extremely low, they were excluded from the population genomics analysis. SNP calling was conducted by comparing the 56 genomes with the reference genome. Finally, a multiple-sample VCF file including the 56 individuals was generated. Sites with missing values or quality values below 20 were filtered out using VCFtools software 84 .
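The site-level screening performed with VCFtools (removing sites with missing values or quality below 20) can be mimicked with a toy filter; the VCF body lines below are hypothetical and this function is an illustration, not VCFtools itself:

```python
# Minimal sketch of the site screening described above: drop VCF records
# with missing genotype calls or a QUAL value below 20.
def keep_site(vcf_line, min_qual=20.0):
    fields = vcf_line.rstrip("\n").split("\t")
    qual = float(fields[5])                          # QUAL column
    genotypes = [f.split(":")[0] for f in fields[9:]]  # GT per sample
    has_missing = any("." in gt for gt in genotypes)
    return qual >= min_qual and not has_missing

sites = [
    "chr1\t101\t.\tA\tG\t55.0\tPASS\t.\tGT\t0/1\t1/1",
    "chr1\t202\t.\tC\tT\t12.0\tPASS\t.\tGT\t0/0\t0/1",   # low QUAL
    "chr1\t303\t.\tG\tA\t60.0\tPASS\t.\tGT\t./.\t0/1",   # missing call
]
print([keep_site(s) for s in sites])   # [True, False, False]
```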
In total, 46,595,432 SNPs were identified and included in this analysis. Genetic diversity, population structure and balancing selection The nucleotide diversity (π) of 14 local populations and pairwise F ST values were calculated using VCFtools software with window size 5000 bp, step 2500 bp. The genomic nucleotide diversity was obtained by averaging over the values of windows. The weighted F ST was calculated using the Weir and Cockerham estimator 85 . Based on the pairwise F ST , hierarchical cluster analysis was conducted using R software. Because of the small sample size in each sampling location, interpretation of population genomic analysis needs careful evaluation of the precision. The precision of π and F ST values were evaluated by parametric bootstrap with coalescent simulation 86 . Haplotypes of windows were generated using the population-specific π values multiplied by 5000 and 4 Nms calculated as 1/ F ST −1. Two haplotypes were generated for each window. A thousand sets of haplotypes were generated independently and concatenated to make a bootstrap sample. For each of 100 bootstrap samples, the π values and pairwise F ST were calculated to estimate the standard errors. The adopted number of sets was less than the number of the scaffolds. Because the genome size of S . litura was about 4 × 10 8 bp, we mimicked the subsampling of windows that were separated by bp on average so that we could estimate approximate independence between the sub-sampled windows. To confirm the observed population structure, we conducted a model-based structure analysis 34 , 87 . Based on the allele frequency divergence among the ancestral populations ( P ) and the membership coefficients that assign the populations to the ancestral populations ( Q ), we calculated the predicted allele frequency divergence between the population ( QPQ t ). We also analysed individual-level membership coefficients and the allele frequency divergence. 
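The sliding-window scan described above (5000 bp windows with a 2500 bp step) can be sketched as follows. For the per-SNP FST we use Hudson's estimator as a simple stand-in; the paper itself used the Weir and Cockerham weighted estimator via VCFtools:

```python
# Enumerate overlapping windows across a sequence, matching the
# window size (5000 bp) and step (2500 bp) used in the text.
def windows(length, size=5000, step=2500):
    start = 0
    while start < length:
        yield (start, min(start + size, length))
        start += step

# Hudson's per-SNP FST from allele frequencies p1, p2 and sample
# sizes n1, n2 (a simplified alternative to Weir & Cockerham).
def hudson_fst(p1, p2, n1, n2):
    num = (p1 - p2) ** 2 - p1 * (1 - p1) / (n1 - 1) - p2 * (1 - p2) / (n2 - 1)
    den = p1 * (1 - p2) + p2 * (1 - p1)
    return num / den if den > 0 else 0.0

print(list(windows(12000)))
print(round(hudson_fst(0.9, 0.1, 8, 8), 3))
```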
We further estimated the global pattern of migration by analysing the joint allele frequency spectrums in terms of the population histories and the migration patterns by ∂a∂i (diffusion approximation for demographic inference) 39 . To avoid the complex effect of selection, we analysed SNPs in introns. Out of ~20 million intronic SNPs, we randomly sampled 2 million SNPs. Based on the multi-dimensional scaling of F ST and the assignment of the individual genomes by structure, we constructed six population groups: the Indian local population (with the sample from Delhi), Indian migratory population (with the samples from Hyderabad and Matsyapuri), Chinese isolated population (with the samples from Guangzhou2 and Hunan1), Chinese local population (with the samples from Hunan3, Guangxi, Hainan, three individuals of Hunan2 and Hainan), Chinese migratory population (with the samples from Fujian, and one individual each of Hunan2, Hunan3, Hunan4, Zhejiang and Guangzhou1), and Japanese migrating population (with the samples from Okinawa and Tsukuba). To each pair of population groups we applied the IM (isolation with migration) model 40 with population expansion/shrinkage. The estimated migration rates represent the number of migrating chromosomes per generation. To obtain the population sizes and the time of population splitting from the estimated relative values, we followed a previous study 88 that assumes the generation time of 0.3 year and uses the standard mutation rate of 8.4 × 10 −9 (per site per generation) from Drosophila 89 . The standard errors were obtained by parametric bootstrap of coalescent simulation 86 . Assuming the estimated scenarios of population history, we generated 100 bootstrap samples of 2 million SNPs. To reflect the correlation structure between SNP loci, we assumed that they were evenly distributed on 28 chromosomes. SNPs on different chromosomes are independent. 
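Converting ∂a∂i's scaled estimates into natural units follows the standard ∂a∂i conventions (θ = 4NrefμL, with times in units of 2Nref generations), together with the mutation rate of 8.4 × 10−9 per site per generation and the 0.3-year generation time assumed above; the θ, sequence length, and scaled time below are hypothetical:

```python
MU = 8.4e-9        # mutation rate per site per generation (Drosophila standard)
GEN_TIME = 0.3     # years per generation, as assumed in the paper

def n_ref_from_theta(theta, seq_len):
    """Reference effective population size: theta = 4 * N_ref * mu * L."""
    return theta / (4 * MU * seq_len)

def split_time_years(t_scaled, n_ref):
    """Scaled times are in units of 2 * N_ref generations."""
    return t_scaled * 2 * n_ref * GEN_TIME

# Hypothetical scaled estimates, for illustration only
n_ref = n_ref_from_theta(theta=5000.0, seq_len=2.0e7)
print(round(n_ref))                           # reference population size
print(round(split_time_years(0.05, n_ref)))   # divergence time in years
```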
Noting that the mean distance between the neighbouring SNP loci (in bp) was $$\frac{{\rm{4.6}}\times {10}^{8}}{2.0\times {10}^{6}}=2.3\times 1{0}^{2}$$ we set the recombination rate to be ρ = 2.3 × 10 −5 . We also tested two alternative values, ρ = 0 and ρ = 0.01, and obtained similar standard errors.
As part of an international consortium, INRA researchers, in partnership with the CEA and INRIA , have sequenced one of the first genomes of a moth from the superfamily Noctuoidea: Spodoptera frugiperda, or armyworm. This crop pest – until now only known on the American continent – has become invasive in Africa since 2016. Published in Scientific Reports on 25 September 2017, this study opens up perspectives for new methods of biological control and a better understanding of the mechanisms involved in the appearance of pesticide resistance. Spodoptera frugiperda is a moth belonging to the superfamily Noctuoidea. It is also called Armyworm because the caterpillars are sometimes so numerous that they form "carpets" on the ground, similar to an army on the march. Unlike the majority of herbivorous insects, the Armyworm is highly polyphagous: it attacks over one hundred plant species, including crops (maize, rice, sorghum, cotton and soybean). The moth moves in large swarms and is able to fly long distances. Until now limited to America and the Caribbean, Spodoptera frugiperda is estimated to cause 600 million dollars in damage in Brazil every year according to the FAO. In America, 67 cases of insect resistance to pesticides have been identified to date, illustrating the difficulty of controlling these populations. Since January 2016, it has become invasive in Africa, where it destroys maize crops in 21 countries in the south and west parts of the continent. It is currently a threat to the European continent. Sequencing of one of the first genomes of a moth belonging to the superfamily Noctuoidea In the framework of an international public consortium called Fall Armyworm, INRA researchers, in partnership with the CEA and INRIA, sequenced the genome of Spodoptera frugiperda. They described one of the first genomes of a moth belonging to the superfamily Noctuoidea and, more specifically, studied three types of gene families in this insect. 
Scientists first analysed a group of genes involved in the recognition of host plants, which allows the moths to feed or lay their eggs. By comparing the genome of S. frugiperda with the available genomes of (non-polyphagous) Lepidoptera, researchers found an expansion in the number of genes (230 versus 45 to 74 in other Lepidoptera) encoding a certain type of taste receptor. The latter are located on the moth's proboscis or under its legs. They are thought to allow the moth to detect toxins or bitter compounds produced by plants. The only other insect known to have such taste receptor expansions is an omnivorous beetle, the red flour beetle, which also attacks a wide range of foods. Researchers also looked at other families of genes necessary to cope with the chemical defences that plants overproduce when they are attacked (detoxification). They discovered expansions in two of the four major gene families for detoxification (encoding cytochrome P450s (CYPs) or glutathione-S-transferases (GSTs)). These are the same genes that can be involved in resistance to pesticides. Lastly, scientists described a group of genes involved in the digestion of plant tissues. For the purpose of the study, two genomic variants of S. frugiperda, defined by the insect's host plant, were analysed: the maize variant and the rice variant, which are found throughout their range of distribution in America. The researchers found differences between the variants in the number and sequence of genes necessary to detoxify the toxins emitted by the plants, and in the genes necessary for digestion. These data were made available to the international scientific community to allow specialists to identify which variant of the moth is invading Africa. Moreover, this work will make it possible to envision new biological control methods and to improve understanding of the mechanisms by which pesticide resistance appears.
10.1038/s41559-017-0314-4
Biology
First 3D-bioprinted structured Wagyu beef-like meat
Dong-Hee Kang et al, Engineered whole cut meat-like tissue by the assembly of cell fibers using tendon-gel integrated bioprinting, Nature Communications (2021). DOI: 10.1038/s41467-021-25236-9 Journal information: Nature Communications
http://dx.doi.org/10.1038/s41467-021-25236-9
https://phys.org/news/2021-08-3d-bioprinted-wagyu-beef-like-meat.html
Abstract With the current interest in cultured meat, mammalian cell-based meat has mostly been unstructured. There is thus still a high demand for artificial steak-like meat. We demonstrate in vitro construction of engineered steak-like tissue assembled of three types of bovine cell fibers (muscle, fat, and vessel). Because actual meat is an aligned assembly of the fibers connected to the tendon for the actions of contraction and relaxation, tendon-gel integrated bioprinting was developed to construct tendon-like gels. In this study, a total of 72 fibers comprising 42 muscles, 28 adipose tissues, and 2 blood capillaries were constructed by tendon-gel integrated bioprinting and manually assembled to fabricate steak-like meat with a diameter of 5 mm and a length of 10 mm inspired by a meat cut. The tendon-gel integrated bioprinting developed here could be a promising technology for the fabrication of the desired types of steak-like cultured meats. Introduction Over the past decade, cultured meat has drawn tremendous attention from the standpoints of ethics, economics, the environment, and public health, although it is still under debate 1 . More recently, meat analogs that taste like meat but are based on plant proteins have been released commercially 1 , 2 . Although challenges remain, cultured meat, unlike meat analogs, is highly sought after due to the possibility of imitating real meat through the manipulation of flavor, muscle/adipose cell ratio, and texture 3 , 4 . Bovine cells for cultured meat can currently be secured by two approaches 5 , 6 . In one, edible muscle tissues are obtained from cattle and their cells are separated into types such as muscle satellite cells, adult stem cells, and multipotent stem cells, which are then cultured to increase the number of cells. The other is to transform somatic cells into induced pluripotent stem cells (iPSCs) and differentiate them into each cell type.
Primary cultured stem cells, particularly muscle satellite cells, maintain their differentiation capability for about 10 passages and thus allow only a limited number of divisions 7 ; they should nevertheless be safe and acceptable for consumption. Since Post and coworkers unveiled a bovine cell fiber-based hamburger, various types of cultured meat have been demonstrated. However, cultured steak with a composition and structure similar to real steak, comprising mostly adipose cells and aligned muscle cells, remains challenging 4 , 8 , 9 . Various tissue engineering techniques could be applied to mimic the structural characteristics of steak, such as cell sheet engineering 10 , 11 , cell fiber engineering 12 , cell culture on a 3D-printed scaffold 13 , and 3D cell printing 14 , 15 . Among them, 3D cell printing is promising due to its scalability and the controllability of structure and composition 16 . In particular, the supporting bath-assisted 3D printing (SBP) technique, in which ink is dispensed inside a gel or a suspension with thixotropy, is noteworthy. Under shear force, the gel or suspension becomes low in viscosity, enabling ink dispensing, and it returns to high viscosity when the shear force is released, maintaining the printed form 17 . Since SBP overcomes not only the restricted ink-viscosity range but also the drying problem during prolonged extrusion-based 3D printing in an air-interfaced environment, several studies over the past five years have shown the feasibility of complex tissue fabrication 18 , 19 , 20 , 21 , 22 , 23 , 24 . Steak meat has an aligned structure of skeletal muscle fascicles with diameters from around 900 μm to 2.3 mm 25 , depending on age and animal part, formed by assembled skeletal muscle fibers connected to the tendon for its contraction and relaxation movements. 
The muscle fibers are covered with basement membrane, and the muscle fascicles are surrounded by fat together with blood capillaries (Fig. 1a ). The component ratio and location of the muscle, adipose tissues, and blood capillaries differ significantly according to the meat type and its country of origin. For example, red meat in the rump of Japanese Wagyu has only 10.7% adipose tissue, whereas the sirloin of the Wagyu has 47.5% 26 . Accordingly, the development of a methodology for assembling the three types of fibers with the desired location, ratio, and amount will be a key manufacturing technology for cultured steak. Fig. 1: Overview of the work. a Structure of steak. (i, ii) H&E- and (iii) Azan-stained images of a piece of steak. Representative images from three independent experiments are shown. All scale bars denote 100 μm. (iv) Schematic of the hierarchical structure in muscle. b Schematic of the construction process for cultured steak. The first step is cell purification of tissue from cattle to obtain bovine satellite cells (bSCs) and bovine adipose-derived stem cells (bADSCs). The second is supporting bath-assisted printing (SBP) of bSCs and bADSCs to fabricate muscle, fat, and vascular tissue with a fibrous structure. The third is the assembly of cell fibers to mimic the commercial steak's structure. *SVF stromal vascular fraction. Full size image Here, we demonstrate a three-step strategy for the construction of engineered steak-like meat: (1) collection of edible bovine satellite cells (bSCs) and bovine adipose-derived stem cells (bADSCs) from beef meat and their subsequent expansion, (2) development of tendon-gel-integrated bioprinting (TIP) for the fabrication of cell fibers and their subsequent differentiation into skeletal muscle, adipose, and blood capillary fibers, and (3) assembly of the differentiated cell fibers to construct engineered steak-like meat by mimicking the histological structure of an actual beef steak (Fig. 1b ). 
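The cited component ratios (10.7% adipose in Wagyu rump vs. 47.5% in sirloin) translate directly into a planning question for fiber assembly: how many fat fibers are needed per muscle fiber to hit a target adipose cross-sectional fraction? The sketch below is an illustrative back-of-envelope calculation, not a method from the paper; it assumes circular fiber cross-sections and ignores packing gaps and blood capillaries.

```python
import math

def fat_fiber_ratio(fat_fraction, d_muscle_mm, d_fat_mm):
    """Fat:muscle fiber count ratio needed so that fat occupies
    `fat_fraction` of the combined cross-sectional area
    (circular fibers assumed; illustrative only)."""
    a_muscle = math.pi * (d_muscle_mm / 2) ** 2
    a_fat = math.pi * (d_fat_mm / 2) ** 2
    return (fat_fraction / (1 - fat_fraction)) * (a_muscle / a_fat)

# Rump-like (10.7% fat) vs. sirloin-like (47.5% fat) targets,
# for hypothetical equal 0.5 mm fiber diameters:
print(fat_fiber_ratio(0.107, 0.5, 0.5))  # ≈ 0.12 fat fibers per muscle fiber
print(fat_fiber_ratio(0.475, 0.5, 0.5))  # ≈ 0.90
```

Such a calculation would change with the actual fiber diameters; larger fat fibers (as used later in this paper) reduce the count needed for the same area fraction.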
Since the tendon is a key tissue for muscle fiber alignment and maturation, we fabricated tendon gels by TIP to enable their continuous connection with the muscle cell fibers, inducing the formation of aligned, matured muscle fibers. In this study, a total of 72 fibers comprising 42 muscle, 28 adipose, and 2 blood capillary fibers were constructed by TIP. They were subsequently assembled to fabricate steak-like meat with a diameter of 5 mm and a length of 10 mm, inspired by the histological images of an actual Wagyu beef steak. TIP is expected to become a powerful approach for constructing engineered steak-like meat with the desired location, component ratio, and amount of the three types of fibers. Results Verification of the differentiation conditions for the extracted bSCs and bADSCs The bSCs were extracted from the masseter muscle of 27-month-old Japanese black cows obtained from a slaughterhouse, using a method modified from a previously reported one 7 . The crude cell fraction separated from the beef meat by collagenase treatment was cultured until passage 3 (P3) for cell sorting. The CD31 − , CD45 − , CD56 + , and CD29 + cells were isolated by fluorescence-activated cell sorting (Supplementary Fig. 1 ), in which Pax7 + bSCs were around 80%. 2D culture of the isolated bSCs was performed to evaluate their proliferation and muscle differentiation potentials even after prolonged passaging. After seeding, the bSCs were passaged by trypsinization every two days, incrementing the passage number at each detachment. The proliferation medium contained not only fetal bovine serum (FBS) and basic fibroblast growth factor but also a p38 inhibitor to maintain the differentiation potential of the proliferating bSCs 7 . The number of seeded bSCs doubled approximately once per day until P8, and about once every two days thereafter (Fig. 2a ). 
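The reported growth kinetics (one doubling per day until P8, one every two days afterwards, with passaging every two days) imply a simple expansion model. The sketch below is an illustrative reconstruction with assumed, idealized numbers, not the authors' measured data: it treats each two-day passage as a 4-fold expansion before P8 and a 2-fold expansion afterwards.

```python
# Illustrative bSC expansion model (assumed numbers, not measured data):
# cells are passaged every 2 days; up to P8 they double ~once per day
# (~4x per passage), afterwards ~once every 2 days (~2x per passage).
def fold_expansion(start_passage, end_passage, fast_until=8):
    """Cumulative fold expansion between two passage numbers."""
    fold = 1
    for p in range(start_passage, end_passage):
        fold *= 4 if p < fast_until else 2
    return fold

print(fold_expansion(3, 8))   # 4**5 = 1024-fold while potency is retained
print(fold_expansion(3, 12))  # later passages add only 2x each: 16384-fold
```

The model makes the practical point concrete: most of the usable expansion happens before P8, which is also where the differentiation capability drops, so the P8 cutoff bounds the cell yield per isolation.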
Differentiation was induced two days after seeding by switching the basic medium to a differentiation medium containing 2% horse serum (HS), a well-known differentiation-induction method for muscle cells. The cells were immunostained with an antibody against myosin II heavy chain (MHC) after five days of differentiation induction. We quantified the differentiation capacity as a function of the passage number of the seeded bSCs by calculating the ratio of DAPI fluorescence intensity between MHC + and MHC − cells from fluorescence images (Supplementary Fig. 2 ). The bSCs from P3 to P7 expressed a comparable differentiation level, but the differentiation capability of bSCs above P8 significantly decreased (Fig. 2b, c ). Therefore, we conducted experiments using cells prior to P8. Fig. 2: Verification of purified bovine stem cells. a, b Proliferation rate ( n = 3 independent samples) ( a ) and differentiation ratio ( n = 4 independent areas examined over three independent samples) on day 5 of differentiation ( b ) of bSCs from passage 3 (P3) to P12 cultured on a tissue culture plate. Red and blue lines are slopes from P3 to P8 and from P8 to P12, respectively. c Representative immunofluorescence images of differentiation-induced bSCs at P7 and P9 stained for myosin II heavy chain (MHC) (green) and nucleus (blue), from at least three independent experiments. Scale bars, 1 mm. d Adipogenesis ratio (left) of 3D gel-drop-cultured bADSCs derived by 12 combinations of free fatty acids (middle) in DMEM on days 5, 9, and 13 ( n = 4 independent experiments, two-way ANOVA paired for the time and unpaired for the treatment with a Tukey's HSD post test). e, f Lipid-droplet production in 3D-cultured bADSCs, depending on the concentration of ALK5i on day 7 ( e ) and culture day ( f ) in the #1 combination of free fatty acids and 5 μM ( n = 5 (e) and 3 (f) independent experiments, unpaired (e) and paired (f) one-way ANOVA with a Tukey's HSD post test). 
g, h Representative immunofluorescence images from three independent experiments ( g ) and mRNA expression levels ( h ) of 3D bADSC tissue cultured with media containing the seven-free-fatty-acid mixture (#1) and 5 μM ALK5i ( n = 3 independent experiments, paired one-way ANOVA with a Tukey's HSD post test). i, j CD31 immunostaining quantitation of bADSCs in 2D, depending on serum conditions in DMEM ( i ) and base media ( j ) on day 7 ( n = 3 independent experiments, unpaired one-way ANOVA with a Tukey's HSD post test (i) and unpaired two-way ANOVA with a Šidák post test (j)). k Representative immunofluorescence images of bADSCs depending on serum conditions on day 7, stained for CD31 (magenta) and nucleus (blue), from three independent experiments. Scale bars, 100 μm. The bADSCs used were extracted from subcutaneous fat. * p <0.05, ** p <0.01, *** p <0.001; error bars represent mean ± s.d. Source data are provided as a Source Data file. Full size image Next, 3D culture with collagen microfibers (CMF)/fibrin gel was performed to assess the adipogenic differentiation potential of the bADSCs under a variety of media conditions, since the adipogenesis of adipose-derived stem cells (ADSCs) is known to be higher in 3D culture than in 2D culture 27 and the efficiency of differentiation factors depends on the species 28 . Conventional human adipogenic factors such as insulin, rosiglitazone, and troglitazone were tested first but yielded only limited lipogenesis (Supplementary Fig. 3 ), leading to the direct addition of free fatty acids (pristanic acid, phytanic acid, erucic acid, elaidic acid, oleic acid, palmitoleic acid, and myristoleic acid) to the culture medium following previously published methods 29 , 30 . Different combinations of the seven aforementioned free fatty acids were then compared, and the medium containing all seven free fatty acids showed the highest lipogenesis, as judged by lipid storage in the cytoplasmic vesicles of the bovine preadipocytes (Fig. 
2d and Supplementary Fig. 4 ). To further increase lipogenesis toward a matured bovine adipocyte state, the effect of an inhibitor of the transforming growth factor (TGF) type I receptor activin-like kinase 5 (ALK5i) was evaluated: TGF‐β family ligands, contained in the 10% FBS of the culture medium, signal through the TGF-β receptor ALK5 and are known to inhibit both adipogenesis and adipocyte hypertrophy 31 . The TGF‐β family also includes myostatin, which is expressed by myocytes to impair adipogenesis 32 . In the context of a future coculture of bovine myoblasts and adipocytes, ALK5i therefore appeared relevant for further enhancing the adipogenic potential of the culture medium containing the seven free fatty acids. Several concentrations from 1 to 10 µM were thus assessed. The results showed a tendency toward higher lipogenesis with 5 µM ALK5i (Fig. 2e and Supplementary Fig. 5a ). The lipogenesis of the bADSCs then increased progressively between 3 and 14 days of differentiation (Fig. 2f and Supplementary Fig. 5b ). In addition to lipogenesis, the two adipogenic markers PPARγ2 and FABP4 were investigated to evaluate adipogenesis. Immunostaining of PPARγ2, one of the most important transcription factors for fat cell differentiation, showed slight expression inside the bADSCs after three days of differentiation, found specifically in the nuclei, consistent with its role as an early transcription factor inducing the other adipogenic maturation genes 33 (Supplementary Fig. 6 ). This nuclear localization was less apparent after 14 days of differentiation, implying a more matured state of the bADSCs (Fig. 2g and Supplementary Fig. 6 ). 
Concerning FABP4, a late-stage adipogenic marker necessary for trafficking fatty acids to the membrane for efflux 34 , its expression was observed in the cytoplasm and increased with differentiation duration, with particularly high expression in the unilocular mature adipocytes (Fig. 2g and Supplementary Fig. 6 ). The increase of both early and late marker expression was confirmed by RNA (qPCR) analysis, which highlighted a significant, linearly increasing expression profile of the PPARγ2 marker, followed by FABP4 to a greater extent, attesting to the mature state of the bADSC-derived adipocytes (Fig. 2h ). Recently, ADSCs have been considered a useful cell source for angiogenesis in tissue engineering, but unlike for human ADSCs, there are no reports on endothelial differentiation of bADSCs 35 , 36 . Knowing that ADSCs can lose their differentiation potential during culture expansion 37 , bADSCs were used at P1 to evaluate their endothelial differentiation under different conditions. Horse serum (HS) was surprisingly found to be a significant inducer of the expression of CD31, an endothelial cell marker (4 and 15 times more for 1 and 10% HS, respectively, compared with the 10% FBS condition), independently of the medium used, DMEM or F12K (Fig. 2i–j and Supplementary Fig. 7 ). Human serum also provided enhanced endothelial differentiation compared with the FBS condition but was hampered by the low cell proliferation observed (Supplementary Fig. 7 ). In addition to CD31 expression, tubulogenesis was confirmed by culturing the seven-day differentiated cells on Matrigel in media containing 10% HS (Supplementary Fig. 8 ). DMEM + 10% HS was then used for the endothelial differentiation of bADSCs in this study. 
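The image-based differentiation quantification described above (ratio of DAPI signal between MHC+ and MHC− cells, Supplementary Fig. 2) can be sketched as a simple masked-sum operation. The function below is an illustrative reconstruction, not the authors' actual analysis pipeline; it assumes the MHC-positive mask has already been produced upstream (e.g., by thresholding the MHC channel).

```python
import numpy as np

def differentiation_index(dapi: np.ndarray, mhc_mask: np.ndarray) -> float:
    """Fraction of total DAPI (nuclear) signal falling inside MHC+ regions.

    dapi     -- 2D array of DAPI fluorescence intensities
    mhc_mask -- boolean 2D array, True where MHC staining is above threshold
    (Hypothetical index for illustration; not the paper's exact formula.)
    """
    total = dapi.sum()
    if total == 0:
        return 0.0
    return float(dapi[mhc_mask].sum() / total)

# Toy field: uniform DAPI signal, MHC+ mask covering half the area
dapi = np.ones((4, 4))
mask = np.zeros((4, 4), dtype=bool)
mask[:, :2] = True
print(differentiation_index(dapi, mask))  # 0.5
```

In practice such an index would be computed per imaged field and averaged across replicates, which matches how the paper reports n = 4 areas over three independent samples.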
Bovine muscle fiber fabrication by supporting bath-assisted 3D printing To organize the isolated bSCs into a cell fiber, we used supporting bath-assisted 3D printing (SBP), in which a bioink is dispensed inside a supporting bath, usually composed of a hydrogel slurry, that ensures the stability of the printed structure along the z axis. Several studies have demonstrated its promise in cell printing for its high shape fidelity even with complex or soft structures, and for its stable printing during prolonged operation 20 , 21 , 23 , 24 . We selected gelatin and gellan gum as supporting bath materials for their edible, removable, and cell-compatible properties. Gelatin is a gel at room temperature (RT) and a liquid at 37 °C, and is therefore easy to remove after printing by incubation at 37 °C 19 . Gellan gum hydrogel is also known to dissolve in 50 mM Tris-HCl buffer at pH 7.4 and 37 °C 38 . Hydrogel slurries were fabricated by homogenizing bulk hydrogels of gellan gum and gelatin, giving average particle sizes of 44 μm and 70 μm, respectively, and their thixotropic behavior was confirmed (Supplementary Fig. 9 ). First, we printed the bioink containing bSCs, fibrinogen, and Matrigel solution in the culture medium into a supporting bath of granular particles of gelatin (G-Gel) or gellan gum (G-GG) mixed with thrombin, to fabricate a fibrous muscle fiber mimicking the muscle fiber bundles in steak (Supplementary Movies 1 and 2 ). After confirming gel formation and removing the supporting baths, high cell viability was observed for nine days after printing in both G-Gel and G-GG by live/dead staining (Fig. 3a, b and Supplementary Fig. 10 ). Fig. 3: The characterization of bSC tissue fabricated by SBP. 
a, b Optical (left), phase-contrast (middle), and fluorescence (right; green: live cells, red: dead cells) images of the bSC tissues printed inside granular gellan gum (G-GG) ( a ) and granular gelatin (G-Gel) ( b ), followed by bath removal. Scale bars, 500 µm. c Shape change of bSC tissue fabricated by SBP inside G-Gel, from the fibrous form right after printing and bath removal to a globular form on day 6 of suspension culture. d Schematic (left), size change according to culture day (middle), and phase-contrast images (right) of needle-fixed culture of printed bSC tissues. Error bars represent mean ± s.d. Scale bars, 500 µm. e 3D fluorescence images (upper; red: actin, green: MHC) and cell alignment measurements (lower) of the bSC tissues printed inside G-GG and G-Gel, in suspension and needle-fixed cultures on day 3 of differentiation (after six days), respectively. Representative images from at least two independent experiments are shown. Scale bars, 200 µm. Source data are provided as a Source Data file. Full size image When the printed cell fiber was cultured in suspension, the fiber collapsed into a globular form (Fig. 3c ). Studies on muscle tissue engineering have implied that an anchor structure enables 3D muscle tissues not only to maintain their initial shape but also to improve cell alignment, fusion, and differentiation against the muscle fiber's contraction 15 , 39 , 40 , 41 , 42 , 43 . We placed a printed cell fiber onto a silicone rubber sheet and anchored it with needles fastening both ends to withstand cell contraction (Fig. 3d , left). With the needle-fixed culture, the cell fibers printed inside G-GG and G-Gel retained their fibrous structure, but the diameter shrank by around 60% in G-GG and 80% in G-Gel at day 9 of culture (Fig. 3d , right). 
It is reasonable to suppose that the size decrease was caused by the alignment and fusion of bSCs, along with the enzymatic decomposition of the fibrin gel by proteases secreted from the cells 44 , 45 . We also took immunofluorescence images inside the cell fibers to examine the cellular behavior with and without needle anchoring on day 3 of differentiation. To quantify the improvement in muscle maturation, the cell alignment, i.e., the angle relative to the straight line between the needles, was measured from the immunofluorescence images (Fig. 3e , left). The results showed that the cells in the cell fibers cultured in suspension were randomly oriented regardless of the type of supporting bath, whereas in the needle-fixed culture the cells in the cell fiber printed inside G-Gel were anisotropically oriented compared with those of G-GG (Fig. 3e , right). We postulated that the difference in the degree of alignment between G-GG and G-Gel arises from the hindrance of cell behavior by residual substances inside or on the printed cell fibers. This substance was found to be residual G-GG in the cell fiber (Supplementary Fig. 11 ), which may not be degraded or dissolved, limiting the ECM remodeling the cells require to migrate and fuse with other cells. On the other hand, G-Gel is easily dissolved at 37 °C and may be degraded by proteases, enabling active cell behavior despite any residues that might remain in the printed cell fiber. Printing bSCs inside G-Gel and anchoring them are thus the essential steps for the fabrication of the muscle cell fiber, but the anchoring method may not be appropriate for scale-up. Therefore, we developed a modified SBP that includes a part to which the printed cell fiber can be simultaneously anchored. 
Fabrication of muscle, fat and vascular cell fibers by TIP The important feature in the modified SBP, which we have named tendon-gel-integrated bioprinting (TIP), is the introduction of tendon gels to anchor the printed cell fibers for culture. Figure 4a illustrates the process of the TIP in which the printing bath is divided into three parts: the bottom tendon gel, the supporting bath, and the upper tendon gel. G-Gel is used as a supporting bath as described in the above section and the volume of tendon-gels is filled with 4 wt% collagen nanofiber solution (CNFs) which has a reversible sol-gel transition from 4 °C to 37 °C (Supplementary Fig. 12 ). To separate the layers and maintain the structure we fabricated polydimethylsiloxane (PDMS) wells (Supplementary Fig. 13 ). After the bSC fiber gelation inside the PDMS well (Supplementary Movie 3 ), incubation for 2 h at 37 °C induced the supporting bath and tendon gels to become a solution and a gel, respectively, and the PDMS well was then put in the culture medium. Fig. 4: Tendon-gel integrated bioprinting (TIP) for muscle, fat, and vascular tissue fabrication. a The schematic of TIP for cell printing. b Optical (upper) and phase-contrast (lower) images of the bSC tissue printed by TIP, keeping the fibrous structure on day 3. The images were taken after fixation. Scale bar, 1 mm. c The H&E-stained image of half of collagen gel (dotted black line)—fibrous bSC tissue (dotted red line) and a magnified image of the fibrous bSC tissue (right). Scale bars, 2 mm (left) and 50 µm (right). d 3D fluorescence image (left) and cell alignment measurement (right) of the TIP-derived bSC tissue stained with actin (red), MHC (green), and nucleus (blue) on day 3 of differentiation. Scale bar, 50 µm. e SEM images of TIP-derived bSC tissue on day 3 of differentiation. Scale bars, 10 µm and 100 µm (inset). 
f MHC mRNA expression levels of bSCs before printing and TIP-derived bSC tissue on day 3 of differentiation ( n = 3 independent samples, pairwise t-test comparison). g Fluorescence image of TIP-derived bSCs tissue stained with actin (red), MHC (green), and nucleus (blue) on day 14 of differentiation. Scale bar, 50 µm. h The optical images of multiple tissue fabrication (25 ea.) by multiple printing. Black arrows indicate printed cell fibers. i, j mRNA levels ( i ) and protein expression levels ( j ) of TIP-derived fat tissues before printing and at day 14 of differentiation (at day 17 of total culture) ( n = 3 independent samples, pairwise t -test comparison). k Whole fluorescence (left), optical (inset), and magnified (right) images of muscle (on day 4 of differentiation, green: MHC & blue: nucleus), fat (on day 14 of differentiation, red: lipid and blue: nucleus), and vascular (on day 7, magenta: CD31 and blue: nucleus) tissues fabricated by TIP. Scale bars, 1 mm (left) and 100 µm (right). l, m DNA amount per weight (light-gray bars: day 1, and dark-gray bars: day 6 in muscle fiber and day 17 in fat fiber) and ( l ) compressive modulus ( m ) of muscle and fat fibers in the commercial meat (white bar) and TIP-derived (gray bars). The modulus of the muscle fiber on day 3 of differentiation (after 6 days) and the fat fiber on day 7 of differentiation (after 10 days) was measured ( n = 3 independent samples, paired one-way ANOVA with a Tukey’s HSD post test (l) and pairwise t -test comparison (m)). * p <0.05, ** p <0.01, *** p <0.001; error bars represent mean ± s.d. Representative images from at least two independent experiments are shown. Source data are provided as a Source Data file. Full size image On day 3, we could confirm that the printed cell fiber maintained its fibrous shape and kept its connection with the two tendon gels, as seen in the phase contrast and the H&E staining images (Figs. 4 b and 4c ). 
The cell viability of the bSC fiber printed by TIP was confirmed up to day 9 (Supplementary Fig. 14 ). The fiber also showed a high alignment of cells on day 3 of differentiation, comparable with the needle-fixed culture (Figs. 3e , 4d, e and Supplementary Movie 4 ). MHC expression of the TIP-derived bSC fiber was relatively lower than in the needle-fixed culture (Supplementary Fig. 15 ), but the mRNA level of MHC expression was upregulated by >1000-fold on day 3 of differentiation compared with the naive bSCs (Fig. 4f ). Interestingly, sarcomere structures, indicating a matured state of the muscle fibers, were observed in some of the TIP-derived bSC fibers (Fig. 4g ), but we could not obtain any needle-fixed culture cell fibers after 14 days of differentiation for comparison. These results imply that long-term culture in TIP can induce a higher degree of muscle maturation than the needle-fixed culture. Although we did not investigate it thoroughly here, this may be caused by bSC adhesion to the collagen gel at the anchorage regions in TIP, whereas there is no cell adhesion in the needle-fixed culture (Supplementary Fig. 16 ). TIP is a promising method for muscle fiber fabrication, but it still has the problem of occasional bSC fiber detachment from the tendon gels, especially the bottom tendon gel, during prolonged culture due to strong contraction. Increasing the concentration of the CNFs or using additional cross-linking may provide a solution to this problem. Moreover, double printing, in which another cell fiber is added by general TIP after rotating the PDMS well 180° following the fabrication of the first one, may be another way of solving the problem. When double printing was performed, the two printed cell fibers close to each other fused into one thicker cell fiber (Supplementary Fig. 17 ). Multiple printing of 25 bSC cell fibers in one large PDMS well was also performed (Fig. 
4h and Supplementary Movie 5 ). We first aimed to produce directly in one PDMS well a large tissue composed of various types of cell fibers, but in this study we fabricated the muscle, fat, and vascular cell fibers individually, because each differentiation had to be induced in the specific medium corresponding to each cell fiber, based on the findings discussed in the first section. The adipogenesis of the bADSC-derived fat fiber made by TIP was confirmed by the mRNA and protein expression of PPARγ2 and FABP4, as in the 3D culture. Compared with naive bADSCs, PPARγ2 and FABP4 were upregulated by >6-fold and >40-fold, respectively, in mRNA expression and >2-fold each at the protein level on day 14 of differentiation (Fig. 4i, j and Supplementary Fig. 18 ). Figure 4k , Supplementary Fig. 19 , and Supplementary Movies 6 – 8 show whole muscle, fat, and blood capillary cell fibers, respectively, each fabricated independently by TIP. Even though each cell fiber was fabricated separately, we believe that if a differentiation medium for culturing all three types of cell fibers at the same time is developed, programmed printing of them in the desired locations will be feasible. The DNA amount, compressive modulus, and water content of the muscle and fat cell fibers made by TIP were compared with fibers extracted from commercial beef (Supplementary Fig. 20 ). The DNA concentration in the TIP-derived muscle fiber did not change with culture day, whereas that in the TIP-derived fat fiber increased beyond that of the commercial meat by day 14 of differentiation, implying proliferation and a significant change in cell numbers in the fat fibers during culture and differentiation after printing, which was not the case for the muscle fibers (Fig. 4l ). Also, the DNA concentration in the TIP-derived muscle fibers was found to be six times smaller than in the meat fibers (Fig. 
4l ), indicating that optimization of the bSC concentration or the ECM concentration in the bioink will be necessary to match real meat. Although the water content differed between the commercial beef and the TIP-derived cell fibers (Supplementary Fig. 21 ), the compressive moduli of all cell fibers (muscle fiber on day 3 of differentiation and fat fiber on day 7 of differentiation) showed similar values, within one order of kPa (Fig. 4m and Supplementary Fig. 22 ). Since the TIP-derived cell fibers were not controlled for tenderness, flavor, or additional nutrient components in this study, these factors will need to be addressed to produce customer-oriented cultured meat. Engineered steak construction by assembly of muscle, fat, and vascular cell fibers The assembly of the TIP-derived cell fibers was attempted to demonstrate the construction of the cultured steak. To mimic the structure of commercial beef, we first took a cross-sectional image of Wagyu stained for sarcomeric α-actinin and laminin, which marks muscle as double-positive and adipose as laminin-only positive (Fig. 5a , left). We aimed to produce a cultured steak with dimensions of approximately 5 mm × 10 mm × 5 mm (W × L × H), and from the Wagyu image we made a model pattern showing the required number of each muscle, fat, and blood capillary cell fiber as well as their arrangement (Fig. 5a , right). The diameters of the cell fibers obtained by TIP were measured to be approximately 500, 760, and 600 µm, respectively, which meant that the required numbers of each cell fiber were 42, 28, and 2, respectively. To distinguish the cell fibers, muscle and vascular cell fibers were stained red using food coloring, leaving the fat cell fibers white. 
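The reported counts and diameters allow a quick geometric sanity check of the assembly. The sketch below, assuming circular fiber cross-sections and ignoring packing gaps (an idealization, not a calculation from the paper), sums the cross-sectional areas of 42 muscle (500 µm), 28 fat (760 µm), and 2 vessel (600 µm) fibers against the ~5 mm × 5 mm construct footprint.

```python
import math

# Reported fiber counts and measured diameters (in mm) from the assembly
fibers = {"muscle": (42, 0.50), "fat": (28, 0.76), "vessel": (2, 0.60)}

# Cross-sectional area per fiber type, assuming circular cross-sections
areas = {k: n * math.pi * (d / 2) ** 2 for k, (n, d) in fibers.items()}
total = sum(areas.values())

print(f"total fiber cross-section: {total:.1f} mm^2 (footprint ~25 mm^2)")
print(f"fat area fraction: {areas['fat'] / total:.0%}")
```

Under these assumptions the fibers sum to roughly 21–22 mm², which plausibly fits the ~25 mm² footprint once packing is accounted for, and the fat fibers dominate the cross-sectional area, consistent with a marbled, sirloin-like target rather than lean rump.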
After physically stacking the cell fibers according to the model image, they were treated with transglutaminase, a common food cross-linking enzyme, to accelerate the assembly over two days at 4 °C (Supplementary Fig. 23 ). The final product is shown in Fig. 5b , and a cross-sectional image was taken to verify that the structure was analogous to Wagyu (Fig. 5c ), demonstrating the feasibility of TIP-based engineered steak fabrication. Fig. 5: Assembly of fibrous muscle, fat, and vascular tissues into cultured steak. a Assembly schematic (right) based on a sarcomeric α-actinin- (blue) and laminin- (brown) stained image (left) of the commercial meat. The diameters of the fibrous muscle, fat, and vascular tissues are assumed to be about 500, 760, and 600 µm, respectively. Scale bar, 1 mm. b, c Optical images of the cultured steak made by assembling muscle (42 ea.), fat (28 ea.), and vascular (2 ea.) tissues, at ( b ) the top and ( c ) the cross-sectional view of the dotted-line area. Muscle and vascular tissues were stained with carmine (red color), but fat tissue was not. Scale bars, 2 mm. Full size image Discussion In this research, we reported a technology for constructing a whole-cut meat-like tissue with muscle, adipose, and blood capillary cell fibers composed of edible bovine cells. Although cultured meat using livestock cells is drawing huge attention, only a few studies give information on cellular behavior, and the cells used were primary cells 4 , 7 , 46 . In our model, after isolation and purification of the bSCs and bADSCs, we first verified their cell behaviors: the proliferation and differentiation of the bSCs (Fig. 2a–c ) and the adipogenesis (Fig. 2d–h ) and differentiation into endothelial cells (Fig. 2i–k ) of the bADSCs, depending on media conditions. 
Although we used a p38 inhibitor to increase the available number of bSCs 7 , decreased differentiation capability and proliferation rate of the bSCs after passage 8 were observed, which will have to be addressed for scalable production of cultured steak in the future. Satellite cells are known to undergo asymmetric or symmetric cell divisions to regulate their populations 47 , so an approach based on the division mechanism of satellite cells will be necessary. The use of ADSCs for endothelial differentiation allowed us to avoid the direct isolation of bovine endothelial cells, which could have added a limiting step to the full process. Moreover, their differentiation into adipocytes, while previously achieved, remains little studied. In particular, the induction of the late marker FABP4 had not been reported, although its role in bovine adipogenesis was found to be important for the induction of bovine lipid metabolism-related genes 48 , 49 . Our study thus provides additional information on both adipogenic and endothelial differentiation from bovine stem cells, which can have further applications in meat-related fields. Furthermore, it was shown that resistance to the contraction force during culture of the bSC-derived cell fiber was essential to realize highly aligned muscle fibrils (Fig. 3c–e ). A modified supporting bath-assisted cell printing method, tendon-gel-integrated printing (TIP), was thus developed, in which the collagen gel-based tendon tissues withstand the cell traction force during bSC differentiation, leading to a well-maintained fibrous structure and cell alignment (Fig. 4a–f ). Comparisons of the cell density (Fig. 4l ), compressive modulus (Fig. 4m ), and water content (Supplementary Fig. 21 ) showed the gap between TIP-derived and commercial muscle and fat cell fibers. 
We demonstrated engineered steak-like tissue, analogous to the structure of commercial beef, through the manual assembly of muscle, adipose, and blood capillary cell fibers produced by TIP (Fig. 5a–c ). To our knowledge, this is the first report demonstrating the fabrication of a whole-cut cultured meat-like tissue composed of three types of primary bovine cells isolated from an edible meat block and modeled on a real meat's structure. Since the demonstrated cultured steak-like tissue is a small piece and inedible, further elaboration will be required, with consideration of the scalability of TIP-based cell printing and the edibility of the culture- and cell-printing-related materials in the future. Beyond printing bovine cells for cultured meat, we expect that TIP will also benefit muscle tissue engineering applications. Methods Isolation and purification of bSCs and bADSCs bSCs were isolated from 160 g of fresh masseter muscle samples (within 6 h of euthanasia) of 27-month-old Japanese black cattle obtained at the slaughterhouse (Tokyo Shibaura Zouki, Tokyo, Japan, and JA ZEN-NOH Kanagawa, Kanagawa, Japan). The freshly harvested bovine muscle was kept on ice, transferred to a clean bench, and washed with cold 70% ethanol for 1 min, followed by two washes with cold 1x PBS. Then, the fat tissue was discarded, and the remaining muscle was cut into small pieces with a knife and minced mechanically with a food processor. The minced bovine muscle was washed with cold 1x PBS containing 1% penicillin–streptomycin (PS) (Lonza, 17-745E) for 1 min. The washed muscle was transferred to a bottle and mixed with 160 ml of 0.2% collagenase II (Worthington, CLS-2) in DMEM (Invitrogen, 41966-29) supplemented with 1% PS. The bottle was incubated for 1.5 h at 37 °C with shaking every 10 min. After digestion, 160 mL of 20% FBS in DMEM supplemented with 1% PS was added and mixed well. The mixed solution was centrifuged for 3 min at 80 g and 4 °C. 
Floating tissues in the supernatant after centrifugation were removed with tweezers, and the collected supernatant was kept on ice as a mononuclear cell suspension. The precipitated debris was mixed with 80 ml of cold 1% PS in PBS 1x and centrifuged for 3 min at 80 g and 4 °C. The supernatant was collected again and combined with the former mononuclear cell suspension. The cells were then filtered through a 100-μm cell strainer. After centrifugation for 5 min at 1500 g and 4 °C, the cells were suspended in 160 ml of cold DMEM with 20% FBS and 1% PS and filtered through a 100-μm cell strainer followed by a 40-μm cell strainer. The cells were then centrifuged for 5 min at 1500 g and 4 °C. The precipitated cells were incubated with 8 ml of erythrocyte lysis buffer (ACK, 786-650) for 5 min on ice. The cells were then washed twice with cold PBS 1x supplemented with 1% PS, and the cell pellet was mixed with FBS supplemented with 10% dimethyl sulfoxide and stored at −150 °C. The frozen cells were thawed in a 37 °C water bath and washed twice with cold PBS 1x. The cells were suspended in F10 medium (Gibco, 31550-023) containing 20% FBS, 5 ng/mL bFGF (R&D, 233-FB-025), and 1% PS supplemented with 10 μM p38i (Selleck, S1076), and then seeded at 1.1 × 10 5 cells/well in 6-well cell culture plates (Corning) coated with 0.05% bovine collagen type I (Sigma, C4243). The cells were cultured with medium changes every two or three days and were passaged when they reached 60% confluency, until passage 3 for cell sorting by flow cytometry. The cells were suspended in FACS buffer (1% BSA in PBS 1x) and stained with APC anti-human CD29 Antibody (BioLegend, 303008, TS2/16, dilution 1/40), PE-Cy7 anti-human CD56 (BD, 335826, NCAM16.2, dilution 1/40), FITC anti-sheep CD31 (BIO-RAD, MCA1097F, CO.3E1D4, dilution 1/40), and FITC anti-sheep CD45 (BIO-RAD, MCA2220F, 1.11.32, dilution 1/40) for 30–45 min on ice in the dark.
After antibody incubation, the cells were washed twice with cold PBS 1x and reconstituted in PBS 1x with 2% FBS. The CD31 − CD45 − CD56 + CD29 + cells were isolated with a Sony Cell Sorter SH800S as bSCs. Subcutaneous bovine adipose tissues were isolated at the slaughterhouse on the day of slaughter and sent to the laboratory with an ice pack. After the 24-h delivery, the tissues were first washed in PBS 1x containing 5% penicillin-streptomycin-amphotericin B (Wako, 161-23181). Then, 8–10 g of tissue was separated into fragments to fill the six wells of a 6-well plate and minced into pieces of around 1 mm 3 using autoclaved scissors and tweezers, directly in 2 mL of collagenase solution containing both collagenase type I (Sigma Aldrich, C0130) and type II (Sigma Aldrich, C6885), at 2 mg/mL each, in DMEM with 1% antibiotics–antimycotic mixed solution (Nacalai, 02892-54), 0% FBS, and 5% BSA (sterilized by 0.2-µm filtration). After one hour of incubation at 37 °C with 250-rpm agitation, DMEM (Nacalai, 08458-16) with 10% FBS and 1% antibiotics was added, and the lysate was filtered through a sterilized 500-µm iron-mesh filter before being centrifuged for 3 min at 80 g. The upper layer of mature adipocytes was then removed, and the pellet containing the stromal vascular fraction (SVF) was washed twice in PBS 1x with 5% BSA and 1% antibiotics and once in complete DMEM (10% FBS + 1% antibiotics), with 3 min of centrifugation at 80 g between each wash. Finally, the pellet containing the SVF cells was resuspended in DMEM and seeded in a 10-cm dish for expansion, with daily medium changes for three days, and the cells were passaged when they reached 80% confluency. After P1, the remaining adherent cells were considered bADSCs and were expanded in DMEM.
2D cell culture bSCs were cultured in high-glucose DMEM (Gibco, 10569-010) containing 1% antibiotic–antimycotic mixed solution (Nacalai, 02892-54), 10% FBS, 4 ng/mL fibroblast growth factor (Fujifilm, 067-04031), and 10 µM p38 inhibitor (Selleck, SB203580) for proliferation, and bSCs within P8 were used for all experiments. For differentiation induction of bSCs, the medium was changed to DMEM containing 2% horse serum and 1% antibiotic–antimycotic mixed solution. To investigate the cell proliferation and differentiation ratio, 50,000 cells were seeded on 24-well plates, and the differentiation medium was replaced every two days after differentiation induction. Culture and differentiation of bADSCs to bovine endothelial cells To monitor the endothelial differentiation of bADSCs, bADSCs at P1 previously expanded in DMEM were seeded at 5000 cells on 48-well plates and kept for seven days in eight different conditions (DMEM with 1, 5, or 10% HS, DMEM with 10% FBS, DMEM with 10% human serum, DMEM with 10% calf serum, F12K with 10% HS, and F12K with 10% FBS, all containing 1% antibiotics–antimycotic mixed solution), renewing the medium every 2–3 days. Collagen microfiber preparation The collagen microfibers (CMF) were first prepared from a collagen type I sponge (Nipponham) after dehydration-condensation crosslinking at 200 °C for 24 h to prevent dissolution in water-based solutions. The crosslinked collagen sponge was mixed with ultra-pure water at a concentration of 10 mg/mL (pH = 7.4, 25 °C) and homogenized for 6 min at 30,000 rpm. The solution was then ultrasonicated (Ultrasonic processor VC50, SONICS) in an ice bath for 100 cycles (one cycle comprising 20 s of ultrasonication and 10 s of cooling) to make smaller fragments, which induce better vascularization, and was filtered (40-µm filter, microsyringe 25-mm filter holder, Merck) before being freeze-dried for 48 h (FDU-2200, EYELA) 50 . The obtained CMF were kept in a desiccator at RT.
3D gel-embedded culture To construct the adipose tissues by 3D culture, CMF were first weighed and washed in DMEM without FBS by centrifugation for 1 min at 16,083 g to obtain a final concentration in the tissues of 1.2 wt%. The bADSCs were added after trypsin detachment (always used at P1–5) and centrifuged for 1 min at 1970 g to obtain a final cell concentration of 5 × 10 6 cells/mL. The pellet containing CMF and bADSCs was then mixed in a fibrinogen (Sigma Aldrich, F8630) solution at 6 mg/mL final concentration (stock solution at 50 mg/mL prepared in DMEM with 1% antibiotics), and thrombin solution (Sigma Aldrich, T4648) was added to a final concentration of 3 U/mL (stock solution at 200 U/mL prepared in DMEM with 10% FBS and 1% antibiotics). Finally, 2 µL drop tissues were seeded in a 96-well plate (Iwaki, 3860-096) and gelated for 15 min in the incubator at 37 °C. Then 300 µL of medium (DMEM with 10% FBS and 1% antibiotics) was added to the drop tissues. For adipogenic differentiation, three days of proliferation were first necessary to allow the bADSCs to reach the cell–cell interactions required for adipogenesis 51 .
The medium was then switched to DMEM with 10% FBS containing different adipogenic components for comparison: Rosiglitazone (at 20 µM final concentration, Sigma Aldrich, R2408), Insulin (at 10 µg/mL final concentration, Sigma Aldrich, I6634), Troglitazone (at 40 µM final concentration, Sigma Aldrich, T2573), Pristanic acid (at 50 µM final concentration, Funakoshi, 11-1500), Phytanic acid (at 50 µM final concentration, Sigma Aldrich, P4060), Erucic acid (at 50 µM final concentration, Sigma Aldrich, 45629-F), Elaidic acid (at 50 µM final concentration, Sigma Aldrich, 45089), Oleic acid (at 50 µM final concentration, Sigma Aldrich, O1383), Palmitoleic acid (at 50 µM final concentration, Sigma Aldrich, 76169), Myristoleic acid (at 50 µM final concentration, Sigma Aldrich, 41788), TGF type I receptor activin-like kinase 5 inhibitor (ALK5i II, 2-[3-(6-methyl-2-pyridinyl)-1H-pyrazol-4-yl]-1,5-naphthyridine, at 1–10 µM final concentration, Cayman, 14794), or Bovine Endothelial Cell Growth Medium (Cell Applications Inc., B211K-500). The seven free fatty acids were compared individually as well as in mixtures (pristanic and phytanic acids together, or the five remaining free fatty acids together), following previously published adipogenesis inducers for bovine ADSCs 29 , 30 . The 300 µL of differentiation medium was then renewed every 2–3 days. Supporting bath-assisted 3D printing (SBP) and culture G-GG was produced by preparing 1 wt% gellan gum (Sansho) in PBS 1x, grinding it with a rotor–stator homogenizer for 6 min at 30,000 rpm, centrifuging at 2837 g for 3 min, and removing the supernatant.
G-Gel was produced by preparing 4.5 wt% porcine gelatin (Sigma Aldrich, G1890) in DMEM containing 1% antibiotic–antimycotic mixed solution and 10% FBS, leaving it at 4 °C overnight for gelation, adding the same volume of DMEM to the gelatin gel, grinding it with a rotor–stator homogenizer for 2 min at 30,000 rpm, centrifuging at 2837 g for 3 min, and removing the supernatant. Gellan gum and gelatin were conjugated with fluorescein to measure particle size. After the fabrication of G-GG and G-Gel with the fluorescein-conjugated material, fluorescence images were obtained. The major and minor lengths of 60 particles were measured in G-GG and G-Gel, and the particle size was determined by averaging the major and minor lengths. The bioink was prepared to contain 5 × 10 7 cells/mL of bSCs in a mixture of 20 mg/mL fibrinogen (Sigma Aldrich, F8630) in DMEM and Matrigel (Corning, 356234) (6:4, v/v). The supporting bath was prepared by mixing G-GG or G-Gel with 10 U/mL thrombin (Sigma Aldrich, T4648) before printing. After filling a glass vial with the prepared supporting bath and loading the syringe containing the prepared bioink onto the dispenser instrument (Musashi, Shotmaster 200DS), cell printing was conducted inside the supporting bath, maintaining the syringe and bed parts of the instrument at 4 °C. All parts used for cell printing, such as syringes, nozzles, and containers, were sterilized with 70% ethanol and UV treatment. The nozzle gauge, moving speed, and dispensing speed were 16 G, 1 mm/s, and 2 µL/s, respectively. The printed structures inside the supporting baths were incubated inside a sterile cabinet at RT for 1 h to ensure gelation. After gelation, the G-GG was gently removed by pipetting, the printed structures were immersed in 50 mM Tris-HCl buffer (pH 7.4) at 37 °C for 30 min, and the same process was repeated once more. G-Gel was dissolved by incubation at 37 °C for 2 h.
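The printing parameters above (2 µL/s dispensing while the nozzle travels at 1 mm/s) imply a deposited filament cross-section. A minimal sketch of that arithmetic, assuming, as an idealization not stated by the authors, that all dispensed volume forms a uniform cylindrical filament:

```python
import math

def filament_diameter_mm(dispense_rate_uL_per_s, travel_speed_mm_per_s):
    """Estimate printed filament diameter from volumetric flow and nozzle speed.

    Assumes the dispensed volume forms a uniform cylinder (1 uL == 1 mm^3),
    so cross-section area = flow rate / travel speed.
    """
    cross_section_mm2 = dispense_rate_uL_per_s / travel_speed_mm_per_s
    return math.sqrt(4.0 * cross_section_mm2 / math.pi)

# Parameters reported for the SBP: 2 uL/s dispensing, 1 mm/s travel
print(round(filament_diameter_mm(2.0, 1.0), 2))  # 1.6 (mm)
```

Under this idealization the printed fiber would be roughly 1.6 mm in diameter; in practice, swelling, gel compliance, and cell traction would change the real dimensions.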
The cell fibers obtained after removal of the supporting baths were cultured in the bSC basic medium for 2–3 days, which was then replaced with the differentiation medium. Suspension culture was conducted simply by placing the printed cell fiber on a tissue culture plate, and needle-fixed culture was conducted by fixing both ends of the printed cell fibers onto silicone rubber with a size of 2 cm × 2 cm × 3 mm (W × D × H) placed on a six-well plate. PDMS well fabrication The parts of the PLA molds for the PDMS wells were fabricated with an FDM 3D printer (Creality, Ender-3) after modeling in Fusion360 and slicing in Cura. PDMS (Corning, Sylgard 184) was poured into the assembled PLA mold and cured at 50 °C overnight; the PDMS wells were obtained by removing the PLA molds. TIP and culture 4 wt% CNFs were produced from a collagen sponge (Nipponham, type I & III mixture) based on the method described above. After cutting a small area of the PDMS well's side to make a media flow channel, the well was sterilized by 70% ethanol and UV treatment and then placed on a slide glass. The PDMS well was filled with 4 wt% CNFs, G-Gel, and 4 wt% CNFs at the bottom, middle, and top layers, respectively. Cell printing was conducted in the same way as in the SBP. After printing the cells, the printed area at the top layer of the PDMS well was covered once more with 4 wt% CNFs, incubated at RT for 1 h, then incubated at 37 °C for 2 h to dissolve the G-Gel and induce CNF gelation, and finally placed in a culture container. The bioink for muscle cell fibers was prepared in the same way as in the SBP; for fat fibers, 5 × 10 6 cells/mL of bADSCs were mixed in 1.2 wt% CMF and 20 mg/mL fibrinogen solution in DMEM, and for vascular fibers, 10 7 cells/mL of endothelially differentiated bADSCs were mixed in the same solution.
Live/dead staining Printed cell fibers were stained with 2 µM calcein-AM and 4 µM ethidium homodimer-1 (Invitrogen, L3224) in DMEM at 37 °C for 15 min, followed by rinsing with PBS 1x and fluorescence imaging by confocal microscopy (Olympus, FV3000) with 10x or 30x objective lenses. The imaging conditions were as follows:
Channel 1 (Live cell); 488 nm laser (power 0.1 ~ 2%), Ex: 500 ~ 540 nm, HV: 450 ~ 550, gain 1
Channel 2 (Dead cell); 561 nm laser (power 0.1 ~ 6%), Ex: 610 ~ 710 nm, HV: 450 ~ 550, gain 1
Channel 3 (Nucleus); 405 nm laser (power 2 ~ 5%), Ex: 430 ~ 470 nm, HV: 450 ~ 550, gain 1
To measure the cell viability, at least six images were taken at random. The image-based cell viability (%) was calculated by dividing the number of non-dead-stained nuclei by the total number of nuclei in each image and averaging the results. Nucleus counting was conducted using a particle-analysis plugin in ImageJ. Histological staining Tissues were washed once in PBS 1x and then fixed in 4% paraformaldehyde (Wako, 163-20145) overnight at 4 °C, followed by three washes in PBS 1x. The samples were then maintained in PBS 1x before being mounted in paraffin-embedded blocks. Paraffin-embedded blocks and sections were prepared and hematoxylin/eosin (H/E) stained by the Applied Medical Research Laboratory, Inc. Some pieces of commercial (Wagyu) beef steak were immersed in 4% paraformaldehyde solution overnight at 4 °C, and then in 1/15 M phosphate buffer (pH 7.4) containing 30% sucrose. They were rapidly frozen in dry ice–acetone and cut into 20-µm thick sections. The tissue sections were processed for immunostaining for sarcomeric alpha-actinin and laminin to depict the myotubes of skeletal muscles and the basement membranes of the muscles and adipose tissues.
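The image-based viability metric described above (non-dead-stained nuclei over total nuclei, averaged across at least six images) can be sketched as follows; the per-image counts would come from the ImageJ particle-analysis step, and the function here is an illustration rather than the authors' actual script:

```python
def viability_percent(counts):
    """Average live-cell percentage over several images.

    counts: list of (dead_stained_nuclei, total_nuclei) tuples, one per image,
    as obtained e.g. from ImageJ particle analysis of the nucleus channel.
    """
    per_image = [100.0 * (total - dead) / total for dead, total in counts]
    return sum(per_image) / len(per_image)

# e.g. three randomly taken fields of view
print(viability_percent([(5, 100), (10, 100), (0, 50)]))  # 95.0
```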
The immunostaining was performed using specific antibodies against sarcomeric alpha actinin (abcam, ab9465, EA-53, dilution 1/1000) and laminin (Sigma-Aldrich, L9393, polyclonal, dilution 1/100), and the reaction products were visualized in blue (Vector Laboratories, Vector Blue) and brown (DAKO, DAB+ chromogen), respectively. Immunostaining Immunostaining was conducted following a general process: 4% paraformaldehyde fixation at RT for 15 min or at 4 °C overnight, permeabilization with 0.2% Triton X-100 (Sigma Aldrich, T8787) at RT for 15 min, blocking with 1% BSA (Sigma-Aldrich, A3294) for 30–60 min, incubation with the primary antibody in 1% BSA at RT for 2 h or at 4 °C overnight, incubation with cocktails containing a fluorophore-conjugated secondary antibody and 1 µg/mL TRITC-phalloidin (Sigma Aldrich, P-1951) for actin staining or 100 ng/mL Nile Red (TCI, N0659) for lipid staining at RT for 1 h, and finally 300 nM DAPI (Invitrogen, D21490) counterstaining. Myosin 4 Monoclonal Antibody (eBioscience, 14-6503-82, MF20, dilution 1/500) for bSCs, Anti-CD31 (Wako, M0823, JC70A, dilution 1/100) for bovine endothelial cells, and PPAR gamma (Abcam, ab45036, polyclonal, dilution 1/100) and FABP4 (LSBio, LS–B4227, polyclonal, dilution 1/100) antibodies for bovine adipocytes were used as primary antibodies. Goat anti-Mouse IgG (H+L) Cross-Adsorbed Secondary Antibody, Alexa Fluor 488 (Invitrogen, A-11001, polyclonal, dilution 1/200) and Goat anti-Mouse IgG (H+L) Cross-Adsorbed Secondary Antibody, Alexa Fluor 647 (Invitrogen, A-21235, polyclonal, dilution 1/200) were used as secondary antibodies. All fluorescence images were taken by confocal microscopy. Printed cell fibers were treated with RapiClear 1.52 (SUNJin Lab) at RT for 30 min for deep-tissue imaging. Rheology measurement Viscosity was measured in the controlled-rate mode of a rheometer (Thermo Scientific, HAAKE RheoStress 6000) to verify the thixotropy of G-Gel and G-GG.
The steps were composed of 0.01/s for 30 s, 30/s for 100 s, and 0.01/s for 30 s. UV–Vis A disposable cuvette was filled with 1 mL of 4 wt% CNFs, and the transmittance was measured in temperature-controlled steps with a UV–Vis spectrometer (Jasco, V-670): at 4 °C for 2000 s, at 37 °C for 2000 s, at 4 °C for 2000 s, and then at 37 °C for 2000 s. After each temperature change, photos of the samples were taken. Mechanical test The elastic modulus of the printed fibers was measured with an EZ test instrument (SHIMADZU, EZ/CE 500 N). All fibers, including printed cell fibers and fibrous tissues from commercial meat, were fixed with 4% paraformaldehyde and washed several times before measurement. After preparing the different printed fibers, several fibers were stacked on a 24-well insert so that the sample surface became larger than the geometry's surface area at RT. A spherical mold (5 mm in diameter) was used to measure the elastic modulus at a head-moving speed of 1.0 mm/min. The compressive test protocol increased the engineering strain until the testing stress reached 200 mN. The modulus was calculated automatically by the EZ test software in the elastic range (10–20 mN). The total sample size was n = 3 for each fiber type. Water-content measurement The water content was calculated from the mass before and after freeze-drying. Briefly, the printed fibers in PBS 1x were taken out, the surface liquid was removed, and the wet weight (W wet ) of the fibers was measured on a balance. The dry weight (W dry ) of the fibers was measured after freeze-drying (24 h). The water content is given by the following formula (1): $$V_{\mathrm{water}}=\frac{W_{\mathrm{wet}}-W_{\mathrm{dry}}}{W_{\mathrm{wet}}}\times 100$$ (1) DNA content measurement Commercial beef was bought from the supermarket, and intramuscular fat tissues as well as muscle tissue parts were cut into small fibers of the same size as the printed fibers (Fig. S19 ).
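The water-content formula (1) above can be implemented directly; a minimal sketch:

```python
def water_content_percent(w_wet, w_dry):
    """Water content V_water = (W_wet - W_dry) / W_wet * 100 (formula 1).

    w_wet: wet weight after removing surface liquid; w_dry: weight after
    24 h of freeze-drying (any consistent mass unit).
    """
    return (w_wet - w_dry) / w_wet * 100.0

# e.g. a fiber weighing 50 mg wet and 5 mg after freeze-drying (illustrative values)
print(water_content_percent(50.0, 5.0))  # 90.0
```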
One commercial or printed fiber was placed per microcentrifuge tube, and the tissues were lysed following the DNeasy Blood & Tissue Kit (QIAGEN, 69504) to extract their DNA content, which was quantified with the Nanodrop N1000 device (Thermo Fisher Scientific). The DNA amount was then normalized by the weight of the samples before lysis. Bovine-cell fiber assembly Each cell fiber (muscle, fat, and vascular fibers) was stacked on a plastic container one by one according to the model image obtained from the commercial meat's histological image. After dispensing transglutaminase solution (10 U/mL, Ajinomoto) onto the stacked cell fibers, they were wrapped up and kept at 4 °C for two days. To take a cross-sectional picture, the assembly was cut and the plastic wrap peeled off. Image processing and analysis For the evaluation of bSC differentiation in 2D culture, at least 4 fluorescence images of the samples stained with DAPI and MF20 were taken, each about 6 mm × 6 mm, and the number of nuclei in the whole area and the MHC-positive area were measured with ImageJ. For the measurement of the cell alignment degree, we randomly selected fluorescence images (actin stained with TRITC-phalloidin) after Z-stack imaging of one printed tissue, chose three 2D images showing clear cell morphologies, and measured the angle of individual cells to the printed cell fiber's major axis using ImageJ. The number of measured cells in each condition was in the range of 27–56. For the calculation of lipid production from bADSCs, Z-stack images of the 3D tissues were taken (3 slices with a 50-µm step) with the same exposure time, brightness, and contrast; the Z-slice intensities in the Nile Red and Hoechst channels of each 3D tissue were then summed with ImageJ. The relative intensity was calculated as the total Nile Red intensity divided by the total Hoechst intensity in each 3D tissue. The 4x lens (0.16 dry, WD 13.0) of the confocal quantitative image cytometer CQ1 (Yokogawa, Tokyo, Japan)
was used, with the following parameters:
Channel 1 (Nile Red); 561-nm laser (power 20%), Ex: 617 ~ 673 nm, exposure time 500 ms, Bin 1, Gain 16-bit, low noise, and high well capacity, contrast enhancement maintained constant at 100–3200.
Channel 2 (Hoechst); 405-nm laser (power 100%), Ex: 447 ~ 460 nm, exposure time 500 ms, Bin 1, Gain 16-bit, low noise, and high well capacity, contrast enhancement maintained constant at 320–1300.
For the differentiation from bADSCs to endothelial cells, the same method was applied to 2D images taken with the CQ1 confocal, using the ratio of CD31 total fluorescence intensity to Hoechst total fluorescence intensity, with the same parameters for all compared conditions. The 3D-reconstructed images and the printed cell fibers' 3D movie were obtained with Imaris software (Bitplane). Gene expression Gene expression was analyzed using real-time quantitative polymerase chain reaction (RT-qPCR). Adipose drop tissues at days 0, 7, and 14 of differentiation, as well as bioprinted fiber samples at day 0 (before bioprinting) and at day 6 (myoblast fibers), day 7 (endothelial fibers), or day 14 (adipocyte fibers), were washed in PBS, and total RNA extraction was carried out using the PureLink RNA Micro Kit (Invitrogen, Waltham, USA), with the DNase step, following the manufacturer's instructions. The samples' RNA content was quantified with a Nanodrop spectrometer (N1000, Thermo Fisher Scientific, Waltham, USA). For RT-qPCR, the RNA samples were first reverse-transcribed into cDNA using the iSCRIPT cDNA synthesis kit (Bio-Rad, Hercules, USA), before being amplified using Taqman probes and reagents (Taqman Fast Advanced Mix, Taqman gene expression assays (FAM): MYH2 (Assay ID: Bt03223147_gH), FABP4 (Assay ID: Bt03213820_m1), CD31/PECAM1 (Assay ID: Bt03215106_m1), PPARG (Assay ID: Bt03217547_m1), and PPIA (Assay ID: Bt03224615_g1), Thermo Fisher Scientific, Waltham, USA).
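The relative lipid intensity described in the image-analysis section above (summed Nile Red Z-slice intensities divided by the summed Hoechst intensities for each 3D tissue) reduces to a simple ratio; a sketch, with the per-slice sums assumed to come from ImageJ:

```python
def relative_lipid_intensity(nile_red_slice_sums, hoechst_slice_sums):
    """Total Nile Red intensity normalized by total Hoechst intensity
    for one 3D tissue (e.g. 3 Z-slices taken with a 50-um step)."""
    return sum(nile_red_slice_sums) / sum(hoechst_slice_sums)

# Illustrative slice sums for one drop tissue
print(relative_lipid_intensity([120.0, 150.0, 130.0], [200.0, 200.0, 0.0]))  # 1.0
```

The same ratio structure applies to the CD31/Hoechst normalization used for the endothelial differentiation comparison.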
The cDNA synthesis and RT-qPCR reactions were conducted using the StepOnePlus Real-Time PCR System (Thermo Fisher Scientific, Waltham, USA), and the gene expression was normalized to PPIA as the housekeeping gene. Western blot Relative protein expression was assessed by western blot. Myoblast (days 0 and 6), endothelial (days 0 and 7), and adipocyte (days 0 and 14 of differentiation) fibers were washed with PBS and homogenized by pipetting in lysis buffer (RIPA Buffer R0278, Sigma-Aldrich, with Protease Inhibitor Cocktail 25955-24, Nacalai-Tesque), before being quantified with a BCA Protein Assay Kit (23225, ThermoFisher Scientific). About 4–10 μg of protein was resolved on each lane of 4–15% protein gels (Mini-PROTEAN TGX Stain-Free 4568084, BIORAD, with Running Buffer for SDS-Tris-Glycine (10X), Cosmobio), electrotransferred onto a PVDF membrane (Immun-Blot 1620-0176, BIORAD), and probed using specific antibodies: Myosin 4 Monoclonal antibody (eBioscience, 14-6503-82, dilution 1/1000), PPAR gamma antibody (Abcam, ab45036, polyclonal, dilution 1/1000), FABP4 antibody (LSBio, LS–B4227, polyclonal, dilution 1/1000), and β-Actin antibody (Sigma-Aldrich, A5441, AC-15, dilution 1/3000). Proteins were detected with secondary antibodies conjugated to horseradish peroxidase (Anti-Mouse IgG 170-6516 and Anti-Rabbit IgG 170-6515, dilution 1/5000, and Precision Protein StrepTactin-HRP Conjugate 161-0380, BIORAD, dilution 1/5000). Signal detection was performed with an enhanced chemiluminescence detection reagent (Clarity Western ECL Substrate 1705060, BIORAD) using the ChemiDoc Imaging System (BIORAD). Molecular weights were determined by comparison with the migration of prestained protein standards (Precision Plus Protein Kaleidoscope Standards 161-0395, BIORAD). Quantitative estimation of the bands' intensity was performed using ImageJ software.
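The PPIA normalization mentioned above for the RT-qPCR data is commonly implemented as the 2^−ΔΔCt method for Taqman assays; the exact formula is not stated in the text, so the following sketch is an assumption about the standard calculation, not the authors' confirmed pipeline:

```python
def relative_expression(ct_target, ct_ppia, ct_target_ref, ct_ppia_ref):
    """Fold change of a target gene via the 2^-ddCt method.

    Normalizes the target Ct to the PPIA (housekeeping) Ct, then to a
    reference sample (e.g. day 0 of differentiation).
    """
    d_ct = ct_target - ct_ppia              # normalize to housekeeping gene
    d_ct_ref = ct_target_ref - ct_ppia_ref  # same for the reference sample
    return 2.0 ** -(d_ct - d_ct_ref)

# One cycle earlier than the reference after normalization -> 2-fold upregulation
print(relative_expression(24.0, 20.0, 25.0, 20.0))  # 2.0
```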
Statistical analysis Statistical analysis was performed using EzAnova (version 0.98, University of South Carolina, Columbia, SC, USA) and GraphPad Prism 8 (version 8.4.3 (686), San Diego, USA) software. Details of n, corresponding to the number of independent experiments using primary bovine cells isolated from different bovine donors, or to the number of independent samples, are displayed on each graph in the figures and in the captions, together with the exact p values. For paired samples, when the same cells were measured at different times (Fig. 2f and h ), a one-way ANOVA with a repeated-measures design was used, with the Greenhouse–Geisser correction and Tukey's HSD post hoc test for the multiple comparisons. When cells were subjected to different treatments (Fig. 2e, i, 4l , Supplementary Fig. 3 , and Supplementary Fig. 15 ), a classic one-way ANOVA was used, with the Brown–Forsythe and Bartlett's tests as well as Tukey's HSD post hoc test for the multiple comparisons. When time and treatments were both involved (Fig. 2d ), a two-way ANOVA was applied with time set as repeated measures and the treatment as a classic, unpaired factor, leading to pairwise comparisons, with the Greenhouse–Geisser and Huynh–Feldt corrections as well as Tukey's HSD post hoc test for the multiple comparisons. When two different treatments were applied to the cells (Fig. 2j ), a classic two-way ANOVA was performed, with the Šidák post hoc test for the multiple comparisons. ANOVA was used when more than two conditions were compared with each other; for only two conditions, a pairwise t -test was performed (Fig. 4f , i, j, m, Supplementary Fig. 17 , and Supplementary Fig. 21 ). EzAnova software was used only for the Fig. 2d analysis; for the other statistical analyses, GraphPad Prism was used. Error bars represent SD. Differences were considered significant when p < 0.05.
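The core quantity behind the one-way ANOVAs above is the F statistic, the ratio of between-group to within-group mean squares. A dependency-free sketch for illustration (the authors used GraphPad Prism and EzAnova, not this code, and the post hoc corrections are not reproduced here):

```python
def one_way_anova_f(groups):
    """F statistic for a one-way ANOVA over a list of sample groups."""
    all_values = [x for g in groups for x in g]
    grand_mean = sum(all_values) / len(all_values)
    group_means = [sum(g) / len(g) for g in groups]
    # Between-group sum of squares and degrees of freedom
    ss_between = sum(len(g) * (m - grand_mean) ** 2
                     for g, m in zip(groups, group_means))
    df_between = len(groups) - 1
    # Within-group sum of squares and degrees of freedom
    ss_within = sum((x - m) ** 2
                    for g, m in zip(groups, group_means) for x in g)
    df_within = len(all_values) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

# Two illustrative groups of two samples each
print(one_way_anova_f([[0.0, 1.0], [2.0, 3.0]]))  # 8.0
```

The F value is then compared against the F distribution with (df_between, df_within) degrees of freedom to obtain the p value.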
When no marks are shown on the graphs, it means that the differences are not significant. Reporting summary Further information on research design is available in the Nature Research Reporting Summary linked to this article. Data availability All relevant data supporting the key findings of this study are available within the article and its Supplementary Information files or from the corresponding author upon reasonable request. Source data are provided with this paper.
Scientists from Osaka University used stem cells isolated from Wagyu cows to 3D-print a meat alternative containing muscle, fat, and blood vessels arranged to closely resemble conventional steaks. This work may help usher in a more sustainable future with widely available cultured meat. Wagyu literally translates to "Japanese cow," and is famous around the globe for its high content of intramuscular fat, known as marbling or sashi. This marbling gives the beef its rich flavors and distinctive texture. However, the way cattle are raised today is often considered unsustainable in light of its outsized contribution to climate emissions. Currently available "cultured meat" alternatives consist primarily of poorly organized muscle fiber cells that fail to reproduce the complex structure of real beef steaks. Now, a team of scientists led by Osaka University has used 3D printing to create synthetic meat that looks more like the real thing. "Using the histological structure of Wagyu beef as a blueprint, we have developed a 3D-printing method that can produce tailor-made complex structures, like muscle fibers, fat, and blood vessels," lead author Dong-Hee Kang says. To achieve this, the team started with two types of stem cells, called bovine satellite cells and adipose-derived stem cells. Under the right laboratory conditions, these "multipotent" cells can be coaxed to differentiate into every type of cell needed to produce the cultured meat. Individual fibers, including muscle, fat, or blood vessels, were fabricated from these cells using bioprinting. The fibers were then arranged in 3D, following the histological structure, to reproduce the structure of real Wagyu meat, which was finally sliced perpendicularly, in a similar way to the traditional Japanese candy Kintaro-ame. This process made the reconstruction of the complex meat tissue structure possible in a customizable manner.
"By improving this technology, it will be possible to not only reproduce complex meat structures, such as the beautiful sashi of Wagyu beef, but to also make subtle adjustments to the fat and muscle components," senior author Michiya Matsusaki says. That is, customers would be able to order cultured meat with their desired amount of fat, based on taste and health considerations.
10.1038/s41467-021-25236-9
Physics
Mini X-ray source with laser light
"Quantitative X-ray phase-contrast microtomography from a compact laser-driven betatron source." Nature Communications, DOI: 10.1038/ncomms8568, 20 July 2015 Journal information: Nature Communications
http://dx.doi.org/10.1038/ncomms8568
https://phys.org/news/2015-08-mini-x-ray-source-laser.html
Abstract X-ray phase-contrast imaging has recently led to a revolution in resolving power and tissue contrast in biomedical imaging, microscopy and materials science. The necessary high spatial coherence is currently provided either by large-scale synchrotron facilities with limited beamtime access or by microfocus X-ray tubes with rather limited flux. X-rays radiated by relativistic electrons driven by well-controlled high-power lasers offer a promising route to a proliferation of this powerful imaging technology. A laser-driven plasma wave accelerates and wiggles electrons, giving rise to brilliant keV X-ray emission. This so-called betatron radiation is emitted in a collimated beam with excellent spatial coherence and remarkable spectral stability. Here we present a phase-contrast microtomogram of a biological sample using betatron X-rays. Comprehensive source characterization enables the reconstruction of absolute electron densities. Our results suggest that laser-based X-ray technology offers the potential for filling the large performance gap between synchrotron-based and current X-ray tube-based sources. Introduction Since the discovery of X-ray radiation and its powerful imaging capabilities by Röntgen, X-rays have become part of our daily life in medicine, industry and research. The conventional technique of absorption imaging utilizes the large variations in X-ray absorption (that is, in the imaginary part β of the refractive index n = 1 − δ + iβ) across matter of different thickness and composition, limiting its usefulness to structures with high absorption gradients. However, the real part of the refractive index, δ, leads to phase variations of the X-rays depending on the sample's electron density. The latter can be explored with much higher sensitivity by phase-contrast techniques 1 , 2 , which are far superior to conventional radiography for detecting structures in soft tissue with its rather homogeneous absorption profile.
It is thus ideally suited for three-dimensional (3D) investigations of (pathologic) tissue biopsies in medical research and diagnostics 3 , but also finds applications in materials science. Computed tomography using phase-contrast images taken from different perspectives can provide the full 3D structure of the object with high resolution and enhanced contrast. Phase-contrast imaging is complementary to coherent diffractive imaging 4 , which is employed for tomographic reconstruction in the far field. Phase-contrast imaging can be implemented by free-space propagation (also known as in-line holography) 5 , crystal analyser-based 2 and crystal 3 , 6 or grating 7 interferometer-based techniques. For microscopy applications with micron-scale resolution, propagation-based phase-contrast imaging is the method of choice. It relies neither on additional X-ray optics nor on substantial temporal coherence of the source. The only requirement is that the transverse coherence length, given by l t = λ·R/σ, is larger than the first Fresnel zone radius √(λ·D) 8 . Here λ is the wavelength, R is the source–detector distance, σ is the source size, and D = d·l/(d+l) the defocusing distance, with l the source–sample and d the sample–detector distance. This is met by third-generation synchrotrons, but their size and cost prevent their proliferation in hospitals and research institutions. Microfocus X-ray tubes provide the desired spatial coherence but suffer from a modest X-ray flux, implying lengthy exposure times. To spark off widespread application of this powerful technique, a compact high-brilliance source is needed. Laser-wakefield acceleration of electrons 9 (LWFA; see Methods) has already been studied as a possible future alternative. LWFA electron beams exhibit transverse emittances comparable to the best conventional linear accelerators and bunch durations of ∼ 5 fs full-width at half-maximum (FWHM) 10 , 11 , rendering them unique among compact sources.
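The spatial-coherence condition above can be put into numbers. A sketch with illustrative values: the wavelength corresponds to the 4.9 keV spectral peak reported later in the paper, while the geometry (R, σ, l, d) uses placeholder values, not the actual experimental setup:

```python
def transverse_coherence_length(wavelength, source_detector_dist, source_size):
    """l_t = lambda * R / sigma (all lengths in metres)."""
    return wavelength * source_detector_dist / source_size

def defocusing_distance(source_sample_dist, sample_detector_dist):
    """D = d * l / (d + l), the effective propagation distance."""
    return (sample_detector_dist * source_sample_dist /
            (sample_detector_dist + source_sample_dist))

wavelength = 2.53e-10   # m, ~4.9 keV photons
R = 2.0                 # m, source-detector distance (placeholder)
sigma = 2e-6            # m, micron-scale source size (placeholder)
l_t = transverse_coherence_length(wavelength, R, sigma)
print(l_t)  # ~2.5e-4 m: hundreds of microns of transverse coherence
```

A micron-scale source thus provides ample transverse coherence at metre-scale distances, which is why LWFA betatron sources are attractive for propagation-based phase contrast.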
During the acceleration process the electrons are also wiggled transversely by the strong radial fields of the plasma wave, causing them to emit a forward-directed, incoherent X-ray beam, referred to as betatron radiation 12 , 13 , 14 (see Methods). Recent proof-of-principle experiments demonstrated the potential of laser-driven betatron sources for recording single-shot X-ray phase-contrast images based on the free-space propagation technique 15 . Here we demonstrate that this type of source, even at this early stage of development, can be readily used for real-world applications. Due to advances in LWFA stability achieved by using a turbulence-free steady-state gas flow target 16 , 17 , we are able to produce electron bunches with high charge (400 pC) and low fluctuations in absolute charge (30%) and in cut-off energy (10%) for continuous operation over several hours. We also show experimentally that the spectral shape of the betatron radiation is largely insensitive to fluctuations of the electron spectrum. This leads to the production of a high-flux X-ray beam with remarkable spectral stability, ideally suited for multiexposure imaging. Using these beams we were for the first time able to record and quantitatively reconstruct a phase tomogram of a complex object. As an illustrative example, we chose a dried insect (Chrysoperla carnea, green lacewing). We achieved this by evaluating a set of 1,487 single-shot phase-contrast images taken from various angles ( Supplementary Movie 1 ). The microradiographs feature a field of view of 7.5 × 6.9 mm 2 and a resolution of 6 μm, limited by the CCD (charge-coupled device) area and pixel size, respectively. For quantitative reconstruction of the tomogram 18 from these images, we performed a careful characterization of the X-ray source in terms of its transverse dimensions and spectrum, which is essential for the reconstruction of the absolute electron densities within the object of interest 19 .
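The quoted 6-μm resolution follows directly from the projection geometry; a minimal sketch, using the distances and pixel pitch given in the Methods:

```python
# Projection geometry quoted in the text:
# magnification M = (l + d)/l, detector-limited resolution = pixel/M.
l, d = 0.73, 1.99            # source-sample, sample-detector distances (m)
pixel = 22.5e-6              # CCD pixel pitch (m)
M = (l + d) / l              # geometric magnification
resolution = pixel / M       # object-plane sampling
print(f"M = {M:.2f}, resolution = {resolution * 1e6:.1f} um")
```

This reproduces the 3.7× magnification and the ~6 μm pixel-limited resolution stated above.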
Results Source characterization In the experiment, intense short laser pulses were focused at the entrance of a hydrogen-filled gas cell (see Fig. 1 and Methods). For our pulse parameters the highest electron energies with a peak at 400 MeV, lowest divergences (1.3±0.2 mrad FWHM) and moderate charge (50 pC) were produced at plasma electron densities of 5 × 10 18 cm −3 , created from laser-induced field ionization of the target gas. Figure 1: Experimental setup. The laser pulse (1.6 J, 28 fs, 60 TW) (red) is focused by an F /16 off-axis parabola to a 22-μm diameter spot size on the entrance of a 6-mm long gas cell with a plasma electron density of n p =1.1 × 10 19 cm −3 . The residual laser light is blocked by a 10-μm thick aluminium foil in front of the detector. Dipole magnets deflect the accelerated electrons (yellow) away from the laser axis onto a scintillating screen (LANEX). The X-ray beam (pink) transmitted through the foil is recorded by a cooled back-illuminated X-ray CCD with 22.5-μm pixels. The tomogram was acquired in the experimental configuration as shown. The thickness of the Al foil in front of the sample was 20 μm to protect it from the laser light during the scan. The 'Al-cake' and wire arrangement, each protected by an Al foil of 10-μm thickness, can be inserted into the X-ray beam for spectral and source size characterization. The Al foil of 10-μm thickness in front of the CCD was present during the whole experiment. Full size image The betatron motion leads to the emission of well-collimated X-ray beams with a spectrum peaked at 4.9 keV, measured by their transmission through a stack of filters 20 . However, when the plasma density is increased to 1.1 × 10 19 cm −3 , the electron energy drops to 200 MeV and their divergence increases up to 5 mrad, along with a substantial increase of the electron beam charge to 400 pC.
Concomitantly, the X-ray beam divergence triples from (2.3±0.2) mrad to (6.0±1.1) mrad and the photon fluence increases by more than an order of magnitude. The energy spectrum stays roughly constant, partly because the higher wiggling fields in the dense plasma offset the lower electron energies. The increase in photon number results from the marked increase of the trapping efficiency at higher densities. It is evident from Fig. 2a that despite large shot-to-shot variations in the electron spectrum in this high-density regime, the X-ray spectrum is remarkably stable, a behaviour not reported before. We attribute this to the fixed low-energy cutoff of the aluminium laser-blocking filter (see Fig. 1 and Methods) on the one hand, and to the incoherent superposition of emission from many electrons with a broad spectrum on the other. This betatron-optimized regime results in the emission of 1.2 × 10 9 photons msr −1 per shot (±20%) above 1 keV at the position of the object. Figure 2: Characterization of the betatron source. ( a ) Electron and corresponding X-ray spectra as seen by the sample for 18 individual laser shots. The X-ray spectra were reconstructed from the transmission signal of the filter cake with overall thicknesses ranging from 20 to 630 μm (see left inset in b ). Inside the red circle, corresponding to 1.35 × 10 −2 msr, (1.6±0.3) × 10 7 photons are detected and analysed for their energy. Even for large electron energy fluctuations, the X-ray spectral shape is remarkably stable. ( b ) Source size measurement: a comparison of the measured intensity distribution integrated along a 100-μm thick tungsten wire (right inset) with the modelled intensity distribution for a Gaussian source spot, taking into account the spectra in a , reveals a best value of 1.8 μm r.m.s. Full size image The source size was derived from the Fresnel diffraction pattern of an object in the X-ray beam, as shown in Fig. 2b .
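The filter-cake spectroscopy can be illustrated with a toy forward model: a candidate spectrum is attenuated by Al layers of increasing thickness and the transmitted, energy-integrated signal is compared with the measurement. Both the exponential spectral shape and the E^-3 attenuation law below are simplified stand-ins for the tabulated data the authors used:

```python
import math

def al_mu(E):                      # toy Al attenuation coefficient (1/cm),
    return 70.8 * (10.0 / E)**3    # anchored at ~70.8 /cm at 10 keV (E in keV)

def transmitted(Ec, t_um):
    """Detected signal behind t_um microns of Al for a spectrum with
    critical energy Ec (keV); simple exponential tail as a toy shape."""
    total, E, dE = 0.0, 1.0, 0.1
    while E < 30.0:                                   # integrate 1-30 keV
        S = (E / Ec)**2 * math.exp(-E / Ec)           # toy betatron spectrum
        total += S * math.exp(-al_mu(E) * t_um * 1e-4) * dE
        E += dE
    return total

thicknesses = [20, 80, 160, 320, 630]                 # um, as in the filter cake
curve = [transmitted(4.9, t) / transmitted(4.9, 0) for t in thicknesses]
print([f"{c:.3f}" for c in curve])
```

Fitting the critical energy to the measured transmission curve is then a one-parameter least-squares problem; spectral stability shows up as overlapping transmission curves from shot to shot.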
Comparison of a modelled diffraction pattern with the data yields a source size of (1.8±0.1) μm r.m.s. (root mean square). Assuming pulse durations of 5 fs as suggested by numerical studies, recent reports 10 , 11 and our own as-yet unpublished measurements, the source exhibits a peak brilliance of 2 × 10 22 photons s −1 mm −2 mrad −2 within a relative spectral bandwidth of 0.1% at the position of our sample. In the current proof-of-principle experiment, the average brilliance and photon flux density at the sample were limited to 1 × 10 7 photons s −1 mm −2 mrad −2 within a relative spectral bandwidth of 0.1% and 2 × 10 7 photons s −1 cm −2 , respectively, owing to a shot rate of 0.1 Hz imposed by the gas load in the chamber and data acquisition limitations. An optimized pumping design and improved data acquisition permitting the full 5-Hz shot rate of the laser would improve these figures by a factor of 50 and yield few-minute scan times. The scalability of the photon energy depends on the electron energy, the plasma density and the wiggler strength parameter, which in LWFA are all interlinked 9 . Clever target engineering (that is, separating the acceleration and radiation zones), off-axis injection 21 or laser-betatron resonance effects 22 may strongly enhance the betatron amplitude and hence the critical energy. In ref. 22 , a 20-keV X-ray spectrum with a tail to 1 MeV was achieved with laser pulses containing only three times more energy. The shot-to-shot stability of our X-ray source is excellent for a laser-driven process, yielding >10 7 photons per shot in >99% of the laser shots in the tomographic scan, with low fluctuations of the X-ray spectrum (see Fig. 2a ), and a photon number constant to within ±20% r.m.s., making it suitable for multiexposure tomography. Tomography For the tomography scan, the object was placed on a rotating mount in the X-ray beam (see Fig. 1 ).
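The quoted peak brilliance can be reproduced to order of magnitude from the numbers above. The 0.1%-bandwidth fraction below is our assumption, chosen as plausible for a ~keV-wide spectrum; everything else is taken from the text:

```python
import math

# Back-of-envelope peak brilliance (photons / s / mm^2 / mrad^2 / 0.1% BW).
photons_per_msr = 1.2e9                       # per shot above 1 keV (from text)
photons_per_mrad2 = photons_per_msr * 1e-3    # 1 mrad^2 = 1e-3 msr
frac_01bw = 1.7e-3                            # assumed fraction in a 0.1% BW bin
tau = 5e-15                                   # pulse duration (s), from text
sigma = 1.8e-3                                # source size in mm (r.m.s.)
area = 2 * math.pi * sigma**2                 # effective Gaussian source area (mm^2)

peak_brilliance = photons_per_mrad2 * frac_01bw / (tau * area)
print(f"{peak_brilliance:.1e}")               # same order as the quoted 2e22
```

The sensitivity of the estimate to the assumed bandwidth fraction is linear, so only the order of magnitude should be read off.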
Although a dried sample exhibits different electron densities compared with fresh tissue, it was chosen for technical simplicity for this proof-of-concept experiment. Placing it inside the vacuum chamber avoided intermediate air propagation (as the cooled CCD chip needs to be evacuated) and, consequently, unnecessary transmission losses in X-ray windows. The raw phase-contrast images recorded on the CCD exhibit the so-called edge-enhancement effect ( Fig. 3a ), which is inherent to propagation-based phase-contrast imaging 8 in the Fresnel diffraction regime. No optical elements between the source and the detector are used, but the wave propagates sufficiently far beyond the sample (1.99 m) for Fresnel diffraction to occur. The edge-enhanced image is useful by itself for visual inspection when high-resolution features with poor absorption contrast ( Fig. 3a ) are of particular interest. However, propagation-induced intensity fringes of a pure phase object are not a direct measure of the phase shift but rather of the Laplacian of the phase front 5 . A reconstruction of raw phase projections will thus only yield grey level variations at material interfaces ( Fig. 4a ). Because the reconstructed contrast is not directly linked to material properties, quantitative analysis and automatic segmentation via thresholding are not possible. Figure 3: Lacewing insect (Chrysoperla carnea), imaged with the Al-filtered betatron X-ray spectrum. ( a ) The image shows a selected single-shot radiograph dominated by X-ray in-line phase contrast. Small details are highlighted due to the strong edge enhancement effect (see insets of magnified sections). ( b ) The corresponding quantitative phase map was retrieved using a single-material constraint. A series of these maps is used for the reconstruction of the insect as shown in Figs 4 and 5 . Scale bar, 0.5 mm (white). Full size image Figure 4: Transverse slices of the sample.
( a ) A reconstructed transverse slice of the lacewing without phase retrieval (that is, using raw phase-contrast images as in Fig. 3a ) highlights material boundaries but does not allow for a quantitative analysis. ( b ) The same transverse slice reconstructed after phase retrieval, using phase images as in Fig. 3b . ( c ) Reconstructed transverse slices as in b with grey values representing electron density values. The reconstruction exhibits good area contrast, allowing for volume rendering and segmentation as shown in Fig. 5 . Electron density scale applies to subfigures b and c . Scale bar, 1 mm (white). Full size image In absorption tomography, projections of the linear absorption coefficient along the beam are directly obtained from the logarithm of the recorded intensity. The subsequent reconstruction exhibits area contrast with grey values directly related to material properties of the sample under investigation. Starting from diffracted intensity measurements at a certain propagation distance, phase-retrieval algorithms are employed to create line projection images of the refractive index decrement δ (phase maps, see Methods). The transport-of-intensity equation (TIE) relates the edge-enhanced image measured at the detector to the phase distribution at the exit plane of the sample. As we only used one propagation distance, we employed a single-material constraint to solve the TIE and retrieve phase maps of the insect ( Fig. 3b ; see Methods). As the retrieved phase map is directly related to the integrated decrement δ of the index of refraction, the reconstruction yields information on the electron density distribution in the sample. Before reconstruction, the 360 projections taken over 360° were each averaged over four subsequent laser shots and binned by a factor of two in both image dimensions to yield an artifact-free reconstruction (see Methods). Standard filtered back projection was used to reconstruct the transverse slices shown in Fig. 4b,c .
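A one-dimensional toy model makes the Laplacian statement concrete: in the near field the intensity follows I ≈ I0 (1 − (λD/2π) ∇²ϕ), so a smooth phase plateau is invisible while its edges produce bipolar fringes. All numbers below are illustrative, not fitted to the data:

```python
import math

lam, D = 2.5e-10, 0.53                     # wavelength (m), defocusing distance (m)
dx = 6e-6                                  # object-plane pixel (m)
n = 200
# smooth phase step (logistic edge around pixel 100, 5 rad high)
phi = [5.0 / (1.0 + math.exp(-(i - 100) / 3.0)) for i in range(n)]
# discrete Laplacian of the phase profile
lap = [(phi[i - 1] - 2 * phi[i] + phi[i + 1]) / dx**2 for i in range(1, n - 1)]
# near-field (TIE) intensity: flat away from the edge, bipolar at the edge
I = [1.0 - lam * D / (2 * math.pi) * L for L in lap]
print(min(I), max(I))
```

The intensity dips below and peaks above unity only at the edge, which is exactly the fringe pattern visible in Fig. 3a; away from the edge the phase plateau leaves no trace.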
The reconstruction reveals a distinct contrast between insect and background, allowing segmentation via simple thresholding. Additional TEM measurements of the insect's leg revealed that the reconstructed electron densities are, given the resolution of our setup, in good agreement with the expected electron densities for chitin (see Supplementary Note 1 ). A 3D rendering of the sample is presented (see Fig. 5 and Supplementary Movie 2 ), including sectioning planes of the 3D volume with grey levels corresponding to electron density. Figure 5: Volume rendering. ( a ) Photograph of the sample. ( b ) 3D rendering of the sample, imaged with the Al-filtered betatron X-ray spectrum. ( c , d ) Cutting planes of the 3D volume are shown with their grey levels corresponding to the electron density distribution in the sample. Full size image Discussion This result demonstrates that laser-driven X-ray sources have reached the verge of practical usefulness for application-driven research. If further progress regarding mean photon flux can be made, laser-driven sources, due to their compactness, relatively low cost and high peak brilliance, might become valuable tools for university-scale research and medical applications, in particular early detection of tumours with low-dose diagnostics. Here, difficulties in interpreting conventional radiograms and the continuous increase in cancer probability (due to prolonged life expectancy) have led to substantial interest in the possible improvements offered by phase-contrast imaging. Phase-contrast images provide higher contrast, especially when normal and malignant tissue exhibit almost the same attenuation coefficient. Such studies, already performed on small animals and human tissue samples, demonstrated enhanced contrast and better tumour visibility 23 , 24 as long as a high resolution in electron densities is provided. On the basis of the distribution of the grey values in the background (void) of our reconstructed sample ( Fig.
4 ) we can estimate a conservative limit for our measurement sensitivity of the electron density of 0.1 × 10 23 cm −3 . This resolution would already be sufficient to detect malignant tissue (assuming a mass density of 1.01 g cm −3 and a corresponding electron density, due to its chemical composition 25 , of 3.28 × 10 23 cm −3 ) within the lungs 26 (mass density of 0.38 g cm −3 and an electron density of 1.26 × 10 23 cm −3 ) or even in the brain (mass density of 1.05 g cm −3 with an electron density of 3.49 × 10 23 cm −3 ). If the source energy can be pushed into the tens of keV range, as recent results suggest 27 , such laser-based sources could become an alternative to conventional radiography, offering superior resolution, avoiding invasive surgery and time-consuming histology, and improving image-guided radiation therapy, where detection and accurate positioning of the target tissue volumes are required 28 (and references therein). Further applications may not only exploit the imaging capabilities of such a table-top X-ray source, but also benefit from its ultrashort nature. Ultrafast structural X-ray science would be a good example, including time-resolved pump-probe X-ray absorption 29 , 30 or diffraction 31 , 32 experiments resolving fast atomic and molecular motion. Here the unique combination of broadband radiation, intrinsically perfect synchronization to a laser pulse and few-femtosecond pulse duration would offer a crucial advantage. In view of the ongoing dynamic evolution in high-energy, high-repetition-rate laser technology 33 , 34 , aiming to scale multi-TW lasers to kHz repetition rates and beyond, average fluences approaching current state-of-the-art compact synchrotron sources are expected to become available in the near future.
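The tissue electron densities quoted here follow from n_e = ρ N_A Σ_i w_i Z_i / A_i. A minimal check for water, whose mean Z/A differs only slightly from soft tissue:

```python
# Electron density from mass density and elemental composition,
# n_e = rho * N_A * sum_i(w_i * Z_i / A_i); water as a simple check.
N_A = 6.022e23                       # Avogadro's number (1/mol)
# element: (mass fraction w, atomic number Z, atomic mass A)
composition = {"H": (0.1119, 1, 1.008), "O": (0.8881, 8, 15.999)}
rho = 1.0                            # g/cm^3
n_e = rho * N_A * sum(w * Z / A for w, Z, A in composition.values())
print(f"{n_e:.2e}")                  # electrons per cm^3
```

This gives ~3.3 × 10 23 cm −3 for water, bracketing the brain and malignant-tissue values quoted above; the differences the method must resolve are at the few-percent level.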
This might help the proliferation of biomedical X-ray diagnostics, X-ray microscopy and non-destructive industrial testing for static imaging, as demonstrated here by a microtomogram of a biological sample; time-resolved imaging with femtosecond resolution in particular would benefit further from this development. Methods Laser-wakefield acceleration The experiments were performed using the ATLAS Ti:Sapphire laser at the MPI for Quantum Optics. It occupies a table area of ∼ 15 m 2 , and delivers 1.6 J energy, 28 fs duration (60 TW) laser pulses, centred at 800 nm wavelength. They are focused onto the entrance of a hydrogen-filled gas cell by an off-axis parabolic mirror (f=1.5 m, F/20) to a spot size of 22 μm FWHM. This corresponds to a vacuum peak intensity of 1 × 10 19 W cm −2 . At these intensities, the hydrogen is fully ionized and the laser propagates through plasma. Its strong ponderomotive force drives a plasma wave with frequency ω p = ( n e e 2 / ε 0 m e ) 1/2 . Here n e , e and m e are the density, charge and rest mass of the plasma electrons, and ε 0 is the vacuum permittivity. The wave phase velocity matches the laser group velocity, so that in the frame of the laser pulse, the plasma wave constitutes a co-moving accelerating field. Electrons from the plasma are trapped into this wave by wavebreaking and accelerated to relativistic energies around 200–400 MeV, depending on the electron density. A magnet deflects the electron beam according to energy onto a scintillating screen (see Fig. 1 ). From the position and brightness on the screen the energy, divergence and charge of the electron bunch can be deduced 35 . Betatron radiation The plasma wavelength confines the wakefield to a radius of ∼ 10 μm, causing strong radial fields that force the electrons into anharmonic transverse betatron oscillations at a frequency ω β = ω p /(2 γ ) 1/2 during acceleration. Here, γ is the relativistic factor of the electron beam.
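The ~10 μm confinement scale quoted above is just the plasma wavelength; a short sketch evaluating ω_p and λ_p = 2πc/ω_p for the two densities used in the experiment:

```python
import math

e, m_e, eps0, c = 1.602e-19, 9.109e-31, 8.854e-12, 2.998e8  # SI constants

def plasma(n_cm3):
    """Plasma frequency (rad/s) and plasma wavelength (m) for a density
    given in cm^-3, from omega_p = sqrt(n e^2 / (eps0 m_e))."""
    n = n_cm3 * 1e6                              # convert cm^-3 -> m^-3
    w_p = math.sqrt(n * e**2 / (eps0 * m_e))
    return w_p, 2 * math.pi * c / w_p

for n in (5e18, 1.1e19):                         # the two densities in the text
    w_p, lam_p = plasma(n)
    print(f"n = {n:.1e} cm^-3 : lambda_p = {lam_p * 1e6:.1f} um")
```

At the betatron-optimized density of 1.1 × 10 19 cm −3 the plasma wavelength comes out at ~10 μm, matching the wakefield radius stated in the text.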
The strong radial fields lead to large angular excursions, while keeping the beam size within a few microns, triggering the emission of high harmonics of ω β in a co-moving frame. ω β varies during acceleration, causing incoherent emission. In the laboratory frame, the radiation is relativistically boosted to the X-ray regime and merges into a synchrotron-like continuum, described by a critical energy E c = (3/2) K γ 2 ℏ ω β . Here K is the wiggler parameter, defined through K ≈ γθ , where θ is the opening angle of the emitted radiation. For our experimental conditions, even a small off-axis distance of 1 μm corresponds to a wiggler parameter on the order of 10, leading to high harmonic orders. For the spectrum and source size measurements, the X-rays propagate freely from the source to an Andor model DO432 BN-DD back-illuminated CCD detector at a distance of 3.26 m. The on-axis laser light is blocked behind the source by two 10-μm thick Al foils which are transparent for radiation above 1 keV. A filter cake with different material thicknesses for spectrum characterization and a tungsten wire for source size analysis can be moved into the beam. For the tomography studies the sample is mounted l =0.73 m from the source and d =1.99 m in front of the detector, yielding a 3.7 × magnification. Modelling of Fresnel diffraction The source size was derived by analysing the Fresnel diffraction pattern from a tungsten wire backlit by the X-ray beam. The measured edge diffraction on the detector from a 100-μm thick tungsten wire (26 cm behind the source) is shown in the inset of Fig. 2b , and was compared to modelled distributions for various source sizes. They were obtained by summing up the Fresnel diffraction from a knife edge, as described in, for example, Born and Wolf 36 , for all energy bins of the incident spectrum weighted by the CCD response. The beam, showing a Gaussian shape on the CCD chip, was assumed to be Gaussian at the origin.
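An order-of-magnitude sketch of K and the critical energy for the high-density regime (200 MeV electrons, n = 1.1 × 10 19 cm −3 ); the 1-μm betatron amplitude is the off-axis distance mentioned in the text, and K = γ (ω_β/c) r_β is the standard estimate for a wiggling electron:

```python
import math

e, m_e, eps0, c = 1.602e-19, 9.109e-31, 8.854e-12, 2.998e8
hbar_eV = 6.582e-16                      # reduced Planck constant (eV s)

n = 1.1e19 * 1e6                         # plasma density (m^-3)
gamma = 200e6 / 0.511e6                  # 200 MeV electrons
r_beta = 1e-6                            # betatron amplitude (m), from the text

w_p = math.sqrt(n * e**2 / (eps0 * m_e)) # plasma frequency
w_beta = w_p / math.sqrt(2 * gamma)      # betatron frequency
K = gamma * (w_beta / c) * r_beta        # wiggler parameter
E_c = 1.5 * K * gamma**2 * hbar_eV * w_beta   # critical energy (eV)
print(f"K = {K:.1f}, E_c = {E_c / 1e3:.1f} keV")
```

This recovers a wiggler parameter on the order of 10 and a critical energy in the few-keV range, consistent with the measured spectrum.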
The result for different source sizes is shown in the inset of Fig. 2b . The information about the source size is completely contained in the first overshoot of the profile. To improve the signal-to-noise ratio we vertically summed up the profiles, taking into account the curvature of the wire by a cross-correlation between different rows. Our analysis yields a best fit for a source size of (1.8±0.1) μm r.m.s. Phase retrieval The TIE directly relates the phase distribution in the planes orthogonal to the optical axis to the propagation of the wavefront intensity of the beam. A variety of phase-retrieval algorithms, which solve the TIE, have been proposed, differing in the raw data input needed (for example, multiple sample-to-detector distances) and additional constraints on sample material properties 37 . We use a single-distance quantitative phase-retrieval method that does not require the sample to show negligible absorption 19 . It employs a single-material constraint corresponding to a fixed δ/β ratio representing the sample's main chemical component. If this assumption is justified and the sample exhibits comparably weak absorption, such as in our case, this approach allows for a quantitative reconstruction of electron density values in the sample, as shown in ref. 38 . The projected thickness of the sample, which is directly related to the phase shift imposed onto the wavefront via ϕ ( r )=−2 π / λ mean δ poly T ( r ), can be retrieved by using the following equation 19 : T ( r )=−(1/ μ poly ) ln( F −1 { F { I ( r )/ I 0 }/(1+( Rδ poly /( Mμ poly ))| k | 2 )}) (1). Here T is the retrieved thickness of the sample, r are the transverse coordinates, k are the Fourier space coordinates, F denotes the Fourier transform, I is the measured intensity, I 0 is the uniform intensity of the incident radiation, M is the magnification of the image, R is the distance from the sample to the detector and μ poly and δ poly are the material-dependent linear absorption coefficient and refractive index decrement, respectively.
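Single-material retrieval of this kind can be sketched with a generic Paganin-type low-pass filter; this is not the authors' code, and the uniform test image below is synthetic:

```python
import numpy as np

def retrieve_thickness(I, I0, mu, delta, R, M, pix):
    """Projected thickness T(r) from one near-field image via the TIE with
    a fixed delta/mu (single-material) constraint: Fourier-domain low-pass
    filter of the normalized intensity, then -log/mu."""
    ny, nx = I.shape
    ky = np.fft.fftfreq(ny, d=pix) * 2 * np.pi      # angular spatial frequencies
    kx = np.fft.fftfreq(nx, d=pix) * 2 * np.pi
    k2 = ky[:, None]**2 + kx[None, :]**2
    filt = 1.0 / (1.0 + (R * delta) / (M * mu) * k2)
    return -np.log(np.fft.ifft2(np.fft.fft2(I / I0) * filt).real) / mu

# toy use with the chitin values quoted in the Methods
mu, delta = 70.15 * 100, 1.38e-5        # mu converted to 1/m, decrement delta
T = retrieve_thickness(np.ones((64, 64)), 1.0, mu, delta, 1.99, 3.7, 6e-6)
print(T.shape)                           # uniform illumination -> ~zero thickness
```

Applying this per projection and then running filtered back projection reproduces the pipeline described in the Results; the delta/mu ratio is the single knob the method requires.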
In the case of a polychromatic X-ray spectrum the most accurate phase-retrieval results are achieved through the calculation of effective μ and δ values. We assume the main chemical component of the dried insect to be chitin (C 8 H 13 NO 5 ). Values of μ poly =70.15 cm −1 and δ poly =1.38 × 10 −5 were calculated from the reconstructed X-ray spectra (see Fig. 2a ), tabulated δ(E) and β(E) values at a density of ρ =2.2 g cm −3 (ref. 39 ), and the known detector response function 40 . The phase map depicted in Fig. 3b was reconstructed using equation (1) with values of R =199 cm, M =3.7 and a mean energy as seen by the detector of E mean =8.8 keV ( λ mean =1.4 Å). This mean energy seen by the detector differs from the spectral peak of the source as it takes into account the additional Al filter and spectral weighting. Treatment of raw images The fluctuations of the X-ray point of origin (12 μm r.m.s. vertical, 18 μm r.m.s. horizontal) are caused by shot-to-shot laser pointing fluctuations. To correct for these, all images were registered before reconstruction using normalized cross-correlation. The shape of the correlation surface is assumed to fit two orthogonal parabolic curves. Sub-pixel registration accuracy is obtained by fitting a paraboloid to the 3 × 3 pixel vicinity of the maximum value of the cross-correlation matrix. Sub-pixel shifting is performed in Fourier space. To account for the Gaussian intensity profile of the X-ray beam, the sample is masked out using an edge-detection filter and the images are background-corrected by subtracting a second-order polynomial. The vertical alignment of the tomography scan is performed using cross-correlation of integrated pixel values perpendicular to the tomography axis. Horizontal alignment was performed manually. Additional information How to cite this article: Wenz, J. et al . Quantitative X-ray phase-contrast microtomography from a compact laser-driven betatron source. Nat. Commun.
6:7568 doi: 10.1038/ncomms8568 (2015).
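The sub-pixel registration described in the Methods (cross-correlation maximum refined by a parabolic fit) can be sketched in one dimension; the Gaussian test profiles below are synthetic, and the 2D paraboloid fit of the paper reduces to a parabola here:

```python
import numpy as np

def subpixel_shift(a, b):
    """Shift of profile b relative to a: FFT cross-correlation maximum,
    refined by a parabola through the three samples around the peak."""
    xc = np.fft.ifft(np.fft.fft(b) * np.conj(np.fft.fft(a))).real
    i = int(np.argmax(xc))
    ym, y0, yp = xc[i - 1], xc[i], xc[(i + 1) % len(xc)]
    frac = 0.5 * (ym - yp) / (ym - 2.0 * y0 + yp)   # parabola vertex offset
    s = i + frac
    return s - len(a) if s > len(a) / 2 else s      # wrap to signed shift

x = np.arange(256.0)
a = np.exp(-0.5 * ((x - 128.0) / 6.0) ** 2)         # reference profile
b = np.exp(-0.5 * ((x - 130.3) / 6.0) ** 2)         # same profile, shifted 2.3 px
shift = subpixel_shift(a, b)
print(f"recovered shift: {shift:.2f} px")
```

The fractional offset from the parabola vertex is what pushes the registration accuracy below the ~20-μm pointing jitter quoted above; the corresponding sub-pixel shift is then applied as a linear phase ramp in Fourier space.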
Physicists from Ludwig-Maximilians-Universität, the Max Planck Institute of Quantum Optics and the TU München have developed a method using laser-generated X-rays and phase-contrast X-ray tomography to produce three-dimensional images of soft tissue structures in organisms. With laser light, physicists in Munich have built a miniature X-ray source. In so doing, the researchers from the Laboratory of Attosecond Physics of the Max Planck Institute of Quantum Optics and the Technische Universität München (TUM) captured three-dimensional images of ultrafine structures in the body of a living organism for the first time with the help of laser-generated X-rays. Using light-generated radiation combined with phase-contrast X-ray tomography, the scientists visualized ultrafine details of a fly measuring just a few millimeters. Until now, such radiation could only be produced in expensive ring accelerators measuring several kilometers in diameter. By contrast, the laser-driven system in combination with phase-contrast X-ray tomography only requires a university laboratory to view soft tissues. The new imaging method could make future medical applications more cost-effective and space-efficient than is possible with today's technologies. When the physicists Prof. Stefan Karsch and Prof. Franz Pfeiffer illuminate a tiny fly with X-rays, the resulting image captures even the finest hairs on the wings of the insect. The experiment is a pioneering achievement. For the first time, scientists coupled their technique for generating X-rays from laser pulses with phase-contrast X-ray tomography to visualize tissues in organisms. The result is a three-dimensional view of the insect in unprecedented detail. The X-rays required were generated by electrons that were accelerated to nearly the speed of light over a distance of approximately one centimeter by laser pulses lasting around 25 femtoseconds. A femtosecond is one millionth of a billionth of a second. 
The laser pulses have a power of approximately 80 terawatts (80 × 10 12 watts). By way of comparison: a nuclear power plant generates about 1,500 megawatts (1.5 × 10 9 watts). First, the laser pulse ploughs through a plasma consisting of positively charged atomic cores and their electrons like a ship through water, producing a wake of oscillating electrons. This electron wave creates a trailing wave-shaped electric field structure on which the electrons surf and by which they are accelerated. The particles also oscillate transversely, emitting X-rays. Each light pulse generates an X-ray pulse. The X-rays generated have special properties: they have a wavelength of approximately 0.1 nanometers, last only about five femtoseconds, and are spatially coherent, i.e. they appear to come from a point source. For the first time, the researchers combined their laser-driven X-rays with a phase-contrast imaging method developed by a team headed by Prof. Franz Pfeiffer of the TUM. Instead of the usual absorption of radiation, they used X-ray refraction to accurately image the shapes of objects, including soft tissues. For this to work, the spatial coherence mentioned above is essential. This laser-based imaging technique enables the researchers to view structures around one tenth to one hundredth the diameter of a human hair. Another advantage is the ability to create three-dimensional images of objects. After each X-ray pulse, meaning after each frame, the specimen is rotated slightly. For example, about 1,500 individual images were taken of the fly, which were then assembled to form a 3D data set. Due to the shortness of the X-ray pulses, this technique may be used in future to freeze ultrafast processes on the femtosecond time scale, e.g. in molecules, as if they were illuminated by a femtosecond flashbulb. The technology is particularly interesting for medical applications, as it is able to distinguish between differences in tissue density.
Cancer tissue, for example, is less dense than healthy tissue. The method therefore opens up the prospect of detecting tumors that are less than one millimeter in diameter in an early stage of growth before they spread through the body and exert their lethal effect. For this purpose, however, researchers must shorten the wavelength of the X-rays even further in order to penetrate thicker tissue layers.
10.1038/ncomms8568
Earth
Wildfire dataset could help firefighters save lives and property
WildfireDB: An Open-Source Dataset Connecting Wildfire Spread with Relevant Determinants. ayanmukhopadhyay.github.io/files/neurips2021.pdf
https://ayanmukhopadhyay.github.io/files/neurips2021.pdf
https://phys.org/news/2021-12-wildfire-dataset-firefighters-property.html
Abstract In contrast to the well-recognized permafrost carbon (C) feedback to climate change, the fate of permafrost nitrogen (N) after thaw is poorly understood. According to mounting evidence, part of the N liberated from permafrost may be released to the atmosphere as the strong greenhouse gas (GHG) nitrous oxide (N 2 O). Here, we report post-thaw N 2 O release from late Pleistocene permafrost deposits called Yedoma, which store a substantial part of permafrost C and N and are highly vulnerable to thaw. While freshly thawed, unvegetated Yedoma in disturbed areas emits little N 2 O, emissions increase within a few years after stabilization, drying and revegetation with grasses to high rates (548 (133–6286) μg N m −2 day −1 ; median with (range)), exceeding by 1–2 orders of magnitude the typical rates from permafrost-affected soils. Using targeted metagenomics of key N cycling genes, we link the increase in in situ N 2 O emissions with structural changes of the microbial community responsible for N cycling. Our results highlight the importance of the extra N availability from thawing Yedoma permafrost, causing a positive climate feedback from the Arctic in the form of N 2 O emissions. Introduction Rapid Arctic warming 1 and associated permafrost thaw 2 , 3 are threatening the large C and N reservoirs of northern permafrost soils 4 , 5 , 6 , accumulated under cold conditions where the decomposition rate of soil organic matter (SOM) is low 7 , 8 . Permafrost thaw is now increasingly exposing these long-term inert C and N pools to microbial decomposition and transformation processes. While it has long been known that the mobilization of permafrost C potentially increases the release of the greenhouse gases (GHG) carbon dioxide (CO 2 ) and methane (CH 4 ) 5 , 9 , 10 , the fate of soil N liberated upon permafrost thaw is poorly studied and more complex.
There is evidence that part of liberated N may be emitted to the atmosphere as nitrogenous gases, most importantly as N 2 O 6 , which is a ~300 times more powerful GHG than CO 2 over a 100-year time horizon 11 and a dominant contributor to ozone destruction in the stratosphere 12 . The current increase in atmospheric N 2 O concentration is mainly driven by the growth of human-induced emissions, which comprise 43% of the global N 2 O emissions of 17.0 Tg N year −1 and are dominated by N 2 O release from fertilized agricultural soils 13 . Nitrous oxide emissions, although generally smaller per unit area, occur also from soils under natural vegetation with a 33% contribution to the total global N 2 O emission 13 . Tropical soils with high N turnover rates generally show the largest N 2 O emissions among natural soils, while permafrost-affected soils in cold environments have been thought to be negligible N 2 O sources. This view was challenged by a recent synthesis showing that N 2 O emissions commonly occur from permafrost soils, with a global emission between 0.08 and 1.27 Tg N year −1 , meaning a 1–23% addition to the global N 2 O emission from natural soils 6 . However, this estimate is still highly uncertain due to the overall scarcity of N 2 O flux observations from permafrost-affected soils and the lack of studies from some important permafrost soil types, including the Yedoma studied here. Late-Pleistocene aged Yedoma permafrost occurs as deep deposits (a mean thickness of ~19 m) over an area of > 1 million km 2 in the Northern Hemisphere (Fig. 1 ) 14 . The Yedoma region contains >25% of the circumarctic permafrost C stock 15 , and a yet unaccounted for and likely even larger proportion of permafrost N because of the low C/N ratio of Yedoma SOM (typically < 15) 16 , 17 . The SOM in Yedoma is thought to be easily decomposable because it was incorporated into the permafrost soon after deposition without having much time to be degraded 15 . 
The high ice content of Yedoma 14 makes it vulnerable to abrupt thaw and ground collapse 18 , allowing rapid mobilization of soil C and N stocks after thaw 15 . Along Arctic rivers and the coastal zone of the Arctic Shelf, thawing of Yedoma permafrost creates steep, tens-of-meters-high Yedoma exposures 19 , 20 , 21 , where many of the conditions known to promote N 2 O emissions from permafrost-affected soils 6 are met, including low C/N ratios, lack of vegetation, and suitable soil moisture content for the microbial processes producing N 2 O. Fig. 1: Overview of the studied Yedoma exposures. a Location of the study sites, overlain on the map showing the extent of Yedoma deposits on the Northern Hemisphere 79 and the permafrost zonation 80 . b Kurungnakh exposure. c Duvanny Yar exposure. Photos b and c by J. Kerttula. Full size image Here, we studied N 2 O fluxes on two thawing Yedoma exposures forming retrogressive thaw slumps in Northeast Siberia: in July 2016 on Kurungnakh Island, situated in the Lena River Delta, and in July 2017 in Duvanny Yar, located by the Kolyma River (see Methods section; Fig. 1 ). At both sites, we measured N 2 O fluxes with the static chamber technique 22 and determined the mineral N pools on transects spanning from the top of the thawing Yedoma exposure across the bare and revegetated parts down to the river shore (see Methods section; Supplementary Figs. 1 & 2 ). At the intensive study site Kurungnakh we additionally studied N transformation and N 2 O production rates in the laboratory, as well as the relative abundance of N cycling genes. We revealed an increasing trend in N 2 O emissions with the drying, stabilization, and revegetation of the thawed Yedoma sediments. Increased emissions were coupled with changes in the microbial community composition responsible for soil N transformation processes.
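For scale, a unit-conversion sketch only (not an upscaling performed by the study): the median flux from revegetated Yedoma quoted in the abstract, extrapolated year-round over an assumed 1% of the >1 million km 2 Yedoma region. Both the year-round extrapolation of a July rate and the 1% area fraction are assumptions for illustration:

```python
# Illustrative unit conversion of a N2O flux to a regional total.
flux = 548e-6                  # g N m^-2 day^-1 (median, revegetated Yedoma)
annual = flux * 365            # g N m^-2 yr^-1 (assumes the July rate year-round)
area = 0.01 * 1.2e6 * 1e6      # m^2: assumed 1% of 1.2 million km^2
total_Tg = annual * area / 1e12
print(f"{annual:.2f} g N m^-2 yr^-1, {total_Tg:.4f} Tg N yr^-1")
```

Even under these crude assumptions the result sits well inside the 0.08–1.27 Tg N yr −1 range cited for permafrost soils overall, which is why the spatial extent of such emitting surfaces matters as much as the per-area rate.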
Results and discussion

Nitrous oxide emissions and mineral N pools across thawing Yedoma exposures

Our field flux measurements revealed substantial N 2 O release from Yedoma permafrost following thaw. At the Kurungnakh exposure, the N 2 O fluxes from thawed Yedoma surfaces were highly variable (63 (–19–6286) μg N m −2 day −1 ; median with (range)), at the high end exceeding the typical fluxes from permafrost-affected soils (38 (6–189) μg N m −2 day −1 ; median with (25th–75th percentiles); ref. 6 ) by two orders of magnitude. The N 2 O emissions showed an increasing trend along the measuring transect (Fig. 2a , Supplementary Table 1 ) from the densely vegetated Holocene cover deposits overlying intact permafrost on the top of the riverbank, through the actively eroding upper part of the Yedoma exposure, down to the already stabilized lower part of the slope revegetated by mosses and grasses. The highest N 2 O emissions occurred from Yedoma surfaces that had thawed between 5 and 10 years ago (see Methods; Supplementary Fig. 1e , Supplementary Fig. 3 ) and had been revegetated by grasses (548 (133–6286) μg N m −2 day −1 ; mean with (range)). The emissions from these revegetated Yedoma soils in the mid part of the slope were significantly higher than the emissions from undisturbed vegetated Holocene cover and bare freshly thawed Yedoma (Dunn’s test, p < 0.05). Negligible N 2 O fluxes were detected from bare sand on the river shore that receives melt waters from the thawing Yedoma exposure above (Fig. 2a , Supplementary Table 1 ).

Fig. 2: Nitrous oxide fluxes and nitrate content at the Kurungnakh and Duvanny Yar exposures. a In situ N 2 O fluxes measured with the chamber technique. b Soil moisture expressed as water-filled pore space. c Extractable nitrate content. See Supplementary Table 2 for extractable ammonium content.
Box plots show lower and upper quartiles, median (thick black line), smallest and largest values without outliers (thin black line) and outliers (circles); n = 5 biologically independent samples, except for ‘Bare earlier thawed Yedoma’ and ‘Yedoma revegetated with grasses’ in Duvanny Yar, where n = 10. Lower case letters indicate significant differences between studied soils, tested separately for each study site (Kruskal–Wallis test with pairwise comparisons with Dunn’s test; p < 0.05). For N 2 O fluxes in a, positive values indicate emissions, and negative values indicate uptake. Note the logarithmic scale on y-axes in a and c . WFPS water-filled pore space, DW dry weight, ND Not determined.

The spatial pattern in the N 2 O emissions in situ was confirmed by laboratory incubations with Kurungnakh soils, where the highest N 2 O production under anoxic conditions was found in Yedoma revegetated with grasses and the lowest in bare earlier thawed Yedoma and vegetated Holocene cover (Kruskal–Wallis test, p = 0.0004–0.007; Fig. 3a , Supplementary Table 4 ). Higher N 2 O production under the anoxic treatment than in the oxic treatment (Wilcoxon signed-rank test, p < 0.0001) shows that denitrification is the main N 2 O production pathway. The still higher N 2 O production in anoxic conditions in the presence of acetylene in all soils except bare freshly thawed Yedoma (1.7 to 4.7 times depending on the surface type; Wilcoxon signed-rank test, p = 0.01) suggests that, in addition to N 2 O, a significant amount of N 2 is emitted from the studied soils to the atmosphere.

Fig. 3: Nitrous oxide production and nitrogen transformation rates in Kurungnakh soils. a Nitrous oxide production with different headspace conditions. Acetylene inhibits the last step of denitrification, N 2 O reduction to N 2 , and can be used to estimate the total denitrification rate.
b Nitrogen transformation rates including gross N mineralization, net N mineralization and net nitrification. Net N mineralization and nitrification rates were determined with initial N addition (2.1–2.6 mg N (kg DW) –1 ) due to low inherent mineral N content in part of the soils. Box plots show lower and upper quartiles, median (thick black line), smallest and largest values without outliers (thin black line) and outliers (circles); n = 5 biologically independent samples. Lower case letters indicate significant differences between studied soils, tested separately for each treatment ( a ) or process ( b ; Kruskal–Wallis test with pairwise comparisons with Dunn’s test; adjusted p < 0.05). Note the logarithmic scale on y-axis in a . One outlying point has been removed from net nitrification data for vegetated Holocene cover in b . DW dry weight, ND Not determined.

The N 2 O fluxes at the Duvanny Yar exposure were less variable and lower than at the Kurungnakh exposure (Kruskal–Wallis test, p = 0.047), ranging from –147 to 222 μg N m −2 day −1 (Fig. 2a , Supplementary Table 1 ). The highest N 2 O emissions were again detected from thawed and revegetated Yedoma surfaces, but this time from those with mosses. There, the N 2 O emissions were significantly higher than from bare freshly thawed Yedoma (Dunn’s test, p = 0.010). Further, we found elevated N 2 O concentrations (mean ± SD 2.4 ± 2.1 ppm; compared to atmospheric N 2 O concentration of 0.33 ppm) in the soil pore gas in Yedoma covered by high dead standing biomass of the pioneering plant Descurainia sophioides (Supplementary Fig. 5a ). We estimated (Supplementary Fig. 5b ) the N 2 O flux at these surfaces at 314 μg N m −2 day −1 (median; range 26–3090), which is comparable to the N 2 O fluxes from the thawed and revegetated Yedoma in Kurungnakh, and far higher than the fluxes from permafrost-affected soils generally 6 .
Of all N 2 O fluxes measured at both study sites, 23% were negative, with the highest uptake rates (< −25 μg N m −2 day −1 ) observed from bare Yedoma sites with high soil water content (WFPS = 67–100%; Fig. 2a ). Uptake of atmospheric N 2 O is commonly observed in wetland soils, where energetically more favorable electron acceptors, such as O 2 or NO 3 − , are absent 6 , 23 . However, we occasionally measured small negative N 2 O fluxes also from vegetated Holocene cover with very dry topsoil (Fig. 2a ; WFPS = 6–15%), supporting previous observations of N 2 O uptake in dry oxic soils 24 , 25 . Of all studied microsites, the median N 2 O flux was negative only in vegetated Holocene cover in Kurungnakh (−4 μg N m −2 day −1 ) and in freshly thawed Yedoma in Duvanny Yar (−9 μg N m −2 day −1 ) (Supplementary Table 1 ). Nitrous oxide production relies on mineral N supply in excess of the immediate needs of microbes and plants 6 , 26 , 27 . High-latitude soils usually have a very low content of mineral N species, particularly of nitrate (NO 3 − ) 28 , which can be expected to limit N 2 O emissions. At the Kurungnakh exposure, only the high-emitting revegetated Yedoma surfaces had measurable NO 3 − content, while no NO 3 − was detected elsewhere (Fig. 2c , Supplementary Table 2 ). The opposite spatial pattern in ammonium (NH 4 + ) content (Dunn’s test, p < 0.05; Supplementary Table 2 ) could reflect high NH 4 + consumption by nitrification in thawed and revegetated Yedoma (see below). In Duvanny Yar, NO 3 − content was generally low, except in bare and moss-covered thermokarst mounds called baydzherakhs (Supplementary Fig. 2c, e ), where high NO 3 − content up to 116 mg N (kg dry weight (DW)) −1 was found (Fig. 2c ). Strong NO 3 − accumulation indicates high nitrification activity in these dry, well-aerated soils, and availability of precursors for N 2 O production by anaerobic denitrification in deeper, water-saturated soil layers or after rain events.
Effect of moisture and vegetation on N 2 O fluxes and N transformation rates

In previous studies, the highest emission rates of N 2 O from permafrost-affected soils have been found from soils without living plant cover, such as bare peat surfaces of permafrost peatlands or retrogressive thaw slumps lacking vegetation 6 , 22 , 29 , 30 . The main reason behind the generally higher emissions from unvegetated than vegetated soils is obvious: when plants are not taking up N from soil, the reactive N forms are entirely available for microbial activities and growth, including the microbial N transformations producing N 2 O 6 . So, why did the highest emissions in this study occur at sites revegetated after thaw, and not at the bare parts of the exposure as expected? An explanation for the high N 2 O emissions from revegetated compared to bare Yedoma could be that plant colonization indicates stabilization of the thawing Yedoma slope after the initial stages of rapid degradation and thaw slumping. This stabilization occurs over a time span of years to decades and is coupled with decreased sediment and water input and improved drainage 19 . The high-emitting revegetated Yedoma surfaces on Kurungnakh were located in the middle part of the exposure (Supplementary Fig. 6a ) with intermediate soil moisture content (water-filled pore space (WFPS) 42–84%), as were the high-emitting Yedoma surfaces revegetated with mosses in Duvanny Yar (Fig. 2a ). The revegetated Yedoma surfaces with grasses in Duvanny Yar were located lower down the slope and had higher moisture content (WFPS 69–90%). The role of soil moisture as a primary environmental control on N 2 O fluxes, and the bell-shaped dependence of N 2 O fluxes on soil moisture peaking at the intermediate soil moisture range are well-documented 6 , 27 .
At intermediate soil moisture content, both oxic and anoxic microsites coexist, providing suitable environments for the two main microbial processes responsible for N 2 O production in soils: aerobic nitrification (oxidation of NH 4 + via nitrite (NO 2 − ) to NO 3 − , N 2 O as by-product) and anaerobic denitrification (reduction of NO 3 − and NO 2 − to gaseous N forms NO, N 2 O and N 2 ) 31 . Intermediate soil moisture content is also optimal for N mineralization (N release from organic matter as a result of microbial decomposition), which is suppressed by very wet or very dry soil conditions 7 . At permafrost thaw sites, liberation of mineral N species from permafrost directly at thaw enhances N availability in the short-term 32 , 33 , but in the long-term, post-thaw N mineralization is a more important mechanism of mineral N supply 34 . While there was no correlation between NH 4 + content and N 2 O flux and only a weak positive correlation between NO 3 − content and N 2 O flux ( R = 0.25, p < 0.05, n = 30), we found strong positive correlations between N 2 O emissions and net N mineralization ( R = 0.68, p < 0.001, n = 30) and net nitrification rates ( R = 0.68, p < 0.001, n = 30). The improved drainage and the associated enhancement of nitrification (NO 3 – supply) likely acted as an important trigger for the substantial N 2 O release from post-thaw Yedoma. On Kurungnakh, the net N mineralization rates were higher in Yedoma revegetated with grasses than in bare freshly thawed Yedoma (Fig. 3b , Supplementary Table 3 ). The negative net N mineralization, i.e., net N immobilization, in freshly thawed Yedoma can be explained by high uptake of mineral N species into microbial biomass, exceeding the rate of N liberated from organic matter.
In contrast, high net N transformation rates in revegetated Yedoma indicate that the microbial needs for mineral N are well met as a result of continued mineralization after thaw, which allows N 2 O emissions to occur even in the presence of plant N uptake. Even stronger net immobilization than in freshly thawed Yedoma was found in vegetated Holocene cover (Fig. 3b , Supplementary Table 3 ), suggesting limited N mineralization in this dry soil (WFPS 13 ± 6 %; Supplementary Table 1 ) with a high C/N ratio (38 vs. 14–15 in Yedoma; Supplementary Table 2 ) 35 . The increasing trend with post-thaw age was even stronger for net nitrification than for net mineralization (Fig. 3b , Supplementary Table 3 ). Although plants can effectively compete for N species with soil microbes and suppress N losses by inhibition of nitrification and denitrification processes 36 , they may also promote soil N cycling processes. Rhizosphere priming 37 , 38 , 39 is the term used for the summed effects of different mechanisms by which plants enhance SOM decomposition. These mechanisms include rhizospheric deposition of labile carbon compounds, which provide an easily available source of energy and C, and organic acids, which help to release protective organic-mineral associations, as well as the effect of roots on soil aggregation. Positive priming of N mineralization has been previously reported in high-latitude ecosystems 40 , 41 and in nutrient-rich croplands grown with perennial grasses 42 . In our study, there was a tendency towards a higher gross N mineralization rate in Yedoma revegetated with grasses compared to freshly thawed Yedoma (Fig. 3b ).
Also, the positive correlations of N 2 O emissions with C content ( R = 0.52, p = 0.05, n = 15), N content ( R = 0.60, p = 0.02, n = 15) and CO 2 fluxes (ecosystem respiration; R = 0.41, p = 0.002, n = 55) in the optimal moisture range of WFPS 45–85% suggest that plant-derived organics might stimulate N cycling processes at the Kurungnakh exposure. Additionally, grasses may have caused changes in the soil porosity and macropore structure that favor N 2 O production 43 , 44 . However, it is difficult to separate the plant effects on N 2 O emissions from the effects of moisture changes. Similarly, an increase in ecosystem respiration rates with slope stabilization and revegetation was observed at both exposures (Supplementary Fig. 6b , Supplementary Table 1 ), reflecting the joint effect of drying and increased plant C input, which cannot be distinguished from each other at this stage.

Changes in microbial community composition related to N cycling and N 2 O production

By using a targeted metagenomics tool designed to capture the genes responsible for key functions of the N cycle (Ref. 45 ; see Methods), we here reveal another important mechanism driving the increase in N 2 O emissions with time after thaw: changes in microbial community composition. We observed significant changes across the Yedoma exposure in the relative abundance of nitrification and denitrification genes with time after thaw, associated drainage and plant colonization (Fig. 4 , Supplementary Fig. 8 ). These changes occurred within just a couple of years and led to a strikingly different microbial community structure related to N cycling in thawed Yedoma compared to the Holocene cover deposits that feature the well-developed cryosols prevailing in the region.
The few studies that have previously reported changes in N cycling genes associated with permafrost thaw are in line with our findings here: Significant increases in denitrification genes ( norB, nirS , nosZ ) have been observed following thaw 30 , 46 .

Fig. 4: Relative abundance of selected N cycling genes at the Kurungnakh exposure from all functional gene sequences captured with the targeted metagenomics tool. a Relative abundance of amoA gene (including bacterial and archaeal). b Relative abundance of nir gene (including both nirK and nirS ). c Relative abundance of nosZ gene. d Ratio of ( nirK + nirS) / nosZ genes. The studied surfaces are arranged according to the distance from the Yedoma cliff border, with intact Holocene cover on the top of the Yedoma exposure on the left and earliest thawed revegetated Yedoma on the right side. Small gray symbols indicate values for individual samples, large red symbols indicate means, and error bars indicate standard error of mean ( n = 3 biologically independent samples). Lower case letters indicate significant differences between studied soils (Kruskal–Wallis test with pairwise comparisons with Dunn’s test; unadjusted p < 0.05). VH Vegetated Holocene cover, BYF Bare freshly thawed Yedoma, BYE Bare earlier thawed Yedoma, VYM Yedoma revegetated with mosses, and VYG Yedoma revegetated with grasses.

In detail, we found an increase in the relative abundance of the amoA gene (the first step of nitrification) among all the captured genes, from 0.6% in freshly thawed Yedoma to 2.5–3.5% in revegetated Yedoma surfaces (Fig. 4 ). These results were supported by increasing copy numbers of bacterial and total (archaeal + bacterial) amoA genes with time after thaw (Supplementary Fig. 7 ).
At the same time, the proportion of the nir genes ( nirK + nirS ) contributing to N 2 O production doubled from 15% to 29% in Yedoma revegetated with grasses, which was further coupled with a halved proportion of the nosZ gene (catalyzing the reduction of N 2 O into N 2 ; Fig. 4 ). These opposite trends in nir and nosZ genes resulted in a significant increase of the ( nirK + nirS )/ nosZ ratio, a commonly used indicator of N 2 O production potential in soils (Fig. 4 ), which was shown to increase with permafrost thaw in mineral upland soils 30 . Vegetated Holocene cover had the lowest relative abundances of amoA and nir , and the highest relative abundance of nosZ among all studied soils, which together explain the low N 2 O emissions there. In addition to the above-mentioned denitrification genes, the relative abundance of the nrfA gene encoding nitrite reduction to ammonia was also very low in vegetated Holocene cover (0.4%), intermediate in bare Yedoma (1.7–1.8%) and highest in revegetated Yedoma (2.0–3.2%; Supplementary Fig. 9 ). The nrfA gene is a key functional gene in dissimilatory NO 3 − reduction to ammonium (DNRA), and its increasing abundance with post-thaw age reflects the improved availability of NO 2 – from nitrification. The norB gene responsible for nitric oxide (NO) reduction to N 2 O did not show a similar gradual increase from bare to revegetated Yedoma sites as the other nitrification and denitrification genes (Supplementary Fig. 9 ). This might be a result of a methodological bias: since norB has been less studied than nir and nosZ 47 , the gene databases used for developing the probe capture tool do not include enough probe diversity for norB to cover Arctic variants. Also, the nitric oxide reductase encoded by norB has a role in detoxification of NO, giving this enzyme a broader importance than just catalyzing an intermediate step of the denitrification pathway 48 .
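The ( nirK + nirS )/ nosZ indicator described above is a simple ratio of relative gene abundances; a minimal sketch, with hypothetical abundance values chosen only to mirror the reported shift (nir share rising from 15% to 29% while the nosZ share halves):

```python
def n2o_indicator(nirK, nirS, nosZ):
    """(nirK + nirS) / nosZ ratio from relative gene abundances (%),
    a commonly used indicator of soil N2O production potential."""
    return (nirK + nirS) / nosZ

# Hypothetical relative abundances (% of all captured functional genes)
fresh = n2o_indicator(nirK=8.0, nirS=7.0, nosZ=10.0)   # nir share 15 %
grass = n2o_indicator(nirK=15.0, nirS=14.0, nosZ=5.0)  # nir share 29 %
# The ratio rises from 1.5 to 5.8, indicating higher N2O production potential.
```

A higher ratio reflects a community enriched in N 2 O producers relative to N 2 O reducers.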
To test whether the low N 2 O production in bare freshly thawed Yedoma was merely a consequence of the high water saturation, we dried the soil (25% reduction in the water content) and repeated the incubation under oxic and anoxic conditions with and without acetylene addition (see Methods). Drying indeed caused a 6-fold increase in N 2 O production under oxic treatment from the initial very low production rate (Wilcoxon signed-rank test, p = 0.03; Supplementary Fig. 10 ). Under anoxic treatment, drying with and without C addition even reduced the N 2 O production ( p = 0.01). However, when we amended the soil with NO 3 − in addition to C, the N 2 O production increased drastically in all three headspace treatments (725, 12 and 379-fold in oxic treatment, anoxic treatment and anoxic treatment with acetylene, respectively; p = 0.01). This shows that even after creating favorable conditions for nitrification by drying, N 2 O production in anoxic conditions was still limited by NO 3 − due to the low abundance of ammonia oxidizers. Taken together, our results represent clear in situ evidence for the microbial limitation of the N cycle and N 2 O emissions from thawing Yedoma permafrost due to the low abundance of ammonia oxidizers, confirming the findings of a recent laboratory incubation study, which discovered this phenomenon 49 . Similarly to the ammonia oxidizers in the present study and in the earlier laboratory study 49 , methanogens have previously been shown to represent a bottleneck in Yedoma permafrost biogeochemistry 50 : a small part of the microbial community carrying out an important function associated with permafrost-climate feedbacks. On the studied Yedoma exposures, the low CH 4 emissions from bare freshly thawed Yedoma (<0.5 mg C m −2 day −1 ) despite the high water saturation suggest that there, too, methane production was limited by a lack of methanogenic archaea (Supplementary Fig. 6 , Supplementary Table 1 ).
Most of the studied soils showed minor CH 4 emission or consumption rates (Supplementary Table 1 ), and even the highest median CH 4 emission of 1.95 mg C m −2 day −1 , observed in Yedoma revegetated with mosses in Kurungnakh, was modest compared to the emissions from polygonal wetlands (2–35 mg C m −2 day −1 ) 51 and small ponds (4–35 mg C m −2 day −1 ) 52 in the same region. The lack of recovery of methanogenic function with post-thaw age is readily explained by the drying associated with slope stabilization at such Yedoma exposures 19 , which creates unfavorable conditions for anaerobic methanogens. At the same time, our results demonstrate that although N 2 O production in recently thawed Yedoma permafrost is restricted by the microbial community composition, retrogressive thaw slumps provide ideal conditions for the development of an active N 2 O producing microbial community, leading to high N 2 O release within less than a decade. This highlights that short-term laboratory experiments indicating microbial limitations in the C and N cycles of permafrost soils 49 , 50 do not adequately represent the real changes in microbial communities and their functioning over time.

N losses from thawing Yedoma permafrost and their implications

We show here that thawing Yedoma exposures host sites with optimal conditions for intense microbial N cycling and associated N 2 O production. Although N 2 O emissions may decrease with further slope stabilization as a result of continuous N losses and establishment of full vegetation cover 30 , the retrogressive thaw at the same time keeps releasing fresh sediments, rich in N available for microbial activities. According to our remote sensing analysis using ArcticDEM and UAV data, the Yedoma cliff of Kurungnakh retreated in 2012–2019 as a result of permafrost thaw at a rate of 3.7 (2.5–5.7) m year −1 (median with (25th–75th percentiles); Supplementary Figs. 3 & 4 ; see Methods section for details).
Based on the typical ice content of Yedoma deposits and the total N content of freshly thawed permafrost, we estimated that this retrogressive thaw liberated at the thaw front as much as 1.7 kg of total N per m 2 per year, which was associated with a release of 39 g of mineral N per m 2 per year (see Methods section for details). These are remarkably high amounts of added N compared to the main pathways of external N input in high-latitude ecosystems: biological N fixation of 20–200 mg N m −2 year −1 53 and atmospheric N deposition of <200–300 mg N m −2 year −1 54 . The additional N from Yedoma permafrost will have important consequences for plant growth and associated C fixation 55 , lateral N losses to waterbodies 20 , 56 and gaseous N losses to the atmosphere 34 , and, importantly, N 2 O fluxes, as shown here. In parallel to the retrogressive thaw front, the zone with optimal conditions for high N 2 O emissions on the middle part of the slope will likely shift spatially but persist as an active zone along this retreating Yedoma shore. Based on the emissions from disturbed Yedoma revegetated with grasses on Kurungnakh (median flux multiplied by a snow-free season length of 100 days), we estimated that under these optimal conditions thawed Yedoma will lose 54.8 mg N m −2 as N 2 O to the atmosphere in a single year. This corresponds to 0.14% of the mineral N originally liberated at the permafrost thaw front from a similar area in a year (see above). This is seven times lower than the IPCC N 2 O emission factor for N fertilization in managed mineral soils (1%) 57 , but still high considering that it occurs in pristine northern soils, which are generally N limited 28 .
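The seasonal loss and emission-factor estimates above are straightforward arithmetic; a back-of-envelope check using the figures reported in this section (the variable names are ours):

```python
# Seasonal N2O-N loss from high-emitting Yedoma revegetated with grasses
flux = 548.0                  # ug N m-2 day-1, reported flux on Kurungnakh
season = 100                  # days, assumed snow-free season length
loss_mg = flux * season / 1000.0          # mg N m-2 per season -> 54.8

# Fraction of the mineral N liberated annually at the thaw front (39 g m-2)
mineral_n_mg = 39.0 * 1000.0              # g -> mg
frac_percent = 100.0 * loss_mg / mineral_n_mg    # -> ~0.14 %

# Comparison with the IPCC emission factor for fertilized mineral soils (1 %)
times_lower = 1.0 / frac_percent          # -> ~7-fold lower
```

The numbers reproduce the 54.8 mg N m −2 seasonal loss, the 0.14% fraction, and the roughly seven-fold gap to the 1% IPCC emission factor.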
While it is important to remember that such high N 2 O emissions will occur in particular settings (Yedoma exposed at the surface, suitable moisture content, sufficient time after thaw for establishment of an N 2 O producing microbial community), these conditions are not limited to the retrogressive thaw slumps along rivers studied here. Similar disturbed N-rich Yedoma with successional plant cover are widespread along thermokarst lake shores, coasts, slopes, and valleys across the Yedoma region (Supplementary Fig. 11 ). Widespread occurrence of such landforms suggests that our findings are the first indication of substantial N 2 O emissions over large areas in the Arctic. We show that N liberated from this ancient permafrost during thaw is highly available for mineralization and further microbial activities. With rapid Arctic warming and associated permafrost thaw, the huge N resources contained in Yedoma will become increasingly available, with important implications for ecosystem functioning and climate feedbacks at local to global scales.

Methods

Study sites

Nitrous oxide fluxes were studied at two study sites located in Northeast Siberia, Russia: the Kurungnakh exposure (72°20’ N, 126°17’ E), located on Kurungnakh-Sise Island in the Lena River Delta, and the Duvanny Yar exposure (68°38’ N, 159°09’ E), located by the Kolyma River (Fig. 1 ). Both study regions are underlain by continuous permafrost, and the climate is continental Arctic with mean annual air temperatures of −12.3 °C and −11 °C and annual rainfall of 169 mm and 197 mm in Kurungnakh and Duvanny Yar, respectively 58 , 59 , 60 . More information about the climatic conditions in the region, depositional characteristics and vegetation can be found in Supplementary Note 1 and the references therein.
Altogether, the following surface types were chosen for the study ( n = 5–10): 1) vegetated Holocene cover deposits overlying undisturbed Yedoma permafrost on the top of the exposure; 2) freshly thawed Yedoma bare of vegetation, close to thawing ice-wedges in the upper part of the exposure; 3) earlier thawed Yedoma bare of vegetation; 4) disturbed Yedoma in the lower, stabilized parts of the slope, revegetated by mosses; 5) disturbed Yedoma revegetated by grasses; and, only on Kurungnakh, 6) bare sand close to the river shore receiving Yedoma melt-waters by a small stream running through the exposure (Supplementary Figs. 1 & 2 ).

In situ N 2 O fluxes

In situ nitrous oxide (N 2 O) fluxes were measured by the static chamber technique 22 , twice in July 2016 on Kurungnakh and once in July 2017 in Duvanny Yar (see Supplementary Note 1 ). Five gas samples were drawn from the chamber headspace within a 50-minute enclosure time and transferred into pre-evacuated 12 ml glass vials (Labco) for storage until the analysis. Soil temperature and moisture as volumetric water content (VWC) were recorded in the vicinity of the chamber. The N 2 O mixing ratios were determined with a gas chromatograph (GC; Agilent 7890B, Agilent Technologies, Santa Clara, CA, USA) equipped with an autosampler (Gilson Inc., Middleton, WI, USA), an electron capture detector (ECD) for N 2 O and a flame ionization detector (FID) for CH 4 . Fluxes of N 2 O were calculated from the slope of the linear increase of the N 2 O mixing ratio in the chamber headspace as a function of time. Besides initial visual inspection, the quality control of gas flux results was based on inspection of Root Mean Square Error (RMSE) in ppm (RMSE > 3 * SD) as compared to the variability of standard gas mixtures in a similar range. Methane (CH 4 ) fluxes were obtained from the same chamber measurements as the N 2 O fluxes.
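The slope-to-flux conversion described above can be sketched as follows. This is a minimal illustration only: the chamber volume, footprint area, and mixing-ratio series are hypothetical, and the mole conversion assumes the ideal gas law:

```python
import numpy as np

def n2o_flux(times_h, ppb, volume_m3, area_m2, temp_c, pressure_pa=101325.0):
    """Chamber N2O flux (ug N m-2 day-1) from the linear slope of the
    headspace mixing ratio vs. time (static chamber technique)."""
    slope_ppb_h, _ = np.polyfit(times_h, ppb, 1)          # ppb per hour
    R = 8.314                                             # J mol-1 K-1
    mol_air = pressure_pa * volume_m3 / (R * (temp_c + 273.15))
    mol_n2o_h = slope_ppb_h * 1e-9 * mol_air              # mol N2O per hour
    g_n_h = mol_n2o_h * 28.0                              # 2 N atoms at 14 g mol-1
    return g_n_h * 1e6 * 24.0 / area_m2                   # ug N m-2 day-1

# Hypothetical example: five headspace samples during a chamber enclosure
t = np.array([0, 10, 20, 30, 40]) / 60.0                  # hours
c = np.array([330.0, 331.2, 332.4, 333.5, 334.8])         # ppb N2O
flux = n2o_flux(t, c, volume_m3=0.015, area_m2=0.06, temp_c=10.0)
```

With these made-up values the flux lands near 50 μg N m −2 day −1 , i.e. within the typical range reported for permafrost-affected soils.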
Carbon dioxide (CO 2 ) fluxes in the dark including the plants (ecosystem respiration) were measured with the dynamic chamber technique 61 using an infrared gas analyzer (Li-840, LiCor, Lincoln, Nebraska, USA in Kurungnakh; EGM-4, PP Systems, Amesbury, MA, USA in Duvanny Yar).

Soil sampling and analysis

Soil samples were taken from the topsoil (0–10 cm), cleaned from stones and roots and homogenized by sieving (mineral soils; 5 mm mesh size) or by hand-mixing (organic soils). Bulk density was determined from volumetric soil samples after drying until constant weight at 60 or 105 °C for organic and mineral soils, respectively. Particle density was determined by a pycnometric method. Total content of C, organic C and N, as well as δ 13 C of SOC and δ 15 N in the bulk soil were analyzed with an elemental analyzer (Thermo Finnigan Flash EA 1112 Series, San Jose, CA, USA). For organic C analysis, inorganic C was removed from a subsample with the acid fumigation method 62 . Water-filled pore space (WFPS) was calculated from VWC measured in situ, using bulk density and particle density determined as described above. Soil pH was measured from slurries with a soil:H 2 O ratio of 1:4 (w/v). For determination of mineral N content, ammonium (NH 4 + ) and nitrate (NO 3 − ) were extracted from freshly sampled soils at the field laboratory (1 M KCl, a 1:3 volume ratio of soil to extractant). The extracts were frozen for storage until the analysis by spectrophotometric methods as previously described 61 . See Supplementary Note 1 for further details about the soil analysis.

Gross and net N transformation rates

Nitrogen transformation rates were determined in the field laboratory from freshly sampled soils to imitate the processes occurring in the field during the flux measurements as realistically as possible.
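The WFPS calculation mentioned in the soil analysis above follows from VWC, bulk density, and particle density; a minimal sketch (the input values are hypothetical, and the 2.65 g cm −3 default is a common mineral-soil assumption for illustration, whereas the study determined particle density pycnometrically):

```python
def wfps(vwc, bulk_density, particle_density=2.65):
    """Water-filled pore space (%) from volumetric water content (cm3 cm-3)
    and bulk/particle densities (g cm-3). Porosity = 1 - BD/PD."""
    porosity = 1.0 - bulk_density / particle_density
    return 100.0 * vwc / porosity

# Hypothetical mineral Yedoma topsoil: VWC 0.35, bulk density 1.2 g cm-3
w = wfps(0.35, 1.2)   # porosity ~0.55 -> WFPS ~64 %
```

A WFPS of ~64% falls in the intermediate moisture range identified above as optimal for N 2 O production.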
Due to time limitations and logistical challenges related to the fieldwork in Duvanny Yar, we did not measure N transformation rates from Duvanny Yar soils, but only from soils sampled in our primary study site, Kurungnakh. For the determination of gross N mineralization and nitrification rates, we used the pool dilution method, which is based on labeling the product pool (NH 4 + for mineralization, NO 3 − for nitrification) with the heavy N isotope 15 N 63 , 64 , 65 . Due to the low mineral N content and high N immobilization, we were not able to determine gross nitrification in any of the studied soils and gross mineralization in some of the soils. However, even in these cases, we could use the data to calculate the net N mineralization and nitrification rates as described below. In brief, two sets of samples (2 g of fresh soil) were prepared for both N mineralization and nitrification measurements. We added 500 μl of 0.25 mM, 10 at-% ( 15 NH 4 ) 2 SO 4 solution to the N mineralization samples, and 500 μl of 0.50 mM, 10 at-% K 15 NO 3 solution for nitrification samples. This N addition amounted to 2.1–2.6 mg N (kg DW) −1 depending on the soil moisture content. After labeling, the samples were incubated for 24 h at the approximate in situ temperature of ~5 °C. Nutrient levels (NO 3 − and NH 4 + ) were determined from samples extracted at two time points, 4 and 24 h, with 2 M KCl as described above. Content of 15 N in NH 4 + extracts was analyzed by continuous-flow isotope ratio mass spectrometer (IRMS; Thermo Finnigan DELTA XPPlus, San Jose, CA, USA) coupled to an elemental analyzer (Thermo Finnigan Flash EA 1112 Series) and an open split interface (Thermo Finnigan Conflow III) after conversion to solid phase by the microdiffusion method as previously described 22 . Net N mineralization rates were calculated by dividing the difference of the total mineral N content (NH 4 + and NO 3 − ) between the first and second sampling points by the incubation time.
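The net-rate calculation just described reduces to a pool difference over time; a minimal sketch with hypothetical pool sizes:

```python
def net_rate(pool_t1, pool_t2, t1_h=4.0, t2_h=24.0):
    """Net transformation rate (mg N (kg DW)-1 day-1) from the change in an
    extractable N pool between two incubation sampling points.
    Pool = NH4+ + NO3- for net mineralization; NO3- alone for net nitrification.
    Negative values indicate net immobilization."""
    return (pool_t2 - pool_t1) / (t2_h - t1_h) * 24.0

# Hypothetical example: total mineral N rises from 3.0 to 3.5 mg N (kg DW)-1
r = net_rate(3.0, 3.5)   # -> 0.6 mg N (kg DW)-1 day-1, net mineralization
```

A declining pool would yield a negative rate, i.e. the net immobilization reported for freshly thawed Yedoma and Holocene cover.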
Net ammonification and nitrification rates were calculated similarly from the change in NH 4 + and NO 3 − contents, respectively.

N 2 O production and total denitrification rates in laboratory incubations

Experiment 1: Nitrous oxide production under different headspace conditions

Rates of N 2 O production and total denitrification were determined by incubation experiments from soils sampled in Kurungnakh, our primary study site (see above under Gross and net N transformation rates). The soils were kept frozen during storage and shipment to the laboratory, thawed, homogenized by hand, and further stored at 4 °C. After three days of acclimatization at the incubation temperature, the soil samples (10 and 25 g fresh weight for organic and mineral soils, respectively; n = 5) were incubated at field moisture content at 10 °C under three different headspace treatments: 1) oxic, 2) anoxic and 3) anoxic with acetylene. For the oxic treatment (1), laboratory air was used as headspace. For the anoxic treatments without and with acetylene (2 and 3), the flasks were closed inside a glove bag after flushing several times with N 2 gas (purity ≥ 99.999%). Acetylene was added into the third treatment at 10 vol-% to block the last step of denitrification, reduction of N 2 O to N 2 , thus making N 2 O the final denitrification product 66 . Gas samples were taken at five time points on days 0, 1, 2, 3, and 6, and analysed for N 2 O mixing ratios with GC as described above. For the oxic treatment, flux per mass of dry soil was calculated from the slope of a linear regression fitted to the first four sampling points with constant N 2 O production rate. For the anoxic treatments, we report the maximum N 2 O production between two sampling points, because we often observed N 2 O consumption (treatment 2 without acetylene) or steady state (treatment 3 with acetylene) after initial N 2 O production, indicating reduction of N 2 O to N 2 .
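The maximum-between-sampling-points rule used for the anoxic treatments can be sketched as follows (the cumulative N 2 O amounts are hypothetical, chosen to show initial production followed by consumption):

```python
import numpy as np

def max_production_rate(days, n2o_ug):
    """Maximum N2O production rate (ug per flask per day) between
    consecutive sampling points; used when later N2O consumption
    (reduction to N2) masks the initial production."""
    rates = np.diff(np.asarray(n2o_ug)) / np.diff(np.asarray(days, dtype=float))
    return rates.max()

# Hypothetical cumulative N2O (ug N per flask) on days 0, 1, 2, 3 and 6;
# the decline after day 3 mimics N2O reduction to N2
r = max_production_rate([0, 1, 2, 3, 6], [0.0, 1.2, 2.0, 2.1, 1.5])
```

Here the steepest early increase (day 0 to 1) sets the reported rate, while the later decline would bias a single end-point calculation low.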
See Supplementary Note 1 for more details about the incubation experiments.

Experiment 2: Response of nitrous oxide production to different moisture conditions and carbon and nitrogen sources

The aim of the second incubation experiment was to investigate the factors limiting N 2 O production in freshly thawed Yedoma. We dried the freshly thawed Yedoma to reduce the water content by 25%, weighed 17 g FW into incubation flasks, and incubated them under the three headspace treatments described above. For each headspace treatment, we applied three different amendments within a volume of 250 μl per flask: control (milli-Q H 2 O addition), addition of C (glucose; 67 μg C (g DW) −1 , equal to 0.3% of SOC), or addition of C (as above) and NO 3 − (4.7 μg N (g DW) −1 , equal to 0.3% of TN). The GC analysis and calculations followed the procedure described above for Experiment 1.

Molecular studies on the microbial community participating in N cycling

For molecular studies, we sampled five surface types in the primary study site Kurungnakh ( n = 3). The studied surfaces represented different stages of thermokarst and post-thaw succession: vegetated Holocene cover, freshly thawed Yedoma, earlier thawed Yedoma, and Yedoma revegetated with mosses or with grasses. We extracted DNA from these samples in three technical replicates as described previously 67 ; see Supplementary Note 1 for details. Quantitative PCR (qPCR) of archaeal and bacterial amoA and 16 S rRNA genes was performed using the reaction and cycling conditions described previously and summarized in Supplementary Table 5 67 , 68 , 69 , 70 . All reactions were performed in duplicate. The specificity of the qPCR amplification products was verified by melting-curve analysis and gel electrophoresis. To detect changes in the N cycling-relevant microbial community structure with permafrost thaw and post-thaw succession, we studied the relative abundances of the key N cycling genes using a sequence-capture metagenomics tool.
The method has been validated and tested for overall performance and specificity of the probes used for sequence capture 45 , and it has been successfully applied for studying N cycling genes in bioreactors treating aquaculture effluents 71 . This method is designed for targeting and sequencing the organisms carrying the key N cycling genes involved in the following processes: N 2 fixation ( nifH ), nitrification ( amoA ), NO 3 − reduction ( narG, napA ), denitrification ( nir ( nirK + nirS ) , norB, nosZ ), dissimilatory nitrate reduction to ammonium (DNRA) ( nrfA ) and anammox ( hdhA ), using gene-specific probes following the NimbleGen SeqCap EZ protocol by Roche NimbleGen, Inc. A detailed description of the method can be found in Supplementary Note 1 .

Statistical analysis

All statistical analyses were conducted with R, version 3.6.1 72 . We used histograms, Q-Q plots and the Shapiro-Wilk normality test for testing the normal distribution of the data. Differences in N 2 O fluxes, soil physicochemical characteristics and N transformation rates between the surface types were tested separately within each study site (fluxes, soil characteristics) or for each treatment (N 2 O production in soil incubations). For non-normally distributed data, we used the non-parametric Kruskal–Wallis test followed by pairwise comparisons with Dunn's test of the FSA package 73 . For normally distributed data, we used Welch's one-way analysis of variance (ANOVA) followed by pairwise comparisons with the Games-Howell post hoc test of the userfriendlyscience package 74 . The equality of variances prior to ANOVA was tested with Bartlett's test. For testing the differences in molecular data (relative abundances, copy numbers and gene ratios; n = 3) between the studied soils, the Kruskal–Wallis test with Dunn's post hoc tests was also used. Treatment differences in the incubation experiments were tested with the Wilcoxon signed-rank test.
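As an illustration of the non-parametric route described above (the study used R's kruskal.test with Dunn's test from the FSA package), the Kruskal–Wallis H statistic with tie correction can be computed directly; this sketch omits the chi-square p-value and the post hoc comparisons:

```python
# Pure-Python Kruskal-Wallis H statistic with average ranks and tie correction.
# Illustrative only; p-values and Dunn's post hoc tests are not computed here.

def kruskal_wallis_h(*groups):
    pooled = sorted(v for g in groups for v in g)
    n = len(pooled)
    rank_of = {}
    tie_term = 0
    i = 0
    while i < n:
        j = i
        while j < n and pooled[j] == pooled[i]:
            j += 1
        rank_of[pooled[i]] = (i + 1 + j) / 2.0  # average rank of the tied run
        t = j - i
        tie_term += t ** 3 - t
        i = j
    h = 12.0 / (n * (n + 1)) * sum(
        sum(rank_of[v] for v in g) ** 2 / len(g) for g in groups
    ) - 3.0 * (n + 1)
    correction = 1.0 - tie_term / float(n ** 3 - n)
    return h / correction

# Toy example with three clearly separated groups and no ties:
print(round(kruskal_wallis_h([1, 2, 3], [4, 5, 6], [7, 8, 9]), 1))
```

With k groups, H is referred to a chi-square distribution with k - 1 degrees of freedom to obtain the p-value.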
The role of soil characteristics, mineral N content and N transformation rates as drivers of in situ N 2 O fluxes was explored by non-parametric Spearman correlation analysis with the Hmisc package 75 .

Rate and volume of thermal erosion and related N mobilization at the Kurungnakh exposure

We estimated the rate and volume of thermal erosion for the south-eastern part of the riverbank on Kurungnakh Island (length of the section 1.7 km) using ArcticDEM datasets from March 2012 (WorldView-1/WorldView-1 imagery) and April 2014 (WorldView-2/WorldView-2 imagery) 76 , together with a digital elevation model (DEM) from unmanned aerial vehicle (UAV) imaging from July 2019. The analysis was done in QGIS v 3.14 (with SAGA and GRASS); a detailed description of the method is available in Supplementary Note 1 . The calculation of the Yedoma retreat between 2012 and 2014 and between 2014 and 2019 was based on the following steps: 1) delineating the shoreline in 2019 based on the DEM from UAV imaging in 2019; 2) delineating the Yedoma cliff boundary at the top of the exposure based on the DEM and orthophoto mosaic from UAV imaging in 2019; and 3) calculating the distances between the 2019 shoreline and the 2019 cliff boundary, and constructing intersection points of these transects with the cliff boundaries determined from the ArcticDEM datasets from March 2012 and April 2014. This algorithm resembles procedures available in the free AMBUR package 77 . Between 2012 and 2019 the Yedoma boundary retreated as a result of thermal erosion by 3.7 (2.5–5.7) m year −1 (median with 25th–75th percentiles). We then calculated the volume of eroded material along the studied cliff boundary from the DEM difference (DEM-2019 subtracted from ArcticDEM-2014 and ArcticDEM-2012; a positive value indicates subsidence due to thaw and erosion). For this calculation, we used the 'Raster surface volume' tool in QGIS. The resulting volume estimate is 223,502 m 3 .
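The retreat-rate idea behind step 3 can be illustrated with a toy nearest-distance computation between two cliff boundaries; the coordinates, group sizes and resulting rate below are invented for illustration, and the real analysis used QGIS transects between DEM-derived boundaries:

```python
# Toy sketch: for each vertex of the later cliff boundary, find the nearest
# point on the earlier boundary and divide by the elapsed years. Coordinates
# are hypothetical metric values, not from the study.

import math

def nearest_distance(p, polyline):
    """Minimum distance from point p to a polyline given as a list of vertices."""
    best = float("inf")
    for (x1, y1), (x2, y2) in zip(polyline, polyline[1:]):
        dx, dy = x2 - x1, y2 - y1
        if dx == 0 and dy == 0:
            t = 0.0
        else:
            t = max(0.0, min(1.0, ((p[0] - x1) * dx + (p[1] - y1) * dy) / (dx * dx + dy * dy)))
        best = min(best, math.hypot(p[0] - (x1 + t * dx), p[1] - (y1 + t * dy)))
    return best

boundary_2012 = [(0.0, 0.0), (100.0, 0.0)]    # earlier boundary (toy)
boundary_2019 = [(0.0, 25.0), (100.0, 27.0)]  # later boundary (toy)
years = 7.0

rates = sorted(nearest_distance(p, boundary_2012) / years for p in boundary_2019)
median = (rates[len(rates) // 2] + rates[(len(rates) - 1) // 2]) / 2.0
print(round(median, 2))  # m year^-1
```

In practice the percentiles (25th–75th) would be taken over many transects along the 1.7 km section rather than over two toy points.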
We further approximated how much total and mineral N was annually liberated at the Kurungnakh exposure as a result of permafrost thaw, using the annually eroded volume per unit area, the total ground-ice content of 82 vol % 14 , the bulk density of 1.22 g cm −3 , and the total N content of 0.16% and mineral N content of 35.3 mg N (kg DW) −1 detected in freshly thawed Yedoma in this study (Supplementary Table 2 ). This resulted in a total N release from thawing permafrost at the Kurungnakh cliff boundary of 1.7 kg N m −2 year −1 and a mineral N release as NH 4 + of 39 g N m −2 year −1 .

Data availability

Processed data files supporting the findings can be accessed in Zenodo 78 . The metagenomic data are deposited in the SRA database under the BioProject accession PRJNA771879.

Change history

21 January 2022: The Supplementary Information file was missing from this article and has now been uploaded. The original article has been corrected.
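The N-release approximation above can be reproduced numerically. In the sketch below, the eroded volume per unit area of 5 m 3 m −2 year −1 is an assumed illustrative value chosen to be roughly consistent with the reported totals (it is not stated explicitly in the text), and applying the bulk density to the ice-free sediment fraction only is a simplifying reading of the method:

```python
# Back-of-envelope reproduction of the N-release approximation.
# volume_per_area is an ASSUMED illustrative value; the other constants are
# those given in the text.

volume_per_area = 5.0   # m^3 m^-2 year^-1 (assumption for illustration)
ice_content = 0.82      # total ground-ice content (volume fraction)
bulk_density = 1.22e3   # kg m^-3
total_n = 0.0016        # total N, mass fraction of dry sediment (0.16%)
mineral_n = 35.3e-6     # mineral N, kg N per kg DW (35.3 mg N kg^-1)

# Dry sediment mass thawed per m^2 of exposure per year:
sediment_mass = (1.0 - ice_content) * bulk_density * volume_per_area  # kg DW m^-2 yr^-1

total_n_release = sediment_mass * total_n           # kg N m^-2 year^-1
mineral_n_release = sediment_mass * mineral_n * 1e3  # g N m^-2 year^-1

# Under the assumed volume this lands near the reported 1.7 kg and 39 g figures.
print(round(total_n_release, 2), round(mineral_n_release, 1))
```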
A team at UC Riverside led by computer science assistant professor Ahmed Eldawy is collaborating with researchers at Stanford University and Vanderbilt University to develop a dataset that uses data science to study the spread of wildfires. The dataset can be used to simulate the spread of wildfires to help firefighters plan emergency response and conduct evacuation. It can also help simulate how fires might spread in the near future under the effects of deforestation and climate change, and aid risk assessment and planning of new infrastructure development. The open-source dataset, named WildfireDB, contains over 17 million data points that capture how fires have spread in the contiguous United States over the last decade. The dataset can be used to train machine learning models to predict the spread of wildfires. "One of the biggest challenges is to have a detailed and curated dataset that can be used by machine learning algorithms," said Eldawy. "WildfireDB is the first comprehensive and open-source dataset that relates historical fire data with relevant covariates such as weather, vegetation, and topography." First responders depend on understanding and predicting how a wildfire spreads to save lives and property and to stop the fire from spreading. They need to figure out the best way to allocate limited resources across large areas. Traditionally, fire spread is modeled by tools that use physics-based modeling. This method could be improved with the addition of more variables, but until now, there was no comprehensive, open-source data source that combines fire occurrences with geo-spatial features such as mountains, rivers, towns, fuel levels, vegetation, and weather. Eldawy, along with UCR doctoral student Samriddhi Singla and undergraduate researcher Vinayak Gajjewar, utilized a novel system called Raptor, which was developed at UCR to process high-resolution satellite data such as vegetation and weather. 
Using Raptor, they combined historical wildfires with other geospatial features, such as weather, topography, and vegetation, to build a dataset at a scale covering most of the United States. WildfireDB maps historical fire data in the contiguous United States from 2012 to 2017 with spatial and temporal resolutions that allow researchers to home in on the daily behavior of fire in regions as small as 375-meter square polygons. Each fire occurrence includes the type of vegetation, fuel type, and topography. The dataset does not include Alaska or Hawaii. To use the dataset, researchers or firefighters can select information relevant to their situation from WildfireDB and train machine learning models that can model the spread of wildfires. These trained models can then be used by firefighters or researchers to predict the spread of wildfires in real time. "Predicting the spread of wildfire in real time will allow firefighters to allocate resources accordingly and minimize loss of life and property," said Singla, the paper's first author. A visualization of the dataset is available online.
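The "select rows, train a model, predict" workflow described above can be sketched with a toy logistic regression; the feature names and values below are hypothetical illustrations, not the actual WildfireDB schema or data:

```python
# Toy sketch of training a spread-prediction model on WildfireDB-like rows.
# Feature names (fuel load, wind speed, humidity) and labels are invented.

import math

def train_logreg(rows, labels, lr=0.5, epochs=2000):
    """Plain stochastic-gradient-descent logistic regression; returns (weights, bias)."""
    w = [0.0] * len(rows[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(rows, labels):
            p = 1.0 / (1.0 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))
            g = p - y  # gradient of the log loss w.r.t. the logit
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def predict(w, b, x):
    return 1.0 / (1.0 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))

# Hypothetical rows: (fuel_load, wind_speed, humidity), all scaled to [0, 1];
# label 1 means the fire spread into the neighbouring cell the next day.
rows = [(0.9, 0.8, 0.1), (0.8, 0.9, 0.2), (0.2, 0.1, 0.9), (0.1, 0.2, 0.8)]
labels = [1, 1, 0, 0]

w, b = train_logreg(rows, labels)
print(predict(w, b, (0.85, 0.85, 0.15)) > 0.5)
```

A real workflow would of course use the dataset's actual covariates and a stronger model; the sketch only shows the shape of the supervised-learning setup.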
ayanmukhopadhyay.github.io/files/neurips2021.pdf
Chemistry
Hemoglobin acts as a chemosensory cue for mother mice to protect pups
Takuya Osakada et al, Hemoglobin in the blood acts as a chemosensory signal via the mouse vomeronasal system, Nature Communications (2022). DOI: 10.1038/s41467-022-28118-w Journal information: Nature Communications
http://dx.doi.org/10.1038/s41467-022-28118-w
https://phys.org/news/2022-02-hemoglobin-chemosensory-cue-mother-mice.html
Abstract

The vomeronasal system plays an essential role in sensing various environmental chemical cues. Here we show that exposing mice to blood, and consequently to hemoglobin, results in the activation of vomeronasal sensory neurons expressing a specific vomeronasal G protein-coupled receptor, Vmn2r88; this activation is mediated by an interaction site, Gly17, on hemoglobin. The hemoglobin signal reaches the medial amygdala (MeA) in both male and female mice. However, it activates the dorsal part of the ventromedial hypothalamus (VMHd) only in lactating female mice. As a result, in lactating mothers, hemoglobin enhances digging and rearing behavior. Manipulation of steroidogenic factor 1 (SF1)-expressing neurons in the VMHd is sufficient to induce the hemoglobin-mediated behaviors. Our results suggest that the oxygen carrier hemoglobin plays a role as a chemosensory signal, eliciting behavioral responses in mice in a state-dependent fashion.

Introduction

Animals use their olfactory systems to acquire information about the external environment. Mice, for example, show vigorous sniffing behavior that allows for sensing not only volatile airborne chemicals but also nonvolatile compounds, through direct contact with the nose 1 . Volatile odorants are recognized by odorant receptors in the olfactory epithelium and often elicit acute behavioral responses, such as attraction or aversion 2 , 3 , 4 , 5 . In contrast, nonvolatile cues are detected by vomeronasal receptors in the vomeronasal organ (VNO) located beneath the nasal cavity, and usually convey sociosexual information associated with stereotypical behavior or emotional changes 6 , 7 . Secretions such as urine, tear fluid, and saliva contain both volatile and nonvolatile olfactory cues. For example, male mouse urine contains major urinary protein 3 (MUP3), which elicits intermale aggression 8 .
Male mouse tears contain exocrine gland-secreting peptide 1 (ESP1), which enhances the female sexual behavior called lordosis as well as male aggressiveness 9 , 10 , 11 , 12 . Juvenile mice secrete ESP22, which suppresses sexual behavior in adult mice, lessening the number of competitors 13 , 14 . In addition to these intraspecies signals, the VNO also receives interspecies signals, such as rat cystatin-related protein 1 (ratCRP1), which acts as a pheromone in rats but as a predator signal in mice 15 . Olfactory cues thus appear to help animals respond appropriately to environmental changes by affecting the internal physiological state of the animal. The VNO expresses two types of G protein-coupled receptors (187 V1Rs and 121 V2Rs) that recognize various environmental cues 16 , 17 . Only a few vomeronasal ligand-receptor pairs have been identified; ESP1 and ESP22 are detected by single specific V2R receptors, V2Rp5 (Vmn2r116) 10 and V2Rp4 (Vmn2r115) 14 , respectively. The signals received by these specific V2Rs are conveyed first to the accessory olfactory bulb (AOB), and then to limbic brain regions such as the amygdala and hypothalamus, wherein specific neural circuits regulate distinct output behaviors 10 , 12 , 14 . The vast majority of V2Rs, however, remain orphan, limiting our understanding of the sensory mechanisms underlying VNO-mediated behavioral output. When we discovered the ESP1 molecule in 2005, we also reported that the submaxillary gland contains a molecule that activates vomeronasal sensory neurons (VSNs) in the mouse VNO 9 . It was later discovered that this activity was due to contamination of the gland with blood, which motivated us to identify the molecular basis, and the neural mechanisms in both the periphery and the limbic brain regions, of the vomeronasal stimulatory activity of blood. In this study, we first identify a vomeronasal stimulatory molecule in the blood and define its receptor.
We then reveal the behavior evoked by the blood factor, and a neural pathway responsible for the output.

Results

Mouse VSNs activated by C57BL/6 male mouse blood hemoglobin

We first set out to identify the molecule in mouse blood that activates VSNs. When C57BL/6 male mice were exposed to 1 µl of blood, we observed induction of expression of c-Fos, an immediate early gene, or phosphorylation of ribosomal protein S6 (pS6), a sensitive neural activity indicator, in a subset of neurons in the basal layer of the vomeronasal epithelium that expresses Gα o and vomeronasal type 2 receptors (V2Rs) (Fig. 1a, b ) 18 , 19 . Blood-dependent c-Fos expression was also observed in the posterior region of the AOB, the first center of the vomeronasal sensory system, where the axons of V2R-type neurons project (Fig. 1c ) 20 . The response was dose-dependent, with 3 µl of blood inducing the maximal c-Fos response in the mitral/tufted cell layer (M/T) of the AOB (Fig. 1d ). These results suggest that blood contains compound(s) that activate V2R-type VSNs. Fig. 1: Mouse VSNs activated by C57BL/6 male mouse blood hemoglobin. a Representative immunohistochemical image (left) and number of c-Fos-positive cells per VNO section from male mice stimulated by blood (1 µl) or control buffer (right). n = 3 for control and blood. Error bars, S.E.M. Arrowheads represent example c-Fos-positive cells. Scale bar, 100 µm. b Representative immunohistochemical images of anti-pS6 and anti-Gαo staining (left) and number of total pS6-positive cells in the Gαo and Gα i2 zones of VNO sections from blood- or control buffer-stimulated male mice (right). Arrowheads represent example pS6-positive cells (left) and double-positive VSNs (right). n = 3 for control and blood. Error bars, S.E.M. Scale bar, 100 µm.
c Representative immunohistochemical image (left) and number of total c-Fos-positive cells in the glomerular layer (Gl) and mitral/tufted cell layer (M/T) of AOB sections from male mice stimulated by blood (1 µl) (right). n = 3 for control and blood. Error bars, S.E.M. Arrowheads represent example c-Fos-positive cells. Scale bar, 100 µm. d The number of c-Fos-expressing cells in the M/T cell layer of AOB sections from male mice stimulated with the indicated amount of blood. n = 3 for 0, 0.01, 0.03, 0.3, and 3 µl, and n = 4 for 0.1, 1, and 10 µl. Error bars, S.E.M. e Separation of blood by centrifugation. The separated blood components (plasma, cell lysate and residue) were used for SDS-PAGE analysis and a c-Fos-inducing assay in C57BL/6 male mice. n = 3. Error bars, S.E.M. f , g Two-step HPLC purification with DEAE ( f ) and C4 columns ( g ). The chromatogram and c-Fos-inducing activity of each fraction are shown. The cell lysate fraction was used for DEAE column chromatography, and the resultant active fraction was used for C4 column chromatography. The three fraction peaks were identified as heme, α-globin and β-globin by absorption spectrometry and mass spectrometry (Supplementary Fig. 1b-d ). h c-Fos-inducing activity of recombinant β-globin. n = 3. Error bars, S.E.M. Scale bar, 100 µm. Arrowheads highlight example c-Fos-positive cells in the VNO (top) and in the M/T cell layer of the AOB (bottom). i Representative EVG recording of the hemoglobin-dependent negative change in local field potential of the VNO. j Dose-dependent electrical responses of male VNO to hemoglobin. n = 10. Error bars, S.E.M. Since the vomeronasal stimulatory activity was present in the cell lysate, but not in plasma (Fig. 1e ), we performed two-step column purification of the cell lysate using ion-exchange DEAE (Fig. 1f ) and reverse-phase C4 (Fig. 1g ) columns and examined c-Fos-inducing activity in the AOB.
Absorption spectrum and mass spectrometry analyses suggested that the first peak, with absorption around 400 nm, was heme; the second and third peaks were identified as α- and β-globin, respectively (Supplementary Fig. 1 ). c-Fos induction was observed only in mice stimulated with the peak fraction of β-globin (shown in magenta) (Fig. 1f, g and Supplementary Fig. 1a ). Indeed, recombinant β-globin induced c-Fos expression in the VNO and in the posterior zone of the AOB (Fig. 1h ). A dose-dependent electrical response to purified hemoglobin was also observed in the electrovomeronasogram (EVG) recording (Fig. 1i, j ). These results demonstrated that the VSN-activating molecule in the blood was β-globin. Gly17 on hemoglobin is a crucial interaction site with the receptor We then investigated the subtype and species specificity of hemoglobin (Hb)-derived activation of VSNs. BALB/c strain mouse blood contains two types of hemoglobin, Hb minor and Hb major , that can be separated using DEAE columns; we found that only Hb major had vomeronasal stimulatory activity (Fig. 2a, b and Supplementary Fig. 2a ) 21 . C57BL/6 strain mice have only one, active form of hemoglobin (Fig. 2a, b ). The threshold amount of hemoglobin for activating VSNs was 100–300 µg (Supplementary Fig. 2a ), equivalent to 1–2 µl of mouse blood. We also tested hemoglobin from several other species as stimulants in mice to examine species specificity. Hemoglobin from rat, guinea pig and human activated VSNs in mice (Fig. 2b and Supplementary Fig. 2b ). Conversely, hemoglobin from horse showed lower stimulatory activity, and no activity was found in blood from frog or fish (Fig. 2b and Supplementary Fig. 2b ). Comparison of the amino acid sequences of hemoglobin among the examined species (Fig. 2c and Supplementary Fig. 2c ) revealed that two residues, Gly17 and His78, were changed only in the hemoglobin that showed less activation of VSNs (i.e.
Gly17→Ala (G17A) in Hb minor , Gly17→Asp (G17D) in horse, and His78→Asn (H78N) in Hb minor ) (Fig. 2c ). We then generated two mutant forms of Hb major (G17A and H78N) and checked their activities by c-Fos staining of sections from mice exposed to the mutant Hb major . The G17A mutant of Hb major lost the activity, while the H78N mutant did not (Fig. 2d, e ) (mean ± SEM, control; 8.0 ± 0.76, G17A; 17.4 ± 4.4, H78N; 65.0 ± 19.5, BALB/c Hb; 50.3 ± 9.4), suggesting that Gly17 on the surface of hemoglobin is involved in the interaction with the receptor(s) (Fig. 2f ). Fig. 2: Gly17 on hemoglobin acts as a crucial interaction site for ligand-receptor binding. a Types of hemoglobin in BALB/c and C57BL/6 strain blood cell lysates. The two types of hemoglobin (β-globin) in the BALB/c mouse strain were separated by HPLC with a DEAE column. b Number of c-Fos-positive cells in the M/T cell layer of AOB sections from male mice stimulated with hemoglobin from BALB/c and C57BL/6 male mouse blood and with blood from various vertebrates. n = 6 for C57BL/6, Rat, Horse, Human, Guinea pig, Frog, and Zebrafish, n = 8 for BALB/c-major, and n = 9 for BALB/c-minor. Error bars, S.E.M. c Alignment of β-globin amino acid sequences. The residue G17 differs in BALB/c minor, horse, frog, and zebrafish β-globin. d Representative immunohistochemical images of total c-Fos-positive cells in the M/T of AOB sections from G17A-mutant-, H78N-mutant-, and control buffer-stimulated male mice. n = 3 for H78N, n = 9 for control and G17A. Arrowheads represent example c-Fos-positive cells. Scale bar, 100 µm. e Number of c-Fos-inducing cells in the M/T cell layer of AOB sections from G17A-mutant-, H78N-mutant-, hemoglobin (Hb) from BALB/c-, and control buffer-stimulated male mice. n = 3 for H78N, n = 6 for BALB/c Hb, and n = 9 for control and G17A. Error bars, S.E.M. control vs. BALB/c Hb; p = 0.007, G17A vs. BALB/c Hb; p = 0.046, control vs. H78N; p = 0.058, and G17A vs.
H78N; p = 0.060 by the two-sided Steel-Dwass test. f Three-dimensional structure of human hemoglobin (1A3N, RCSB Protein Data Bank) 47 . Blue and green cartoons in the model represent α-globin and β-globin, respectively. The position of the 17th glycine is highlighted by a red dot.

Vmn2r88 is a specific vomeronasal receptor for hemoglobin

The location of the hemoglobin-activated neurons in the VNO suggests that hemoglobin is recognized by the V2R-type receptor family, which comprises 121 members in mice 17 . To identify hemoglobin receptor(s), we performed double in situ hybridization on the VNO of mice stimulated with hemoglobin, using cRNA probes that recognize various V2R clades and an immediate early gene, early growth response protein 1 ( Egr1 ) 10 , 14 , 18 . Egr1 -positive cells were labeled by cRNA probes that detected all members of the V2Rf clade (Fig. 3a, b ), but not by probes for other V2R clades (Fig. 3a, b , and Supplementary Fig. 3a ). Using more specific probes, we found that Egr1 -positive cells were identified by the V2Rf5 probe, which detected five V2Rs ( Vmn2r88 , 89 , 121 , 122 , and 123 ) of the V2Rf clade (Fig. 3c ). Finally, a higher hybridization temperature and a specific probe for Vmn2r88 , 89 , and 122 (which also detects Vmn2r121 and 123 ) allowed us to pinpoint Vmn2r88 (Fig. 3c ). We next generated Vmn2r88 -deficient mice using the CRISPR/Cas9 genome-editing system (Supplementary Fig. 3b ) 22 and checked the hemoglobin response (Fig. 3d ). Vmn2r88-positive cells were absent in Vmn2r88 -deficient mice (Fig. 3d, e ). By pS6 staining, exposure to the control vehicle activated no cells in mice either with (+/+) or without (−/−) Vmn2r88 , and upon hemoglobin exposure, no hemoglobin-dependent activated cells were observed in VNO sections from Vmn2r88 -deficient mice (Fig. 3d, e ).
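The species-comparison logic used to single out Gly17 (Fig. 2c) can be illustrated with a toy script that flags alignment positions conserved in all VSN-activating hemoglobins but altered in every weakly activating one; the five-residue sequences below are hypothetical stand-ins, not real β-globin sequences:

```python
# Toy covariation scan over an alignment: report positions where the active
# sequences share one residue and every inactive sequence carries a different
# one. Sequences are invented fragments for illustration only.

active = {                 # hemoglobins that activated VSNs (toy fragments)
    "C57BL/6": "VHLTG",
    "rat":     "VHLTG",
    "human":   "VHLTG",
}
inactive = {               # little or no VSN activation (toy fragments)
    "Hb_minor": "VHLTA",
    "horse":    "VHLTD",
}

def candidate_sites(active, inactive):
    length = len(next(iter(active.values())))
    sites = []
    for i in range(length):
        act = {s[i] for s in active.values()}
        if len(act) == 1 and all(s[i] not in act for s in inactive.values()):
            sites.append(i)
    return sites

print(candidate_sites(active, inactive))  # the Gly17-like position, index 4 here
```

On the real alignment this kind of scan would flag both Gly17 and His78; the mutagenesis experiments (G17A versus H78N) were then needed to show that only Gly17 matters functionally.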
These histological analyses of the peripheral sensory neurons suggest that hemoglobin is recognized by a single type of V2R, Vmn2r88. Fig. 3: Vmn2r88 is a specific vomeronasal receptor for hemoglobin. a Dual-color ISH staining of a VNO section from a hemoglobin-stimulated C57BL/6 mouse labeled with the Egr1 cRNA probe (green) and a V2Rf clade-specific cRNA probe (magenta). n = 3. Scale bar, 50 µm. b , c Bar graphs representing the average cell number in VNO sections from C57BL/6 mice stimulated with hemoglobin. In Fig. 3b , probes to distinguish each V2R clade were used. In Fig. 3c , probes to narrow down candidate genes in the V2Rf clade (named V2Rf1-V2Rf5) and to further distinguish each gene in the V2Rf5 clade (Vmn2r88, 89 and 122) were used. Black, yellow, and gray parts represent only V2R -positive ( b ) or V2Rf -positive ( c ), V2R and Egr1 double-positive, and only Egr1 -positive cells, respectively. Hybridization temperatures are shown below the graphs. d Vmn2r88 and pS6 immunostaining of VNO sections from Vmn2r88 +/+ or Vmn2r88 −/− male mice exposed to hemoglobin (Hb) or distilled water (control). Open arrowheads show pS6-positive cells; closed arrowheads show cells double-labeled for pS6 and Vmn2r88. In the mutant mice, the corresponding pS6 and Vmn2r88 expression completely disappeared. n = 3 for Vmn2r88 +/+ -control and Vmn2r88 -/- -Hb, and n = 5 for Vmn2r88 +/+ -Hb. Scale bar, 50 µm. e The number of pS6- (green), Vmn2r88- (magenta), and double-positive cells (yellow) per VNO section. In the Vmn2r88 +/+ -Hb group, 16 sections from each of 5 animals were quantified; 16 sections from each of 3 animals were counted in the other groups. Double-positive cells completely disappeared in the sections from Hb-stimulated Vmn2r88 -/- mice.

Hemoglobin enhances c-Fos-expressing cells in the dorsal region of the VMH and PAG only in lactating mothers

We investigated whether there was sexual dimorphism or state dependency in the detection of hemoglobin.
Both Vmn2r88-expressing neurons in the VNO and cells in the mitral/tufted cell layer of the AOB were activated by hemoglobin not only in males (Fig. 1h and 3e ) but also in virgin and lactating female mice (Supplementary Fig. 4 ). To examine which brain regions are activated by hemoglobin, we performed c-Fos in situ hybridization targeting mainly the medial amygdala (MeA), bed nucleus of the stria terminalis (BNST), posteromedial cortical amygdaloid nucleus (PMCo), medial preoptic area (MPA), and VMH, these being regions that receive input from the AOB 23 . An increase in the number of c-Fos -positive neurons was observed in the MeA, mainly the posteroventral region of the MeA (MeApv), in all types of mice tested (males, and virgin and lactating females) upon stimulation with hemoglobin (Fig. 4a, b ). Interestingly, we observed a significant increase in the number of c-Fos -positive neurons in the dorsal VMH (VMHd) (Fig. 4d, e ) and dorsal periaqueductal gray (PAGd) (Fig. 4g, h ) only in lactating mothers. No apparent activation was seen in the BNST, PMCo (Fig. 4c ), ventrolateral VMH (VMHvl) (Fig. 4d, e ), or MPA (Fig. 4f ) in any type of mouse. Hemoglobin-dependent c-Fos induction was not present in sections from Vmn2r88 -deficient lactating mice (Supplementary Fig. 5 ), confirming that the signals are hemoglobin-Vmn2r88-specific. Our histological results showed mother-specific c-Fos enhancement in the VMHd and PAGd, suggesting that hemoglobin conveys specific information to lactating females. Fig. 4: Hemoglobin enhances c-Fos-expressing cells in the dorsal region of the VMH and PAG only in lactating mothers. a Representative ISH images of the posterior region of the MeA from C57BL/6 males, virgin females, and lactating mothers pre-stimulated with control or hemoglobin (Hb) cotton swabs. The c-Fos cRNA probe (red) was used in conjunction with nuclear DAPI staining (blue).
Abbreviations: MeApd, MeA posterodorsal region; MeApv, MeA posteroventral region; D, dorsal; and V, ventral. n = 3 for male-control and Hb, virgin female-Hb, n = 4 for virgin female-control, n = 6 for mother-control, and n = 7 for mother-Hb. Scale bar, 100 µm. b The number of c-Fos -positive cells in each sub-region of the MeA stimulated with control buffer or Hb. Abbreviations: MeAa, MeA anterior region. For counting MeAa; n = 3 for male-control and Hb, virgin female-Hb, n = 4 for virgin female-control, n = 6 for mother-control, and n = 7 for mother-Hb. 6 (MeAa) and 8 (MeAp) sections from each animal were quantified. Error bars, S.E.M. (MeApd) p = 0.004, and (MeApv) virgin female; p = 0.057, and mother; p = 0.001 by the two-sided Wilcoxon rank-sum test. c The number of c-Fos -positive cells in the BNST and PMCo stimulated with control buffer or Hb. n = 3 for virgin female-Hb, n = 4 for male-control and Hb, virgin female-control, n = 6 for mother-control, and n = 7 for mother-Hb. 7 (BNST) and 5 (PMCo) sections from each animal were quantified. Error bars, S.E.M. d Representative ISH images of the VMH from males, virgin females, and lactating mothers, stimulated with control buffer or hemoglobin. c-Fos cRNA probe (red) was used in conjunction with nuclear DAPI staining (blue). Abbreviations: d, dorsal region; vl, ventrolateral region. n = 3 for virgin female-Hb, n = 4 for male-control and Hb, virgin female-control, n = 6 for mother-control, and n = 7 for mother-Hb. Scale bar, 100 µm. e Quantification of c-Fos -positive neurons in the VMH. The number of sections counted to determine the number of c-Fos -positive neurons in each brain area of each animal was 10. Error bars, S.E.M. n = 3 for virgin female-Hb, n = 4 for male-control and Hb, virgin female-control, n = 6 for mother-control, and n = 7 for mother-Hb. p = 0.034 by the two-sided Wilcoxon rank-sum test. f Number of c-Fos -positive cells in the MPA stimulated with control buffer or Hb. 
n = 3 for virgin female-Hb, n = 4 for male-control and Hb, virgin female-control, n = 6 for mother-control, and n = 7 for mother-Hb. 4 sections from each animal were quantified. Error bars, S.E.M. g Representative ISH images of the dorsal region of the PAG from lactating mothers stimulated with control buffer or hemoglobin. The c-Fos cRNA probe (red) was used in conjunction with nuclear DAPI staining (blue). n = 5 for mother-control and Hb. Scale bar, 100 µm. h Quantification of c-Fos -expressing neurons in the PAGd. n = 3 for virgin female-Hb, n = 4 for male-control and Hb, virgin female-control, n = 5 for mother-control and Hb. 4 sections from each animal were quantified. Error bars, S.E.M. p = 0.007 by the two-sided Wilcoxon rank-sum test.

Hemoglobin enhances digging behavior in lactating mothers

Mice in their natural environment encounter blood under specific conditions, such as injury during intermale aggression, damage from a predator attack, or pup delivery. Therefore, we first investigated the effects of hemoglobin on social behaviors such as aggression and sexual behavior. However, there was no obvious change in male-male aggression, maternal aggression, or sexual behavior in virgin females upon hemoglobin exposure (male-male aggression [total events of attack behavior (mean ± SEM)]; cast male intruder; 7.2 ± 2.3, cast male intruder with hemoglobin; 9.5 ± 8.2, n = 4, maternal aggression [total events of attack behavior (mean ± SEM)]; cast male intruder; 26.1 ± 3.2, cast male intruder with hemoglobin; 29.8 ± 5.4, n = 9, female sexual behavior [total events of lordosis behavior (mean ± SEM)]; control; 0.75 ± 0.75, hemoglobin; 0.67 ± 0.49, n = 4 in control, n = 6 in hemoglobin). Since the AOB-MeApv-VMHd-PAGd pathway seems to be activated by hemoglobin only in lactating mothers (Fig.
4 ), and this putative pathway is supported by previous studies on circuits carrying vomeronasal information and on hypothalamic neural circuits 12 , 23 , 24 , we next examined the behavior of lactating mothers upon exposure to hemoglobin. When a hemoglobin-soaked cotton swab was presented to mothers after their pups were removed, the mothers showed robust digging behavior (Fig. 5a, b ). The same digging behavior was also observed upon exposure to fresh blood (Fig. 5b ) (Mother: mean ± SEM, control; 55.1 ± 7.8, hemoglobin (Hb); 111.8 ± 18.1, ESP1; 34.0 ± 14.0, fresh blood; 118.4 ± 17.8). This behavior was not due to pup removal, because the same behavior was seen in trials with mothers together with their pups (Supplementary Fig. 6 and Supplementary Movie 1 ) (mean ± SEM, digging time, control; 5.5 ± 3.4, hemoglobin (Hb); 22.5 ± 5.4, number of digging bouts, control; 1.4 ± 0.87, hemoglobin (Hb); 9.0 ± 1.9), and consequently, we observed a significant delay in the pup retrieval assay (Supplementary Fig. 7 ) (mean ± SEM, digging time, control; 164.8 ± 33.5, hemoglobin (Hb); 289.3 ± 38.3). Hemoglobin-dependent digging enhancement was not observed in Vmn2r88 -deficient lactating females, suggesting that this behavior is hemoglobin-Vmn2r88-specific (Fig. 5c ) (mean ± SEM, +/+ (control); 48.2 ± 5.3, +/+ (hemoglobin); 123.1 ± 19.4, -/- (hemoglobin); 58.2 ± 12.2). In contrast, no increase in digging behavior was seen in virgin females or males upon stimulation with hemoglobin (Fig. 5d ), consistent with the histological data showing a difference in the activation pattern of brain regions. Fig. 5: Hemoglobin enhances digging behavior in lactating mothers. a Schematic illustration of the timeline of the cotton exposure assay. In trials with lactating mothers, their pups were removed one hour before the cotton exposure.
b-d Quantification of the digging time duration (sec) of wild type C57BL/6 lactating mothers (b), Vmn2r88-mutant lactating mothers (c), and males and virgin females (d) with pre-exposure to cotton balls soaked with hemoglobin (Hb) or fresh blood. Wild type C57BL/6 mothers; n = 15 for control, n = 6 for ESP1, n = 11 for Hb, and n = 8 for fresh blood, Vmn2r88-mutant lactating mothers; n = 5-9, virgin females; n = 9, males; n = 8. Error bars, S.E.M. (b) control vs. fresh blood; p = 0.028, control vs. Hb; p = 0.049, ESP1 vs. fresh blood; p = 0.048, and ESP1 vs. Hb; p = 0.044, and (c) +/+ (control) vs. +/+ (Hb); p = 0.017 and +/+ (Hb) vs. -/- (Hb); p = 0.041 by the two-sided Steel-Dwass test in panels (b) and (c). e, f Quantification of the total freezing duration (e) and digging time duration (f) of lactating mothers stimulated with the indicated concentration (from 100-fold to 50000-fold dilution in mineral oil) of a 2MT-soaked cotton swab. n = 3 for 1000-fold and 50000-fold, n = 5 for 100-fold, n = 8 for 5000-fold and 10000-fold. Error bars, S.E.M. (e) 100-fold vs. 10000-fold; p = 0.007, 100-fold vs. 5000-fold; p = 0.007, 1000-fold vs. 10000-fold; p = 0.015, and 1000-fold vs. 5000-fold; p = 0.015, and (f) p = 0.043 by the two-sided Steel-Dwass test.

Digging behavior may be a reflection of stress-, fear- or anxiety-related emotional changes 25, 26. Thus, we first examined whether 2-methyl-thiazoline (2MT), an odorant that causes innate fear responses such as freezing, can also elicit digging behavior in mothers 27. 2MT (100-fold dilution) induced freezing in lactating females as previously described, and as the concentration decreased, the freezing behavior disappeared (Fig. 5e). Conversely, significant increases in digging behavior were observed at the concentrations around which freezing behavior disappeared (5000-fold or 10,000-fold, Fig. 5f).
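Across these assays, behaviors are scored as total bout duration per animal and summarized as mean ± SEM per group. A minimal sketch of that bookkeeping in Python (the bout intervals and group values below are hypothetical, not data from this study):

```python
import math
import statistics

def total_duration(bouts):
    """Total time (s) spent in a behavior, from annotated (start, end) intervals."""
    return sum(end - start for start, end in bouts)

def mean_sem(values):
    """Group summary as (mean, standard error of the mean)."""
    m = statistics.mean(values)
    sem = statistics.stdev(values) / math.sqrt(len(values))
    return m, sem

# Hypothetical digging bouts (s) for one animal, and per-animal totals for one group
bouts = [(12.0, 18.5), (40.0, 44.0), (100.0, 107.5)]
print(total_duration(bouts))  # 18.0
group_totals = [55.1, 62.0, 48.3, 60.9]
print(mean_sem(group_totals))
```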
These data sets showed that hemoglobin induced digging behavior only in mothers and that 2MT at a lower concentration evoked the same behavioral output. The response towards 2MT exposure suggests that digging enhancement can be interpreted as risk assessment, a mild form of defensive behavior 28.

Hemoglobin enhances rearing, a type of exploratory behavior, in lactating mothers

Next, to examine whether hemoglobin produces anxiogenic effects, negative valence, or stress-inducing effects, we performed an open field assay on mothers with or without pre-exposure to hemoglobin (Fig. 6a). Unexpectedly, total distance, center time, and moving speed were the same across all conditions, including hemoglobin, 2MT, and control vehicle (Fig. 6b), suggesting that the experimental paradigm used here, with pre-exposure to the stimulant in the home cage, is not suitable for detecting anxiety-like behaviors. Instead, we observed a significant increase in the duration of rearing, a type of exploratory response observable in the open field maze, and this increase was completely abolished in Vmn2r88-deficient lactating females (Fig. 6b, c) (mean ± SEM, +/+ (control); 57.5 ± 5.1, +/+ (hemoglobin); 76.9 ± 5.7, -/- (hemoglobin); 48.7 ± 4.2) 29, 30. We also performed a simple two-chamber test; lactating female mice avoided the 2MT side, while they showed no avoidance of hemoglobin, suggesting that hemoglobin does not possess negative valence as 2MT does (Supplementary Fig. 8a-c). We also examined the possibility that the behavior reflects a stress response by measuring corticosterone upon stimulation with hemoglobin, but no apparent increase was observed (mean ± SEM, control; 30.8 ± 10.9, hemoglobin; 38.5 ± 5.3, n = 5), suggesting that hemoglobin is not a stressor for the mice.
These lines of evidence suggest that the digging and rearing behavior caused by hemoglobin is not an anxiety-related stress response but a type of exploratory and/or risk assessment behavior.

Fig. 6: Hemoglobin enhances rearing, a type of exploratory behavior, in lactating mothers. a Schematic illustration of the timeline of the open field (OF) assay with cotton pre-exposure. Cotton exposure was performed in their home cage (with their pups). b Quantification of total distance, total center time, moving speed, and rearing time duration of lactating mothers pre-stimulated with control buffer-, ESP1-, 2MT- (low 2MT: 10000-fold dilution, high 2MT: 10-fold dilution) or hemoglobin (Hb)-soaked cotton swabs in the open field assay for 10 min. n = 8 for control, n = 7 for Hb, and n = 5 for ESP1, low 2MT, and high 2MT. Error bars, S.E.M. p = 0.048 by the two-sided Wilcoxon rank-sum test with Dunnett correction. c Quantification of rearing time duration of Vmn2r88-mutant lactating mothers pre-stimulated with control buffer- or Hb-soaked cotton swabs. n = 6 for +/+ (Hb) and -/- (Hb), n = 10 for +/+ (control). Error bars, S.E.M. +/+ (control) vs. +/+ (Hb); p = 0.098 and +/+ (Hb) vs. -/- (Hb); p = 0.011 by the two-sided Steel-Dwass test.

SF1-positive cells in the VMHd are important for hemoglobin-dependent digging enhancement

Finally, we performed experiments to test the importance of specific cell populations for hemoglobin-dependent outputs. Our histological analysis suggested that activation patterns in the cells of the VMHd differed among lactating females, virgin females, and males (Fig. 4e). To examine whether hemoglobin activates neurons expressing steroidogenic factor 1 (SF1), a known marker of the VMHd, we performed dual-color in situ hybridization with sections from hemoglobin-stimulated lactating females (Supplementary Fig. 9a) 31.
There was significant overlap between SF1-expressing neurons and hemoglobin-induced c-Fos, and the number of double-positive cells was larger than that of SF1-negative, c-Fos-positive cells (the number of SF1-negative and Hb-dependent c-Fos+ cells; 37.8 ± 7.8, the number of SF1-positive and Hb-dependent c-Fos+ cells; 60.3 ± 14.6, both shown in black in Supplementary Fig. 9b). These results suggest that SF1 is a good molecular marker for manipulating the hemoglobin-responsive cell population in the VMHd. Next, we used a virus encoding DREADD-Gi in lactating SF1-Cre female mice, with wild type C57BL/6 female mice as a control, to silence neural activity of the SF1-positive cells in the VMHd (Fig. 7a, b and Supplementary Fig. 10a, b). Virus injection into neurons in the VMHd was performed before mating. After waiting for sufficient viral infection and expression, CNO or saline was injected into lactating female mice 60 min prior to hemoglobin exposure, and at the same time their pups were removed from the cage. As a result, neural silencing of SF1-expressing cells in the VMHd significantly suppressed digging behavior in animals stimulated with hemoglobin (Fig. 7c and Supplementary Fig. 10c). These results suggest that SF1-positive neurons in the VMHd are necessary for hemoglobin-dependent digging enhancement in lactating females.

Fig. 7: SF1-positive cells in the VMHd are important for hemoglobin-dependent digging enhancement in lactating female mice. a Schematic illustration of the animal setup and timeline for pharmacogenetic inhibition of SF1-expressing neurons in the VMHd. AAV-DIO-hM4Di-mCherry was injected into SF1-positive cells in the VMHd. Image adapted from Allen Mouse Brain Atlas 48. b A representative coronal section showing DREADD-Gi expression (mCherry-positive cells shown in red) in the VMHd. n = 8. Scale bar, 500 µm. c Quantification of the total digging duration (sec) of hemoglobin (Hb)-stimulated SF1-Cre lactating mothers with i.p.
injection of saline or CNO. n = 4 for CNO group. n = 4 for saline group. Error bars, S.E.M. p = 0.030 by the two-sided Wilcoxon signed-rank test. d Schematic image and illustration of the setup and timeline for optogenetic activation of SF1-expressing neurons in the VMHd of lactating females. AAV-DIO-ChR2 or AAV-DIO-GFP was injected into SF1-positive cells in the VMHd and optic fibers were implanted above the target region. Image adapted from Allen Mouse Brain Atlas 48. e A representative coronal section showing ChR2 expression (eYFP-positive cells shown in green) in the VMHd. n = 5. Scale bar, 500 µm. f, g Quantification of digging behavior, with or without weak light stimulation (f 0 mW, g 0.01 mW). Error bars, S.E.M. p = 0.043 by the two-sided Wilcoxon signed-rank test, n = 5 for GFP and ChR2 groups.

We then asked whether activation of the SF1-positive population could induce hemoglobin-dependent behavior. For this purpose, we conducted an optogenetic experiment to activate SF1-positive cells in the VMHd. Optical fibers were implanted above SF1-positive neurons in the VMHd after injection of a virus expressing channelrhodopsin (ChR2) in a Cre recombinase-dependent manner (AAV-DIO-ChR2), or AAV-DIO-GFP virus as a control (Fig. 7d, e) 32. Weak light stimulation (0.01 mW) of the SF1-positive population elicited a significant increase in the total duration of digging behavior (Fig. 7f, g and Supplementary Fig. 11a-e). In contrast, 1 mW light stimulation did not evoke digging or rearing enhancement but instead elicited freezing, jumping, and dashing (Supplementary Fig. 11f, g) (3 out of 5 mice showed freezing after 1 mW light stimulation). Therefore, the SF1-positive population in the VMHd is sufficient for exploratory and/or risk assessment behavior, depending on the strength of stimulation.
Discussion

In this study, beyond the main role of hemoglobin as an oxygen carrier in the blood, we discovered a previously unrecognized function of hemoglobin as a chemosensory signal. Recently, hemoglobin was also identified as a vomeronasal-stimulating ligand in the course of analysis of the molecular basis of infanticide 33. Hemoglobin itself, however, did not induce pup attack, and Vmn2r88 knockout had little effect on the behavior 33. This study revealed the interesting behaviors of digging and rearing in mothers upon nasal sensing of hemoglobin. This discriminative output can be interpreted as exploratory and/or risk assessment behavior 28, 34. This hemoglobin-induced behavior appears to be caused by some change in the internal state of lactating females, via activation of specific brain regions such as the VMHd and PAGd (Supplementary Fig. 11h). What is the meaning of hemoglobin-induced behavior in mice that is only apparent in motherhood? We noticed that, even without hemoglobin, the duration of digging behavior was longer in mothers than in males or virgin females (Fig. 5b-d), shorter in the presence of pups in the nest than in their absence (Supplementary Fig. 6b), and much longer in the pup retrieval assay (Supplementary Fig. 7c). This difference in the duration of background digging behavior may reflect lactating females' susceptibility to their external world and/or exploratory activity towards their surrounding environment, which was enhanced by hemoglobin. The hemoglobin effect may be related to the experience of blood exposure upon birthing. There is also a possibility that hemoglobin-dependent digging is a type of repetitive behavior in mice 35, 36. We showed that hemoglobin activated a limbic neural pathway, AOB-MeApv-VMHd-PAGd, in lactating mothers (Supplementary Fig. 11h).
Interestingly, these brain regions are the same as those responsible for ESP1-dependent sexual behavior and also those that respond to signals from predators, such as snake skin 12 and rat CRP1 15. The pathway at the level of nerve nuclei appears to be the same, but the responsible cell populations must be distinct from each other, as was shown for ESP1, snake skin, and ESP22 12, 14. Another interesting point regarding the circuit is that the behavioral output was observed only in lactating mothers, even though both hemoglobin receptor-expressing cells in the VNO and neurons in the mitral/tufted cell layer of the AOB were similarly activated in male and virgin female mice (Supplementary Fig. 4). These results suggest that the difference in the behavioral response toward hemoglobin between virgin and lactating females is not due to sensitivity to the ligand at the peripheral level. Furthermore, our results show that the representation of sensory information becomes distinct between virgin and lactating females in downstream brain areas (Fig. 4). Future studies investigating how the animal's reproductive state affects sensory information processing in these brain areas are critical to fully understand the neural mechanisms underlying state-dependent behavior modulation. Although it is known that the volatile odors of bleeding serve as an aversive or attractive cue to animals, including humans 37, 38, this study provides evidence that nonvolatile cues in the blood also carry information about the external environment. The detection of hemoglobin takes place in the vomeronasal organ at the bottom of the nasal cavity through a single G protein-coupled receptor, and the signal is processed via dedicated limbic regions.
Our study demonstrates that sensing the 'smell of blood' occurs in a state-dependent fashion, namely only in birth-experienced lactating females, which may be crucial for protecting pups and ensuring further parental behavior in urgent situations.

Methods

Animals

Animals were housed under a regular 12 h dark/light cycle, 23 ± 2 °C, 50% humidity, with food and water ad libitum. Wild type BALB/c and C57BL/6 mice were purchased from Japan CLEA (Japan), Japan SLC (Japan), or Charles River Japan (Japan) for all experiments. Vmn2r88-deficient mice were generated as described in "Generation of mutant mice by CRISPR-mediated genome editing". SF1-Cre (also known as Nr5a1-Cre, Jax#012462) mice were purchased from the Jackson Laboratory. Experiments were carried out in accordance with the animal protocols approved by the Animal Care and Use Committees at the University of Tokyo and RIKEN.

Sample preparation

Blood was collected from BALB/c or C57BL/6 male mice at the age of 10 weeks (Japan CLEA and Charles River Japan). Frog blood was collected from female Xenopus laevis. Fish blood was kindly provided by M. Masuda at RIKEN CBS. Blood cells from these species were dialyzed against distilled water in order to extract hemoglobin. Stored blood from horse and guinea pig (Sigma) was also dialyzed against distilled water. Purified rat and human hemoglobin (Sigma) were also used in our histological analysis. Small cotton balls were soaked in these samples and presented to mice 90 min before dissection.

Purification and characterization of the active compound

Blood from male mice was diluted with distilled water (10-fold dilution). The blood cell fraction was dialyzed with a cellulose tube (Sanko) and loaded onto TSK-GEL DEAE-5PW columns (7.5 φ × 75 mm, TOSOH). Compounds were eluted under a gradient of 0-500 mM NaCl in 20 mM Tris-HCl (pH 8.0) at 1 ml min−1 and detected by a Diode Array Detector L-2455 (Hitachi).
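The DEAE elution described above uses a linear salt gradient, so the NaCl concentration delivered at any time point follows from simple interpolation. A minimal sketch (the 60 min run length in the example is hypothetical, as the gradient duration is not stated above):

```python
def gradient_conc(t_min, total_min, c_start=0.0, c_end=500.0):
    """NaCl concentration (mM) delivered at time t during a linear gradient.

    The fraction of the gradient elapsed is clamped to [0, 1] so that
    times before the start or after the end return the endpoint values.
    """
    frac = min(max(t_min / total_min, 0.0), 1.0)
    return c_start + (c_end - c_start) * frac

print(gradient_conc(30, 60))  # 250.0 mM at the midpoint of a 60 min gradient
```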
Active fractions were incubated with an equivalent volume of 0.085% TFA (trifluoroacetic acid) solution and loaded onto a reverse-phase C4 HPLC column (PEGASIL-300 C4P, 4.5 φ × 250 mm, Sensyu). Samples were eluted under a gradient of 30-60% ACN (acetonitrile) in 0.085% TFA at 1 ml/min and detected by a Diode Array Detector L-2455 (Hitachi). The purity of fractionated samples was confirmed by SDS-PAGE with a 15% acrylamide gel, stained with Coomassie Brilliant Blue.

Production of recombinant β-globin

Total RNA was prepared from the liver of BALB/c mice using TRIzol reagent (Invitrogen, #15596026). β-globin cDNA was obtained by RT-PCR using the following primers: 5′-ATTCATATGGTGCACCTGACTGATGCTGAGAAGG; 5′-AATCTCGAGGTGGTACTTGTGAGCCAGGGCAGCAGC. The amplified DNA was subcloned into the expression vector pET-22b (Novagen, #69744). The expression construct was transformed into E. coli BL21 (DE3). Constructs with single amino acid mutations (G17A and H78N) were made using specific PCR primers. Peptide expression was induced with isopropyl thiogalactoside for 4 h. Bacterial pellets were resuspended in urea buffer (5 M urea, 20 mM Tris-HCl [pH 7.5]) and sonicated. After centrifugation, the bacterial pellets were resuspended in a specific buffer and purified using the His-Bind purification kit (Novagen, #69864) in accordance with the manufacturer's protocol. After purification, the recombinant protein was applied to a reverse-phase C4 column (PEGASIL-300 C4P, 4.5 φ × 250 mm, Sensyu) in order to desalt the elution buffer. The fraction with significant absorbance was collected and freeze-dried using a freeze dryer (Tokyo Rikakikai, EYELA FDU-2200). Recombinant β-globin was stored at −80 °C before use and applied to a cotton ball for histological analysis, as shown in Fig. 1h.

Purification of hemoglobin

For hemoglobin purification, the blood of BALB/c or C57BL/6 male mice at the age of 10-12 weeks (about 500 μl in each purification) was collected and centrifuged for 10 min.
A similar volume of PBS (about 250 μl) was added to the blood cell fraction. After repeating these steps twice, the same volume of distilled water was added to the blood cells. To remove residue, the blood cell solution was centrifuged again for 10 min. The solution was then diluted with distilled water (10-fold dilution). The concentration of hemoglobin was estimated to be 15 μg μl−1. The hemoglobin solution was stored at −80 °C before being used for electrophysiology (Fig. 1i, j), identification of its receptor (Fig. 3a-e and Supplementary Fig. 3a), histological analysis of higher brain regions (Fig. 4 and Supplementary Fig. 5), and behavior assays (Figs. 5-7 and Supplementary Figs. 6-8).

Electrophysiology

Electrovomeronasogram (EVG) recording (Fig. 1i, j) was performed as described previously with minor modifications 13, 39. For the EVG recording, 10-week-old BALB/c female mice were anaesthetized and sacrificed by quick decapitation. The VNO with exposed vomeronasal neuroepithelium was placed in a recording chamber filled with Ringer's solution (140 mM NaCl, 5.6 mM KCl, 5 mM HEPES, 2 mM pyruvic acid sodium salt, 1.25 mM KH2PO4, 2 mM CaCl2, 2 mM MgCl2, 9.4 mM D-glucose (pH 7.4)). The field potential was recorded as previously described 9. Spikes were analyzed using Igor Pro functions (Wave Metrics) 40. The purified β-globin was also dialyzed against Ringer's solution before use in electrophysiology. External fluid from the dialysis was used as a control.

Histochemistry

To prepare sections for in situ hybridization (ISH) and immunohistochemistry, 10-12-week-old BALB/c female (Figs. 1-3 and Supplementary Figs. 1-3) and C57BL/6 male mice (Fig. 4 and Supplementary Figs. 4, 5) were anesthetized with a lethal amount of sodium pentobarbital, sacrificed, and perfused with phosphate-buffered saline (PBS) followed by 4% paraformaldehyde (PFA) in PBS. Snouts and brain tissues were postfixed with 4% PFA in PBS overnight.
To prepare VNO sections (Figs. 1a-b, 1h, 3a-e, and Supplementary Figs. 3a, 4a), snouts were decalcified in 0.5 M EDTA (pH 8.0) for 48 h at 4 °C. The tissues were then cryoprotected with 30% sucrose solution in PBS at 4 °C for 24-48 h. After collecting 14 μm coronal sections of the VNO, 30 µm sagittal sections of the AOB, and 40 µm coronal sections of the brain using a Cryostat (model #CM1860, Leica), the sections were placed on MAS-coated glass slides (Matsunami). Cryosections of the VNO and AOB were incubated with anti-c-Fos antibody (Oncogene (Ab-2), 1:1000, lot# 21584-1; Abcam (Ab-5), 1:100, lot# ab7963-1; Calbiochem (Ab-5), 1:10,000, lot# 34095), followed by biotinylated goat anti-rabbit IgG secondary antibody (1:200, Vector Laboratories), ABC amplification (1:100, Vector Laboratories), and staining with 3,3′-diaminobenzidine (Sigma). The sections shown in Fig. 1b were incubated with anti-pS6 ribosomal protein (S235/236) antibody (1:1000, Cell Signaling, #4858) and anti-Gαo antibody (1:500, MBL, #551), and Gαo signals were visualized with Alexa488-conjugated goat anti-rabbit secondary antibody (1:500, Invitrogen, A11034). The VNO sections from Vmn2r88-knockout mice shown in Fig. 3d-e were incubated with anti-pS6 antibody (1:200, Cell Signaling, #4858) and anti-Vmn2r88 antibody (1:300, originally made in this study) and visualized with Alexa Fluor 488-conjugated goat anti-guinea pig IgG secondary antibody (1:500, Invitrogen, A11073) and Cyanine3-conjugated goat anti-rabbit IgG secondary antibody (1:500, Invitrogen, A10522). Guinea pig antiserum was raised against a synthetic peptide specific to Vmn2r88: NH2-C+IRKYKDKFRY-COOH. Antibodies were then affinity-purified using affinity columns (SulfoLink) conjugated with the synthetic peptide. Double ISH in VNO sections for receptor screening was performed as follows 10, 11, 12, 14.
To synthesize cDNA for the V2R and Egr1 probes, total RNA was prepared from VNOs collected from 9-week-old BALB/c female mice using TRIzol reagent (Invitrogen, #15596026). After RQ1 RNase-Free DNase (Promega, M6101) treatment, total cDNA was synthesized using Superscript III (Invitrogen, #18080093). cDNA for the V2R and Egr1 probes was obtained by RT-PCR. The V2R and Egr1 probes are each approximately 800 bp in length, and the Egr1 probe set consists of 3 probes covering nearly the full-length mRNA. ISH probes were prepared by in vitro transcription with DIG RNA Labeling Mix (Roche Applied Science, #11277073910) or Fluorescein RNA Labeling Mix (Roche Applied Science, #11685619910) and T7 polymerase (Promega, #P2075), T3 polymerase (Roche Applied Science, #11031163001), or SP6 polymerase (Roche Applied Science, #10810274001). V2R probes were labeled with DIG and Egr1 probes with Flu unless otherwise noted. Sections of VNO underwent ISH at 60 °C or 68 °C overnight. 300 ng ml−1 of Egr1 probes and 800 ng ml−1 of V2R probes were suspended in hybridization solution unless otherwise noted. After a series of post-hybridization washes and blocking, Flu-positive cells were visualized with anti-FITC antibody (PerkinElmer, #NEF710001EA, 1:250 in blocking buffer) followed by TSA biotin amplification reagent (PerkinElmer, #NEF749A001KT, 1:50 in 1× plus amplification diluent) and streptavidin-Alexa488 (Invitrogen, #S11223, 1:250 in blocking buffer). DIG-positive cells were visualized with anti-DIG antibody (Roche Applied Science, #11207733910, 1:250 in blocking buffer) and TSA Cy3 amplification reagent (PerkinElmer, #NEL744001KT, 1:100 in 1× plus amplification diluent). Sections were counterstained with or without 4′,6-diamidino-2-phenylindole dihydrochloride (DAPI, Sigma-Aldrich, #D8417) and mounted with a cover glass using Permaflour (ThermoFisher, #TA-006-FM) or Fluoromount (Diagnostic BioSystems, #K024).
For Vmn2r88 ISH combined with pS6 immunostaining of VNO sections, after the final ISH wash the sections were incubated with pS6 antibody (Cell Signaling Technology, cat# 4856S; 1:200 in blocking buffer) at 4 °C overnight, and signals were visualized with Alexa 488-conjugated goat anti-rabbit secondary antibody (Invitrogen, A11034). In situ hybridization for c-Fos brain mapping was performed as follows 14. A DIG-labeled probe for c-Fos was characterized previously 12, 14. The c-Fos probe was prepared by in vitro transcription with a DIG-RNA labeling mix (#11277073910) and T3 RNA polymerase (#11031163001) in accordance with the manufacturer's instructions (Roche Applied Science). Target brain regions underwent ISH at 60 °C overnight. After a series of post-hybridization washes and blocking, DIG-positive cells were visualized with anti-DIG antibody (Roche Applied Science, #11207733910, 1:250 in blocking buffer) and TSA-plus Cyanine 3 (PerkinElmer, #NEL744001KT, 1:100 in 1× plus amplification diluent). Sections were counterstained with DAPI (Sigma-Aldrich, #D8417) to visualize nuclei and then mounted with a cover glass using Fluoromount (Diagnostic BioSystems, #K024). This method was also applied for post-hoc c-Fos ISH staining after weak light stimulation of SF1-positive neurons in lactating female mice (Supplementary Fig. 11a-b). For double ISH staining with c-Fos and SF1 probes (Supplementary Fig. 9a-b), the SF1 probe was prepared as described previously 12, and the procedure was essentially the same as for double ISH of VNO sections. Imaging of the sections was performed with an Olympus BX53 microscope (10× or 20× objective) equipped with an ORCA-R2 cooled CCD camera (Hamamatsu Photonics). Images were processed using Adobe Photoshop CS2 or CS6 (Adobe Systems) 41. For cell counting, the number of sections indicated in the legends of Fig. 4 and Supplementary Fig. 5 was used for each brain region to cover the entire population from anterior to posterior.
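The counting scheme just described (several sections per animal, summed per animal, then summarized per group) can be sketched as follows; all counts are hypothetical and serve only to illustrate the aggregation:

```python
import statistics

def per_animal_count(section_counts):
    """Sum c-Fos+ cell counts over the sections taken from one animal.

    section_counts covers the region from anterior to posterior.
    """
    return sum(section_counts)

def group_mean(animals):
    """Mean per-animal count for one stimulus group."""
    return statistics.mean(per_animal_count(s) for s in animals)

# Hypothetical group: 3 animals x 4 sections each
mother_hb = [[12, 30, 25, 8], [10, 22, 28, 15], [9, 27, 31, 13]]
print(group_mean(mother_hb))
```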
Generation of mutant mice by CRISPR-mediated genome editing

To generate a null mutant of Vmn2r88, we designed two guide RNAs (gRNAs) able to introduce double-strand DNA breaks flanking exons 3-6 of the V2R gene, which encode the essential transmembrane domains. CRISPR-mediated genome editing was performed as described previously with minor modifications 14, 15. Cas9 mRNA was prepared as follows 42. pMLM3613 (Addgene, #42251) was digested with PmeI and purified by ethanol precipitation. In vitro transcription was performed using the mMESSAGE mMACHINE T7 ULTRA Transcription Kit (ThermoFisher Scientific, #AM1345) in accordance with the manufacturer's instructions. The amount and purity of the synthesized mRNA were tested by electrophoresis on a 1% agarose gel. To design gRNAs targeting Vmn2r88, we first searched for 20 bp target sequences upstream of the protospacer adjacent motif (PAM) using CRISPRdirect. We then selected target sequences with >50% GC content that were completely unique in the mouse genome (confirmed by GGGenome). The selected sequence was then introduced into the BsaI-digested pDR274 construct (Addgene, #42250) using the following oligo-DNAs: Vmn2r88 upstream, 5′-TAGGCGTAGATGTACACTGCAAAC; 5′-AAACGTTTGCAGTGTACATCTACG. Vmn2r88 downstream, 5′-TAGGAGAACCAGGAATCTCAACTG; 5′-AAACCAGTTGAGATTCCTGGTTCT. After validating the sequence, pDR274 with the target DNA sequence was digested with DraI, and in vitro transcription of gRNA was performed using the MEGAshortscript T7 Transcription Kit (ThermoFisher Scientific, #AM1354) in accordance with the manufacturer's instructions. The synthesized gRNA was purified using the MEGAclear Transcription Clean-Up Kit (ThermoFisher Scientific, #AM1908). The amount and purity of the synthesized gRNA were tested by electrophoresis on a 1% agarose gel.
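The gRNA design filter described above (20 bp protospacer, >50% GC content) is straightforward to express in code; the example sequences below are illustrative, not the sequences used in this study:

```python
def gc_content(seq):
    """Fraction of G/C bases in a DNA sequence."""
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

def passes_gc_filter(protospacer, threshold=0.5):
    """Apply the >50% GC design criterion to a 20 bp candidate protospacer."""
    return len(protospacer) == 20 and gc_content(protospacer) > threshold

print(gc_content("ATGCGC"))                   # ~0.667
print(passes_gc_filter("G" * 10 + "A" * 10))  # False: GC is exactly 50%, not >50%
```

Uniqueness in the genome (the GGGenome check) still has to be verified separately; this helper covers only the composition criterion.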
A mixture of the two gRNAs at 20 ng μl−1 each and Cas9 mRNA at 50 ng μl−1 was injected into C57BL/6J fertilized eggs to generate the knockout mice. The genotypes of the mutant mice were determined by two kinds of PCR using the following oligo-DNAs. 5′-GCATTCTTCAATGCCACTGGTAAG; 5′-AATCTGCGGTGTGCAAAAGT; 5′-GCAGCCACTCCATGAAAGCA (mutant allele = 450 bp). 5′-CGTAGATGTACACTGCAAACAGG; 5′-CTTCTGCATGCACTCATGTACC (wild type allele = 3000 bp).

Behavior assays

Digging assay

The digging behavior assay (Fig. 5) was performed in the test mice's home cage. The home cages were moved to a recording space, and food and water were temporarily removed 1 h before recording. In the assays shown in Fig. 5b-c, e-f, the pups were also removed temporarily. The mice were exposed to a cotton ball with or without hemoglobin (300 µg), ESP1 (20 µg), or diluted 2MT (from 10-fold to 50,000-fold in mineral oil). The number of digging bouts, the duration of total digging time, and other parameters were calculated from a 20 min recording.

Pup retrieval assay

For the pup retrieval assay (Supplementary Fig. 7), C57BL/6J mothers with their pups (postnatal day 4-6) were used. All of the pups were removed from their home cages 30 min before behavior recording, and a stimulant (30 µL of fresh blood, hemoglobin (300 µg), or distilled water) was painted on their backs. Then, three pups were placed in the corners of the cage, except the corner nearest to the nest. Their behaviors were recorded for 30 min to observe not only pup retrieval but also other behaviors after retrieval.

Open field assay

The open field assay (Fig. 6a-c) was performed for 10 min in a 40 cm × 40 cm × 40 cm square open arena under normal lighting. C57BL/6J lactating mothers with pups were presented with a cotton swab with or without hemoglobin (300 µg), ESP1 (20 µg), or 2MT (Sigma-Aldrich, 1:10 or 1:10,000 dilution in mineral oil) for 5 min and moved into the test arena just before testing.
The movement of the mice was videotaped and scored for the following parameters: rearing time, total distance, center time, and moving speed. All parameters except rearing time were analyzed with ImageJ software (version 2.1.0) 43.

Two-chamber test

The two-chamber test (Supplementary Fig. 8a-c) was conducted between 2 and 10 h after the start of the light period. Initially, animals were transferred into a 25 cm × 50 cm × 25 cm behavior chamber with two rooms under dim light conditions. Animals (lactating female mice) were kept in the cage for 5 min for habituation. After habituation, a piece of filter paper (5 cm × 5 cm) soaked with either 100 μL of water, water containing 300 μg of hemoglobin, or 2MT diluted in mineral oil (1:10,000) was placed on one side of the chamber. Animal behavior was recorded for 10 min with a USB camera (Logicool). Each animal went through 3 trials with different stimuli within 4 days. The trajectory of the animal, locomotion, and the total time spent in each room were quantified using a custom-written Python program.

Pharmacogenetics

For chemogenetic inhibition of SF1+ neurons in the VMHd (Fig. 7a-c), we prepared SF1-Cre female mice and conducted stereotactic surgery as described in the Optogenetics section. For the control group (Supplementary Fig. 10), we purchased parous wild type C57BL/6 female mice from Japan CLEA and performed stereotactic surgery after one additional weaning experience. We injected AAV8-hSyn-DIO-hM4D(Gi)-mCherry (Addgene, lot# v62036, ~2.4 × 10^13 gp mL−1) (300-350 nL, 40 nL min−1) into the bilateral VMHd. Two days after surgery, female mice were paired with stud C57BL/6J male mice for mating. Once females became pregnant, we removed the paired males. After the third parturition, lactating mothers raising pups (postnatal day 3-9) were subjected to the behavioral assay. One day before the behavioral test, we changed the bedding and injected 0.2 mL of saline intraperitoneally to acclimate the mice to intraperitoneal injection.
We performed the behavioral assay as described in the Digging assay section, with some modifications. Sixty minutes before the beginning of behavioral testing, 0.2 mL of 0.1 mg mL−1 clozapine-N-oxide dissolved in saline (CNO, Sigma-Aldrich, cat#C0382), or saline alone, was administered intraperitoneally to the subject female mice, and pups, food, and water were removed. The mice were exposed to a cotton ball with hemoglobin (300 µg). Each animal underwent a single behavioral assay because very few mice bite a cotton ball in a second assay. The durations of total digging time and rearing were quantified from a 20 min recording. Each animal was checked post-hoc to confirm that the viral injection was correctly targeted.

Optogenetics

For optogenetic activation of SF1+ neurons in the VMHd (Fig. 7d-f and Supplementary Figs. 9a-b, 11a-g), we prepared SF1-Cre female mice that had experienced two parturitions. These SF1-Cre female mice were anesthetized with 65 mg kg−1 ketamine (Daiichi-Sankyo) and 13 mg kg−1 xylazine (Sigma-Aldrich) via intraperitoneal injection and head-fixed in stereotactic equipment (Narishige). Then, we injected AAV5-EF1a-DIO-hChR2(H134R)-eYFP (UNC vector core, lot# AC4313U, ~5.5 × 10^12 gp mL−1) or, as a control virus, AAV8-CAG-DIO-GFP (UNC vector core, lot# AV4910b, ~6.2 × 10^12 gp mL−1) (300-350 nL, 40 nL min−1) into the unilateral VMHd (Posterior, 1.0 mm; Lateral, 0.25 mm; Ventral, 5.25 mm, from Bregma), using a UMP3 pump regulated by a Micro-4 device (World Precision Instruments). Soon after the virus injection, an optical fiber (200 μm core, 0.39 NA) (Thorlabs, cat#FT200UMT) was implanted 200 μm above the VMHd unilaterally. Two days after surgery, female mice were paired with stud C57BL/6J male mice for mating. Once females became pregnant, we removed the paired males. After the third parturition, lactating mothers raising pups (postnatal day 3-9) were subjected to the behavioral assay.
Behavioral experiments were performed during the light period. To observe the effect of light stimulation, animals were placed into a 30 cm × 30 cm × 30 cm chamber with 5 cm thick bedding and connected to a 473 nm laser (Changchun New Industries) through a rotary joint patch cord (Thorlabs, RJPFL2) and cannula. We recorded behaviors during two successive light conditions: 5 min of a light-off phase followed by 5 min of a light-on phase (0 mW, 0.01 mW, 0.03 mW, or 1 mW). These behavioral tests were conducted on two separate days, with an interval of more than 2 h between tests. Light delivery was controlled using an Arduino microcontroller (ARDUINO ZERO #ABX00003) and simple custom-made code 44 . Animal behavior was recorded by a video camera (Logicool) from horizontal and vertical views, and digging behavior was analyzed. Each animal was checked post hoc to determine whether the viral injection and fiber positioning were correctly administered.

Quantification and statistical analysis

Data are presented as mean ± S.E.M. unless otherwise noted. The statistical details of each experiment, including the statistical tests used, the exact value of n, and what n represents, are detailed in each figure legend. The two-sided Wilcoxon rank-sum test was used for tests shown in Fig. 4b-c, e-f, h , and the two-sided Wilcoxon rank-sum test with Dunnett correction in Fig. 6b . The two-sided Steel-Dwass test was used for tests shown in Figs. 2 e, 5b-c, e-f, 6c, and Supplementary Fig. 7c . The unpaired two-sided Student’s t -test was used for tests in Supplementary Figs. 6b , 10b, and 11b . One-way ANOVA with repeated measures and Bonferroni’s correction was used for the test shown in Supplementary Fig. 8c . The two-sided Wilcoxon signed-rank test was used for the tests shown in Fig. 7c, f and Supplementary Fig. 11d . Significance was noted as ** p < 0.01 and * p < 0.05. R version 3.5.0 and Python 3 were used for all non-parametric statistical analyses in this study 45 , 46 .
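As a purely illustrative sketch of the primary nonparametric comparison named above (the two-sided Wilcoxon rank-sum test), the following Python snippet runs such a test with SciPy on invented digging-time data; the authors used their own R and Python scripts, and these numbers are not from the paper.

```python
# Illustrative only: invented digging-time data for a saline vs CNO
# comparison, analysed with a two-sided Wilcoxon rank-sum test
# (scipy.stats.ranksums), the nonparametric test named in the text.
from scipy.stats import ranksums

saline = [12.1, 8.4, 15.0, 9.7, 11.2, 13.5]  # hypothetical digging times (s)
cno = [4.2, 6.1, 3.8, 5.5, 7.0, 4.9]         # hypothetical digging times (s)

stat, p = ranksums(saline, cno)  # two-sided by default
significant = p < 0.05           # significance threshold used in the paper
```

Note that `ranksums` relies on a normal approximation; the multiple-comparison procedures mentioned (Steel-Dwass, Dunnett correction) have no direct SciPy equivalent and are typically run in R.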
Reporting summary

Further information on research design is available in the Nature Research Reporting Summary linked to this article.

Data availability

All relevant data are available from the authors upon reasonable request. The structure of human hemoglobin shown in Fig. 2f is from the RCSB Protein Data Bank. Source data are provided with this paper.

Code availability

All original code is available from the authors upon reasonable request.
Biochemists in Japan were surprised to discover that hemoglobin, a molecule in the blood, works not only as an oxygen carrier; when blood is spilled as a result of aggression, accident or predator attack, the molecule also acts as a chemosensory signal for lactating mother mice, prompting digging or rearing behavior to check the surrounding environment and keep their offspring safe. A paper describing the researchers' findings has just been published in the journal Nature Communications. In 2005, researchers in the lab led by Kazushige Touhara at the University of Tokyo had already discovered in the tears of male mice a pheromone—a type of chemical substance secreted by animals that influences the behavior of others in the same species—named ESP1 (exocrine gland-secreting peptide 1), which is composed of protein. When investigating how sensory neurons in the vomeronasal organ (the organ responsible for detecting chemosensory signals such as pheromones) were stimulated by ESP1, they found that an unidentified molecule from the salivary glands was also mysteriously activating these neurons. In subsequent research seeking the source of this neuron activation, the researchers found that contamination of the gland by blood was responsible. However, the specific molecule driving the activity and the neural pathways involved remained unknown. The researchers first exposed male mice to a small quantity of blood and observed blood-dependent activation of peripheral sensory neurons located in the vomeronasal organ in the nose. They further found that this vomeronasal stimulatory activity was prompted by cell lysate (the contents of broken-up blood cells) but not by plasma (the part of the blood that carries water, salts and enzymes), so they purified the cell lysate and employed protein sequence analysis and absorption spectrum analysis to find out which molecular compounds in the lysate were inducing the neuronal activity.
The results demonstrated that hemoglobin was responsible. "Everyone, even schoolchildren, knows that hemoglobin is the oxygen-carrying molecule in the blood, so this finding of its role as a chemosensory signal in the nose came as a real surprise to us," said Touhara, the corresponding author of the paper and professor in the University of Tokyo's Department of Applied Biological Chemistry. Mice in their natural environment encounter blood under specific conditions, such as injury due to aggression among males, damage from predator attack, and pup delivery. Nevertheless, when the researchers investigated the effects of hemoglobin on social behaviors such as aggression and sexual behavior, they observed no obvious change in male-male aggression, maternal aggression, or sexual behavior upon hemoglobin exposure. When they searched for brain regions responsive to the hemoglobin signal, however, they found that only in lactating mothers was there specific neural activation in a region of the hypothalamus that receives information from vomeronasal sensory neurons. The researchers therefore began looking at the behaviors of lactating female mice upon exposure to hemoglobin. "The mothers immediately showed digging or rearing behavior once they received hemoglobin, after playing with a cotton swab soaked in it," Touhara continued. "And the same behavior was observed with exposure to fresh blood." The researchers also found that hemoglobin in the blood stimulates vomeronasal sensory neurons through one specific receptor, Vmn2r88. In mice lacking Vmn2r88, the increase in digging or rearing behavior was not observed, suggesting that this behavior is controlled by the specific ligand-receptor pair of hemoglobin binding to Vmn2r88.
Next, by using optogenetics and genetic engineering technologies to manipulate neural activities in the hypothalamus with light, the researchers were able to replicate the digging behavior, and from this they proposed a neural circuit that regulates these hemoglobin-mediated behaviors. Such digging and rearing represent a kind of exploratory and/or risk-assessment behavior, suggesting that the response to hemoglobin is important for mothers to protect their pups by checking their external environment. This leads to the next question of why only lactating mothers show this behavioral response. In this study, Touhara and his team found that one of the best-known molecules in the blood, hemoglobin, works as a chemosensory molecule, and that its receptor-circuit mechanism is responsible for a type of exploratory and/or risk-assessment behavior in lactating female mice. It shows that animals detect signals from their living environment and display appropriate responses according to their life stage.
10.1038/s41467-022-28118-w
Biology
Scientists uncover the 'romantic journey' of plant reproduction
Qifei Gao et al, A receptor–channel trio conducts Ca2+ signalling for pollen tube reception, Nature (2022). DOI: 10.1038/s41586-022-04923-7 Journal information: Nature
https://dx.doi.org/10.1038/s41586-022-04923-7
https://phys.org/news/2022-07-scientists-uncover-romantic-journey-reproduction.html
Abstract

Precise signalling between pollen tubes and synergid cells in the ovule initiates fertilization in flowering plants 1 . Contact of the pollen tube with the ovule triggers calcium spiking in the synergids 2 , 3 that induces pollen tube rupture and sperm release. This process, termed pollen tube reception, entails the action of three synergid-expressed proteins in Arabidopsis : FERONIA (FER), a receptor-like kinase; LORELEI (LRE), a glycosylphosphatidylinositol-anchored protein; and NORTIA (NTA), a transmembrane protein of unknown function 4 , 5 , 6 . Genetic analyses have placed these three proteins in the same pathway; however, it remains unknown how they work together to enable synergid–pollen tube communication. Here we identify two pollen-tube-derived small peptides 7 that belong to the rapid alkalinization factor (RALF) family 8 as ligands for the FER–LRE co-receptor, which in turn recruits NTA to the plasma membrane. NTA functions as a calmodulin-gated calcium channel required for calcium spiking in the synergid. We also reconstitute the biochemical pathway in which FER–LRE perceives pollen-tube-derived peptides to activate the NTA calcium channel and initiate calcium spiking, a second messenger for pollen tube reception. The FER–LRE–NTA trio therefore forms a previously unanticipated receptor–channel complex in the female cell to recognize male signals and trigger the fertilization process.

Main

As a ubiquitous second messenger, Ca 2+ regulates many aspects of physiology and development in both animals and plants 9 , 10 , 11 , including reproduction. In animals, Ca 2+ signals drive the motility of sperm 12 and forecast successful fertilization 13 . In flowering plants, sperm are immobile and require a special delivery structure called the pollen tube, which navigates the female tissue and finds the ovule before releasing sperm 1 .
From pollen germination to pollen tube guidance and pollen tube reception, each step requires intricate Ca 2+ signalling 14 . However, the molecular mechanism underlying Ca 2+ signalling in plant reproduction remains largely unknown. During pollen tube reception, interactions between the pollen tube and synergids in the ovule activate Ca 2+ oscillations in both partners, which leads to the rupture of the pollen tube, synergid cell death and the initiation of fertilization 2 , 3 , 15 , 16 . On the female side, FER, LRE and NTA are three components of the same pathway required for synergid Ca 2+ spiking in response to pollen tube arrival, but little is known regarding how they work together to mediate Ca 2+ entry. We show here that pollen tube RALFs bind to FER–LRE co-receptors, which recruit NTA, a calcium channel, to form a receptor–channel assembly. This tri-molecular complex is regulated by Ca 2+ /calmodulin (CaM)-dependent feedback inhibition to drive Ca 2+ oscillations in the synergid.

RALFs trigger synergid Ca 2+ oscillation

FER and LRE are both required for pollen tube reception and may function as a co-receptor for unknown signals derived from pollen tubes 4 , 5 , 17 , 18 , 19 , 20 . We hypothesized that such signals may be mediated by RALF family peptides because some RALFs bind to either FER alone 21 , 22 or to both FER and LRE-like proteins (LLGs) in other processes 23 , 24 , 25 . Among the 37 Arabidopsis RALFs, at least 8 (RALF4, RALF8, RALF9, RALF15, RALF19, RALF25, RALF26 and RALF30) are expressed in pollen tubes 7 . We expressed and purified the eight pollen-tube-derived RALF peptides and examined their interaction with FER 23 . We also included an ovule-derived RALF peptide (RALF34) that is closely related to pollen tube RALFs such as RALF4 and RALF19. The extracellular domain of FER (FER ex ) pulled down RALF4, RALF19 and RALF34 (Fig. 1a and Supplementary Fig. 1a ).
Consistent with this result, RALF4 and RALF19 acted antagonistically with RALF34 for pollen tube integrity through interactions with the same receptors of the FER family 7 . Fig. 1: Pollen-derived RALFs bind to FER–LRE and trigger synergid [Ca 2+ ] cyt changes in a FER–LRE–NTA-dependent manner. a , Pull-down assays showing the interaction of GST-tagged RALFs and MBP-tagged ectodomains of FER (FER ex ). Amylose resin was used to pull down MBP, followed by western blotting with antibodies against GST and MBP. n = 3 independent repeats. b , Interaction of GST-tagged LRE and MBP-tagged ectodomains of FER with or without RALFs (RALF x ; each 100 nM) as indicated. Amylose resin pull-down and western blotting was performed as in a . n = 3 independent repeats. c , Co-IP of Myc-tagged LRE and Flag-tagged FERK565R expressed in N. benthamiana leaves with or without the addition of RALFs (each 5 μM) as indicated. Anti-Flag M2 affinity beads were used in co-IP, and western blots were probed with antibodies against Myc and Flag. n = 3 independent repeats. d – g , Representative Ca 2+ spiking patterns in synergids in response to pollen tube (PT) arrival or to 0.5 µM or 2 µM RALF4 (R4) for WT ( d ), fer-4 ( e ), lre-5 ( f ) and nta-3 ( g ). W, water. h , Ca 2+ oscillation periodicity of WT synergids in response to PT arrival or 0.5 µM RALF4 and RALF19 (R19). n = 8 ovules. i , The peak values of Ca 2+ spiking as in d – g . n values are shown in Extended Data Fig. 3 . Ovules were isolated from Col-0, fer-4 , lre-5 and nta-3 flowers harbouring the synergid-specific GCaMP6, and fluorescence was recorded using an inverted microscope. Red triangles indicate time points at which the PT arrived or RALF4 was applied. Error bars depict the mean ± s.e.m. All P values were determined by two-tailed Student’s t -test. NS, not significant. As ligands for FER–LLG co-receptors, RALFs enhance interactions between FER and LLGs 23 .
Consistent with this model, RALF4, RALF19 and RALF34 enhanced the interaction between GST-tagged LRE and MBP-tagged FER ex , whereas several other RALFs did not (Fig. 1b , bottom, and Supplementary Fig. 1b ). This observation was confirmed by co-immunoprecipitation (co-IP) assays with total protein samples from Nicotiana benthamiana leaf tissue expressing LRE–Myc and FERK565R–Flag (Fig. 1c and Supplementary Fig. 1c ). RALF4 is secreted into the apoplast of the pollen tube 26 , and our promoter–β-glucuronidase (GUS) analysis confirmed the expression of RALF4 and RALF19 during pollen tube reception (Extended Data Fig. 1a ). This result suggests that RALF4 and RALF19 interact with FER and LRE in the synergid. In summary, FER–LRE may function as co-receptors for pollen tube RALFs, including RALF4 and RALF19. In response to pollen tube arrival, the synergids produce specific Ca 2+ fluctuations required for pollen tube reception 2 , 3 . If RALFs signal the arrival of the pollen tube, they should produce a similar Ca 2+ entry pattern in the synergids when applied to isolated ovules in vitro. We generated transgenic plants that express the Ca 2+ indicator GCaMP6s 27 driven by a synergid-specific promoter (p MYB98 ) 28 and examined changes in cytosolic calcium concentration ([Ca 2+ ] cyt ) in the synergids in response to RALFs. In the wild-type (WT) synergids, RALF4 and RALF19, but not RALF34, induced increases in [Ca 2+ ] cyt (Fig. 1d,i and Supplementary Videos 1 , 2 and 3 ), which is consistent with the idea that RALF34 binds to the same receptors but functions differently 7 . Several other pollen tube RALFs (RALF8, RALF9, RALF15, RALF25, RALF26 and RALF30) that did not bind FER also failed to induce synergid [Ca 2+ ] cyt changes (Extended Data Fig. 2 ). In the pollen tube reception assay, the receptive synergid and nonreceptive synergid in one ovule showed distinct Ca 2+ spikes 2 . 
However, exogenous RALF4 and RALF19 induced similar Ca 2+ transients in both synergid cells of one ovule, which suggests that RALFs in the solution may have diffused evenly towards the two synergids. By contrast, the pollen tube positions itself closer to one of the two synergids, which leads to asymmetrical signalling. We then compared synergid Ca 2+ changes triggered by the pollen tube 2 , 3 with those induced by RALFs. As reported earlier 3 , synergid Ca 2+ dynamics proceeded in three phases as the pollen tube progressed to reception: (1) [Ca 2+ ] cyt oscillated at a regular pace as the pollen tube entered the micropyle and approached the synergid; (2) [Ca 2+ ] cyt was sustained at a higher level after the pollen tube penetrated the synergid; (3) [Ca 2+ ] cyt was reduced when the synergid collapsed. The amplitude and periodicity of Ca 2+ oscillations triggered by RALF4 and RALF19 were similar to phase I of Ca 2+ spiking induced by the pollen tube (Fig. 1d–i , Extended Data Fig. 3 and Supplementary Video 4 ). This result suggests that pollen-tube-derived RALF4 and RALF19 mimic the early phase of pollen tube arrival, before mechanical penetration. We then tested whether RALF4 and RALF19 induced such Ca 2+ spiking in a FER–LRE-dependent manner. Synergids from fer-4 and lre-5 mutants failed to respond to RALF4, RALF19 or pollen tube arrival 3 (Fig. 1e,f,i , Extended Data Fig. 3 and Supplementary Videos 5 and 6 ). In addition to the fer and lre mutants, a mutant lacking NTA was non-responsive to 0.5 µM RALF4/19, although a portion of nta-3 synergids showed weaker responses to an increased level of RALF4/19 (2 µM) in Ca 2+ imaging assays (Fig. 1g,i , Extended Data Fig. 3 and Supplementary Video 7 ). Compared with the fer and lre mutants, the weaker defect in nta-3 suggests that there may be partial functional redundancy with another NTA-like component in this system.
MLO proteins are Ca 2+ channels

The NTA protein is a member of the MILDEW RESISTANCE LOCUS O (MLO) protein family 6 . Originally discovered as a genetic determinant of resistance against powdery mildew in barley 29 , the MLO proteins feature multi-transmembrane domains and a CaM-binding domain (CaMBD) 30 , 31 . The Arabidopsis genome encodes 15 MLO proteins ( At MLO1– At MLO15), some of which are functionally linked to root thigmomorphogenesis 32 , powdery mildew susceptibility 33 and pollen tube growth 34 . NTA ( At MLO7) is specifically expressed in synergids and appears to function downstream of the FER–LRE module in pollen tube reception 6 . The biochemical function of MLO proteins remains unknown, which represents a crucial gap in knowledge with respect to the signalling pathways in which they participate 31 . Genetic analyses of NTA indicate that it works together with FER–LRE co-receptors in the same pathway to induce Ca 2+ influx. Because NTA and other MLO proteins are multi-transmembrane proteins, we hypothesized that NTA is one of the missing Ca 2+ -transporting proteins responsible for synergid Ca 2+ entry. To test whether MLO proteins transport Ca 2+ , we performed Ca 2+ transport assays with all 15 MLO members from Arabidopsis ( At MLO1– At MLO15), the barley MLO ( Hv MLO) and 2 MLO members from Physcomitrella patens ( Pp MLO2 and Pp MLO3), which represent dicot, monocot and basal land plant MLO proteins, respectively. In single-cell Ca 2+ imaging assays 35 , At MLO2, At MLO3, At MLO4, At MLO10, At MLO12, Hv MLO, Pp MLO2 and Pp MLO3 mediated Ca 2+ entry when expressed in COS7 cells (Fig. 2a and Extended Data Fig. 4 ). To confirm these Ca 2+ imaging results, we used patch-clamping to directly measure the transport activity of At MLO2 expressed in HEK293T cells. We recorded large inward currents mediated by At MLO2 that depended on external Ca 2+ concentrations (Extended Data Fig. 5a,b ).
Moreover, At MLO2 was permeable to Ba 2+ and Mg 2+ , but not to K + or Na + (Extended Data Fig. 5c–f,i–l ). Furthermore, two typical Ca 2+ channel blockers, lanthanum (La 3+ ) and gadolinium (Gd 3+ ), inhibited the At MLO2-mediated inward currents (Extended Data Fig. 5g,h ). Similar to At MLO2, Hv MLO also mediated Ca 2+ influx (Extended Data Fig. 6 ). These results indicate that MLO proteins function as Ca 2+ -permeable channels. Fig. 2: MLO family proteins, including NTA, are Ca 2+ -permeable channels. a , [Ca 2+ ] cyt increases measured by single-cell fluorescence imaging in COS7 cells expressing various MLO proteins 1–15 ( At MLO1– At MLO15). Hv , Hv MLO; Pp 2, Pp MLO2; Pp 3, Pp MLO3. b , FER and LRE facilitated the PM localization of NTA–GFP. The white rectangle indicates the area magnified in the bottom panels. n = 3 independent repeats. Scale bars, 5 μm (bottom row) or 10 μm (top row). c , Co-IP of HA-tagged NTA, Myc-tagged LRE and Flag-tagged FER expressed in Xenopus oocytes with or without the addition of RALFs (each 5 μM) as indicated. Anti-Flag M2 affinity beads were used for co-IP, and western blots were probed with antibodies against Myc, HA and Flag. n = 3 independent repeats. d , [Ca 2+ ] cyt increases measured by single-cell imaging of COS7 cells expressing NTA (N), FER (F), FERK565R (kinase-dead version) (KD) or LRE (L), or combinations thereof. e , f , Typical whole-cell recordings ( e ) and current–voltage curves ( f ) of inward currents in HEK293T cells expressing NTA, FER and LRE. g , h , Similar analyses were conducted for HEK293T cells expressing NTA, LRE and the kinase-dead version of FER. i , The C-terminal cytosolic tail of MLO1 facilitated the PM localization of NTA–GFP. NTA–MLO1 denotes the chimeric protein of NTA and the MLO1 C-terminal tail. n = 3 independent repeats. Scale bars, 5 μm (right column) or 10 μm (left column).
j , k , Representative cytosolic Ca 2+ spiking curves ( j ) and statistical analysis of peak values ( k ) in COS7 cells expressing the NTA–MLO1 chimeric or original channels. l , m , Typical whole-cell recordings ( l ) and current–voltage curves ( m ) of inward currents in HEK293T cells expressing the NTA–MLO1 chimeric or original channels. For Ca 2+ imaging in COS7 cells, n = 6 replicates, and about 60 cells were imaged in each replicate. For patch-clamp, n = 8 cells. Error bars depict the mean ± s.e.m. All P values were determined by two-tailed Student’s t -test.

The FER–LRE–NTA trio mediates Ca 2+ entry

Many of the tested MLO proteins (including NTA) failed to mediate Ca 2+ entry in HEK293T or COS7 cells (Fig. 2a , Extended Data Fig. 4 and Supplementary Videos 8 and 9 ). We speculated that they may require other components to be active or that they may not be properly targeted to the plasma membrane (PM). Indeed, NTA primarily accumulates in a Golgi-associated compartment 36 and relocates to the synergid filiform apparatus in a FER- and LRE-dependent manner 6 , 37 . In our Ca 2+ transport assays, PM localization would be crucial for mediating Ca 2+ entry if NTA is indeed a Ca 2+ channel. NTA–GFP was largely localized to intracellular punctate structures in COS7 cells (Fig. 2b ). When co-expressed with FER and LRE, however, NTA–GFP was targeted to the PM (Fig. 2b ). Such PM targeting was not achieved by co-expressing NTA–GFP with either FER or LRE alone, which is consistent with the finding that LRE–LLG1 physically interacts with and chaperones FER to the PM 19 and that FER is required for the redistribution of NTA to the PM 6 . We further showed that NTA directly interacted with FER and that LRE enhanced this interaction (Fig. 2c and Supplementary Fig. 1d ), which indicates that FER, LRE and NTA form a complex, which we refer to here as the NTA trio.
As FER and LRE together target NTA to the PM, we tested whether the NTA trio produces a functional channel at the PM. We co-expressed NTA with FER and LRE in COS7 and HEK293T cells and then performed imaging assays and patch-clamp recordings, which showed that NTA mediated Ca 2+ influx (Fig. 2d–f , Supplementary Video 10 and Extended Data Fig. 7 ). Similar to At MLO2 and Hv MLO, the NTA trio conducted currents carried by divalent cations (Ca 2+ , Ba 2+ and Mg 2+ ), but not monovalent cations (K + and Na + ) (Extended Data Fig. 8 ). The Ca 2+ channel activity of the NTA trio was inhibited by La 3+ and Gd 3+ , which also blocked synergid Ca 2+ spiking (Extended Data Fig. 8 ). The kinase-dead version of FER also formed an active NTA trio (Fig. 2d,g,h ), which is consistent with an earlier finding that the kinase activity of FER is not required for pollen tube reception 38 , 39 . Our data suggest that NTA is an active Ca 2+ channel but requires FER–LRE for targeting to the PM. We tested this idea by constructing a PM-localized chimeric NTA–MLO1 protein 36 , 37 ; this chimera mediated Ca 2+ influx independently of FER–LRE (Fig. 2j–m ).

RALFs enhance FER–LRE–NTA activity

We then tested the effect of RALFs on the activity of the NTA trio. RALF4 and RALF19, but not RALF34, significantly enhanced the Ca 2+ channel activity of the NTA trio (Fig. 3a–d ), which is consistent with the finding that RALF4 and RALF19 strongly induce increases in synergid Ca 2+ (Fig. 1d,i ). We further confirmed this observation by reconstituting the RALF–FER–LRE–NTA pathway in Xenopus oocytes and monitoring channel activity by two-electrode voltage-clamp assays (Fig. 3e,f ). The chimeric NTA–MLO1 co-expressed with FER and LRE was also enhanced by RALF4 and RALF19, but not by RALF34 (Fig. 3g–j ).
Regarding the mechanism underlying the RALF4- and RALF19-dependent activation of the channel, a previous study 6 has shown that NTA is redistributed to the filiform apparatus of the synergid following the arrival of the pollen tube. We examined the PM localization of NTA in response to RALFs, but did not observe any discernible effect of RALF4 and RALF19 application (Extended Data Fig. 9a ). In this mammalian cell system, FER and LRE clearly facilitated the PM localization of NTA (Fig. 2b ), which implies that a portion of NTA can be localized in the PM of the synergid in a pollen-tube-independent manner. Following pollen tube arrival, RALF4 and RALF19, and possibly other pollen tube signals (for example, mechanical stimulus), may further activate the Ca 2+ channel by recruiting the trio to a specific location (for example, the filiform apparatus). Fig. 3: RALFs enhance the Ca 2+ channel activity of the FER–LRE–NTA trio. a , b , Representative cytosolic Ca 2+ spiking curves ( a ) and statistical analysis of peak values ( b ) in COS7 cells expressing the FER–LRE–NTA trio or mock cells treated with various RALFs. The arrowheads indicate the time points at which 10 mM Ca 2+ was applied. c , d , Typical whole-cell recording traces using the ramping protocol ( c ) and amplitudes at −180 mV ( d ) of Ca 2+ -permeable inward currents in HEK293T cells expressing FER–LRE–NTA or mock cells treated with various RALFs. e , f , Typical two-electrode voltage-clamp recordings ( e ) and current amplitudes at −160 mV ( f ) of inward currents in Xenopus oocytes expressing FER–LRE–NTA or mock water-injected oocytes treated with various RALFs. g , h , Representative cytosolic Ca 2+ spiking curves ( g ) and statistical analysis of peak values ( h ) in COS7 cells expressing the chimeric NTA–MLO1 or FER–LRE–NTA–MLO1 treated with various RALFs. 
i , j , Typical whole-cell recording traces using the ramping protocol ( i ) and amplitudes at −180 mV ( j ) of Ca 2+ -permeable inward currents in HEK293T cells expressing FER–LRE–NTA–MLO1 treated with various RALFs. For Ca 2+ imaging in COS7 cells, n = 8 replicates, and about 60 cells were imaged in each replicate. For HEK293T cell recordings, n = 8 cells. For oocyte recordings, n = 8 oocytes. Error bars depict the mean ± s.e.m. All P values were determined by two-tailed Student’s t -test.

During the revision of this manuscript, five other pollen tube RALFs (RALF6, RALF7, RALF16, RALF36 and RALF37) were reported to bind FER, ANJEA (ANJ) and HERCULES RECEPTOR KINASE 1 (HERK1) and to function redundantly in polytubey block and pollen tube reception 40 . We analysed RALF37 in our assays and found that RALF37, similar to RALF4 and RALF19, also triggered synergid Ca 2+ changes and activated the NTA trio (Extended Data Fig. 10 ). This result suggests that multiple RALFs derived from the pollen tube serve as signals to trigger synergid Ca 2+ spiking, which in turn leads to pollen tube reception. Consistent with this observation, single mutants of ralf4 and ralf19 did not show any detectable phenotypic defects (Extended Data Fig. 1b,c ).

NTA–CaM shapes synergid Ca 2+ spiking

MLO proteins contain a CaMBD in the intracellular carboxy-terminal region 30 , 31 , which suggests that these proteins may be regulated by CaM binding, a typical autoregulatory mechanism for many Ca 2+ channels in both animal and plant systems 41 , 42 . We examined how CaM affects the channel activity of MLO proteins by co-expressing CaM7 with the NTA trio or other MLO proteins, including At MLO2, Hv MLO and NTA–MLO1, in COS7 cells. Substantial inhibition of Ca 2+ entry was observed in all cases (Fig. 4a,b ), revealing an inhibitory feedback mechanism of MLO channel activity by CaM.
We confirmed this mechanism using a mutant NTA (named NTA RR ), in which Leu455 and Trp458 were mutated to Arg to abolish its CaM binding capacity 37 . NTA RR failed to respond to CaM-mediated inhibition (Fig. 4a,b ). Although these mutations in the CaMBD partially impaired the redistribution of NTA to the filiform apparatus in the synergid 37 , the NTA RR trio was recruited to the PM in COS7 cells (Extended Data Fig. 9b ), which is consistent with the finding that the NTA RR trio still conducted Ca 2+ entry. Fig. 4: CaM inhibition of NTA Ca 2+ channels is involved in modelling the Ca 2+ spiking pattern in synergids. a , b , Typical Ca 2+ spiking patterns ( a ) and peak values ( b ) in COS7 cells expressing MLO proteins and At CaM7. The arrowheads indicate the time points at which 10 mM external Ca 2+ was applied. n = 8 replicates, and about 60 cells were imaged in each replicate. For b , numbers are as indicated for a (red). c – f , Typical whole-cell recordings of inward currents in HEK293T cells expressing the NTA trio, At CaM7 or mock cells when [Ca 2+ ] cyt was 0 nM ( c , d ) or 1 μM ( e , f ). g , Current amplitudes at −180 mV of HEK293T cells expressing the NTA trio and At CaM7 when [Ca 2+ ] cyt was 0 nM or 1 μM. n = 8 cells. Numbers are as indicated in c , d (blue) and e , f (green). h – j , Representative Ca 2+ spiking patterns in synergids in response to PT arrival or 0.5 µM RALF4 for WT ( h ), nta-3 ( i ) and NTA RR ( j ). k , The peak values of Ca 2+ spiking as in h – j . n values are shown in Extended Data Fig. 3 . Error bars depict the mean ± s.e.m. All P values were determined by two-tailed Student’s t -test. l , Model of RALF–FER–LRE–NTA pathway leading to synergid Ca 2+ changes. Following PT arrival, PT-derived RALF4 and RALF19 bind FER–LRE, and this complex recruits and activates NTA, a CaM-gated Ca 2+ channel, to initiate Ca 2+ spiking. 
As CaM binds to Hv MLO in a Ca 2+ -dependent manner 43 , we proposed that CaM may inhibit NTA channel activity following increased [Ca 2+ ] cyt as a negative feedback mechanism. We tested this hypothesis by titrating [Ca 2+ ] cyt and expressing a CaM7 mutant lacking the Ca 2+ -binding EF motif 44 . The results showed that CaM7 required Ca 2+ binding to inhibit the activity of the NTA trio (Fig. 4c,e,g ). Similarly, the NTA RR mutant (which is defective in CaM binding) became constitutively active (Fig. 4d,f,g ). These results support a model in which RALFs activate the NTA channel to increase synergid [Ca 2+ ] cyt to a threshold that in turn enables CaM binding and inhibition of NTA channel activity. Specific Ca 2+ spiking in synergids is essential for pollen tube reception 2 , 3 . We hypothesized that the Ca 2+ /CaM-dependent feedback inhibition of the NTA channel provides a mechanism for shaping such a Ca 2+ signature. To test this idea in planta, we generated transgenic plants harbouring the NTA RR mutant driven by the NTA promoter in the nta-3 mutant background and examined synergid [Ca 2+ ] cyt spikes in response to RALF4. [Ca 2+ ] cyt spiking in synergids was amplified in NTA RR plants (Fig. 4h–k ). We also observed higher levels of Ca 2+ increase in NTA RR synergids in response to pollen tube arrival and a disordered oscillation pattern compared with WT synergids (Fig. 4j,k ). This result indicates that the NTA RR mutant, which lacks CaM-dependent inhibition, produces a sustained increase in [Ca 2+ ] cyt , which causes a defect in pollen tube reception 37 .

Conclusions

We identified pollen-tube-derived RALF peptides as ligands for the FER–LRE co-receptor complex, which recruits NTA, a CaM-gated Ca 2+ channel, to PM domains to initiate Ca 2+ entry and pollen tube reception (Fig. 4l ).
This work demonstrated a mechanistic process that integrates the action of FER, LRE and NTA, three players genetically connected in synergid–pollen tube interaction. In addition, the identification of MLO proteins as Ca 2+ channels uncovered the long sought-after common biochemical pathway (Ca 2+ entry) underlying MLO function in multiple physiological processes, including, but possibly not limited to, mildew resistance, root mechanosensing, pollen tube growth and fertilization in plants. Indeed, Ca 2+ is a core component in all these processes 11 , 14 , and our finding here sets the stage for extensive future research to address mechanisms in various MLO-dependent processes. As FER–LLG co-receptors are often connected to Ca 2+ spiking in other signalling processes beyond reproduction 21 , the identification of an MLO channel downstream of the FER–LRE co-receptors offers a possible mechanism for other RALF–FER–LLG-dependent pathways. In the context of Ca 2+ signalling, which is a common theme in all eukaryotes, MLO proteins represent a family of Ca 2+ channels specific to the plant kingdom, which suggests that instead of having fewer Ca 2+ channels than animals as currently thought 11 , plants may feature channels distinct from animal counterparts, and more such channels await discovery. In the context of reproduction, our study raises several important questions for future research into the mechanistic details of male–female interactions. For example, although RALF4 and RALF19 bind to FER–LRE and enhance the channel activity of NTA, the mechanism underlying this activation awaits resolution by structural analysis of the FER–LRE–NTA trio in the presence of the RALF ligands. Before pollen tube reception, pollen tube integrity and guidance also involve the function of several RALF peptides, the FER family of receptor-like kinases and MLO proteins. Our study provides a strategy for further research to link these components in distinct Ca 2+ signalling pathways. 
A previous report 34 noted that MLO5 and MLO9 are trafficked together with cyclic nucleotide-gated channel 18 (CNGC18), another Ca 2+ channel with an essential role in pollen tube growth and guidance. This raises an interesting question regarding the functional interplay of multiple Ca 2+ channels in shaping specific Ca 2+ signatures in pollen tubes, synergids and other cell types in plants 42 . Methods Plant material and growth conditions Seeds were sterilized with 10% (v/v) bleach and sown on agar plates containing half-strength Murashige and Skoog (1/2 MS) medium (1/2 MS, 0.8% (w/v) Phyto agar and 1% (w/v) sucrose, pH adjusted to 5.8 with KOH). Plates were incubated at 4 °C for 3 days for stratification and then transferred to the soil pots in a 22 °C growth room with a 16-h light/8-h dark cycle (100 μmol m −2 s −1 ). The seeds for fer-4 (GABI_GK106A06), lre-5 (CS66102) and nta-3 (SALK_027128) were purchased from Arabidopsis Biological Resource Center. The ralf4 and ralf19 mutants were generated by CRISPR as previously reported 7 . Transgenic plants The coding DNA sequence (CDS) of GCaMP6s was PCR-amplified using HBT-GCaMP6-HA as the template 27 and fused to the MYB98 promoter region 28 , amplified from Columbia-0 (Col-0) genomic DNA in the pCAMBIA 2300 vector. The binary construct was transformed into Arabidopsis thaliana (Col-0) plants through Agrobacterium (GV3101) using the floral dip method 45 . Transgenic plants were selected on 1/2 MS plates containing 50 mg l –1 kanamycin, and one homozygous transgenic p MYB98 - GCaMP6s line was then crossed with fer-4 , lre-5 and nta-3 and further brought to homozygosity with both the GCaMP6s and the fer-4 , lre-5 and nta-3 genetic backgrounds. The NTA RR mutant was produced by site-directed mutagenesis to replace Leu455 and Trp458 with Arg. The NTA promoter was PCR-amplified from Col-0 genomic DNA and fused with the NTA RR CDS in the pCAMBIA 1305 vector and transformed into plants as described above. 
β-Glucuronidase staining The mature pistils of the transgenic plants carrying pro RALF4 / 19 :β-glucuronidase (GUS) were dissected to isolate intact ovules that were then fixed in 80% acetone overnight. Samples were then incubated with GUS staining buffer (50 mM sodium phosphate, pH 7.2, 2 mM potassium ferrocyanide, 2 mM potassium ferricyanide, 0.2% Triton X-100 and 2 mM X-Gluc). Images were taken with a Zeiss AxioObserver Z1 inverted microscope. Aniline blue staining Pollen grains of a freshly opened flower of WT or mutant lines were used to pollinate WT pistils that had been emasculated a day earlier. After 24 h, the pistils were fixed in acetic acid/ethanol (1:3) overnight. They were then washed stepwise in 70% ethanol, 50% ethanol, 20% ethanol and ddH 2 O. The pistils were treated with 8 M NaOH overnight to soften the tissues and then washed with ddH 2 O three times before staining with aniline blue solution (0.1% aniline blue, 50 mM K 3 PO 4 ) for 2 h. The stained pistils were observed using a Zeiss AxioObserver Z1 inverted microscope. Mammalian cell culture, vector construction and transfection The CDS of GCaMP6s was amplified from HBT-GCaMP6-HA 27 and cloned into a dual-promoter vector, pBudCE4.1 (Invitrogen), with each CDS for NTA, FER, LRE, NTA–MLO1 for co-expression in HEK293T or COS7 cells. The chimeric NTA–MLO1 CDS was generated as previously described 36 . Mammalian cells were cultured in DMEM supplemented with 10% FBS in a 5% CO 2 incubator at 37 °C with controlled humidity. HEK293T or COS7 cells were transfected using a Lipofectamine 3000 Transfection Reagent kit (Invitrogen). Plasmids for transfection were extracted from Escherichia coli (DH5α) using a Plasmid Mini kit (Qiagen), and 2 μg plasmid DNA was added into each well of 6-well plates (Nunc) containing the cells (70–80% confluent). 
To confirm that the cells were successfully transfected, green and/or red fluorescence signals were examined using an inverted fluorescence microscope (Zeiss AxioObserver Z1 inverted microscope) before patch-clamp and Ca 2+ imaging experiments 48 h after transfection. Whole-cell patch-clamp recording The whole-cell patch-clamp experiments were performed using an Axopatch-200B patch-clamp setup (Axon Instruments) with a Digidata 1550 digitizer (Axon Instruments) as previously described 46 . Clampex 10.7 software (Axon Instruments) was used for data acquisition, and Clampfit 10.7 was used for data analysis. To record Ca 2+ currents across the PM of HEK293T cells, the standard bath solution contained 140 mM N -methyl- d -glucamine (NMDG)-Cl, 10 mM CaCl 2 , 10 mM glucose and 10 mM HEPES, adjusted to pH 7.2 with Ca(OH) 2 . The standard pipette solution contained 140 mM Cs-glutamate, 6.7 mM EGTA, 3.35 mM CaCl 2 and 10 mM HEPES, adjusted to pH 7.2 with CsOH. Free [Ca 2+ ] in the pipette solution was 175 nM, as calculated using the Webmaxc Standard. The 10 mM Ca 2+ in the bath solution was removed to attain 0 mM Ca 2+ or substituted with 10 mM Ba 2+ or 10 mM Mg 2+ as indicated. A ramp voltage protocol of 2-s duration from −160 mV to +30 mV (holding potential 0 mV) was applied 1 min after achieving a whole-cell configuration, and currents were recorded every 20 s, with 5 repeats in total for each cell. The five current traces were used for statistical analysis to produce average current–voltage curves. For inward K + current recordings in HEK293T cells, the bath solution contained 140 mM NMDG-Cl, 14.5 mM KCl, 10 mM glucose and 10 mM HEPES, adjusted to pH 7.2 with KOH. The pipette solution contained 145 mM K-glutamate, 3.35 mM EGTA, 1.675 mM CaCl 2 and 10 mM HEPES, adjusted to pH 7.2 with KOH. The free [Ca 2+ ] in the pipette solution was 100 nM, as calculated using the Webmaxc Standard. 
For inward Na + current recordings in HEK293T cells, the bath solution contained 140 mM NaCl, 10 mM glucose and 10 mM HEPES, adjusted to pH 7.2 with NaOH. The pipette solution contained 135 mM CsCl, 10 mM NaCl, 3.35 mM EGTA, 1.675 mM CaCl 2 and 10 mM HEPES, adjusted to pH 7.2 with CsOH. The free [Ca 2+ ] in the pipette solution was 100 nM, as calculated using the Webmaxc Standard. A step voltage protocol of 4-s duration for each voltage from −160 mV to +60 mV with a +20 mV increment was used for K + and Na + current recordings in HEK293T cells 1 min after achieving a whole-cell configuration. Two-electrode voltage-clamp recording from Xenopus oocytes The CDS for NTA-3×HA, LRE-4×Myc and FER-3×Flag were cloned into the pGEMHE Xenopus oocyte expression vector. To construct LRE-4×Myc, the 4×Myc tag sequence was inserted after the first 60 bp of the LRE CDS encoding the signal peptide, followed by the downstream 438 bp of LRE as previously described 23 . Two-electrode voltage-clamp assays were performed as previously reported 35 , 44 . The capped RNA (cRNA) was synthesized from 1 μg of a linearized plasmid DNA template using a mMESSAGE mMACHINE T7 kit (Ambion) and 10 ng of each cRNA, in a total volume of 46 nl, was injected into each oocyte. Injected oocytes were incubated in ND96 solution (96 mM NaCl, 2 mM KCl, 1 mM MgCl 2 , 1.8 mM CaCl 2 , 10 mM HEPES/NaOH, pH 7.4) at 18 °C for 2 days before electrophysiological recording. Oocytes were voltage-clamped using a TEV 200A amplifier (Dagan), a Digidata 1550 A/D converter, and recorded using CLAMPex 10.7 software (Axon Instruments). The pipette solution contained 3 M KCl. The standard bath solution contained 30 mM CaCl 2 , 1 mM KCl, 2 mM NaCl, 130 mM mannitol and 5 mM MES-Tris (pH 5.5). Voltage steps were applied from +40 mV to −160 mV in −20 mV decrements over 0.8 s. 
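The averaging step described in the patch-clamp section (five repeated ramp recordings per cell combined into one current–voltage curve) can be sketched as follows. This is an illustrative reconstruction, not the authors' analysis code (which used Clampfit 10.7); the trace length and toy current values are assumptions.

```python
from statistics import mean, stdev

def average_iv(traces, v_start=-160.0, v_end=30.0):
    """traces: list of equal-length current traces (pA) recorded during a
    linear voltage ramp from v_start to v_end (mV).
    Returns (voltages, mean current, s.e.m.) per sample point."""
    n = len(traces[0])
    step = (v_end - v_start) / (n - 1)
    voltages = [v_start + i * step for i in range(n)]        # ramp command (mV)
    columns = list(zip(*traces))                             # samples per voltage
    i_mean = [mean(c) for c in columns]                      # average current (pA)
    i_sem = [stdev(c) / len(traces) ** 0.5 for c in columns]
    return voltages, i_mean, i_sem

# Toy data: five repeats of a roughly linear inward current.
traces = [[-1000.0 + r + 5.0 * i for i in range(5)] for r in range(5)]
v, i_mean, i_sem = average_iv(traces)
```

Averaging repeats per cell before comparing cells keeps the cell, not the sweep, as the statistical unit, matching the per-cell n values reported in the figure legends.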
Single-cell Ca 2+ imaging in mammalian cells HEK293T or COS7 cells expressing GCaMP6s and various combinations of candidate channel proteins were monitored using a Zeiss AxioObserver Z1 inverted microscope (Ivision 4.5 software) with a ×20 objective as previously reported 35 . The interval of data acquisition was 2 s. The standard solution for Ca 2+ imaging contained 120 mM NaCl, 3 mM KCl, 1 mM MgCl 2 , 1.2 mM NaHCO 3 , 10 mM glucose, 10 mM HEPES, pH 7.5. About 60 s after initiation of the imaging procedure, the bath was perfused using a peristaltic pump with the standard solution supplemented with 10 mM Ca 2+ and/or RALFs to elicit Ca 2+ entry through active channels. Synergid cell Ca 2+ imaging For the RALF-induced synergid [Ca 2+ ] cyt increase experiment, unfertilized ovules were dissected from flowers as previously described 47 . The pistil was dissected to remove the ovules from the placenta using a surgical needle. The isolated ovules were placed in pollen germination medium (PGM), which contained 18% sucrose, 0.01% boric acid, 1 mM MgSO 4 , 1 mM CaCl 2 , 1 mM Ca(NO 3 ) 2 and 0.5% agarose (pH 7.0) 48 . After 2 h of incubation at 22 °C and 100% relative humidity, synergids expressing GCaMP6s were monitored using a Zeiss AxioObserver Z1 inverted microscope (Ivision 4.5 software) with a ×20 objective, and various RALFs were added to the ovules as indicated. For the pollen-tube-induced synergid [Ca 2+ ] cyt increase experiment, we followed a previously published protocol 2 , 3 . Dissected ovules of emasculated flowers expressing GCaMP6s were placed on PGM. Unpollinated pistils were cut with a razor blade (VWR International) at the junction between the style and ovary. The stigmas were placed on the PGM and manually pollinated with pollen grains expressing DsRed. Pollinated stigmas were positioned 150 µm away from the ovules, and pollen tube growth was monitored using a fluorescence microscope. 
Time-lapse Ca 2+ imaging began after the pollen tube entered the ovule micropyle. Protein localization Transfected COS7 cells were washed with PBS and mounted onto slides for image acquisition with a Zeiss LSM 880 confocal microscope and ZEN2012 software. Peptide purification All tag-free RALF peptides used in this study were purified from insect cells (High 5). The pFastBac vectors containing RALF4, RALF19 and LRX8 were gifts from J. Santiago (University of Lausanne), and RALF4 and RALF19 peptides were purified as previously reported 49 . For RALF8, RALF9, RALF15, RALF25, RALF26, RALF30 and RALF34, the CDSs encoding the mature RALF peptides were cloned into a modified pACEBAC1 (Geneva Biotech) vector in which RALFs were amino-terminally fused to a 30K signal peptide, a 10×His tag, thioredoxin A and a tobacco etch virus (TEV) protease site. High 5 cells were infected with virus with a multiplicity of infection of 3 and incubated for 1 day at 28 °C and 2 days at 22 °C at 110 r.p.m. on an orbital shaker. The secreted peptides were purified from the supernatant with a Ni 2+ column (Ni-NTA, Qiagen), and incubated with TEV protease (NEB) to remove the tags. Peptides were further purified by size-exclusion chromatography on a Superdex 200 increase 10/300 GL column (GE Healthcare), equilibrated in 20 mM sodium citrate, pH 5.0, 150 mM NaCl. The peptides were diluted with sterile pure water before use. Protein–protein interaction assays For pull-down assays, MBP–FER ex , GST–RALFs and GST–LRE were produced in E. coli Rosetta (DE3) by 0.1 mM IPTG induction overnight at 16 °C and bound to amylose or glutathione resins for purification as previously reported 19 , 50 . The pull-down buffer contained 40 mM Tris–HCl, pH 7.5, 100 mM NaCl, 1 mM EDTA, 5% glycerol, 5 mM MgCl 2 , 1 mM PMSF, complete protease inhibitor cocktail (Roche) at 1:100 dilution, and 0.4% Triton X-100. Proteins were applied to amylose resin and incubated at 4 °C for 2 h with gentle mixing. 
The resin was washed three times in pull-down buffer. Proteins that remained bound to the resin were eluted by mixing with SDS–PAGE loading buffer, boiled for 5 min and subjected to 12% SDS–PAGE and western blotting. For co-IP of tobacco leaves, 35S:FERK565R–3×Flag and 35S:LRE–4Myc constructs were co-transformed into Agrobacterium tumefaciens (strain GV3101) and infiltrated into N. benthamiana leaves 23 . Sixty hours after inoculation, leaves were detached and treated with 5 μM RALFs for 2 h before total protein was extracted and applied to anti-Flag M2 affinity agarose gel (Sigma-Aldrich). After incubation at 4 °C for 2 h with gentle mixing, the resin was washed three times in pull-down buffer, and the bound protein was eluted by mixing with SDS–PAGE loading buffer, boiled for 5 min and subjected to 10% SDS–PAGE and western blotting. For co-IP of Xenopus oocytes, cRNAs of FER–3×Flag, NTA–3HA and LRE–4Myc were injected into oocytes, incubated for 3 days, followed by treatment with 5 μM RALFs for 2 h. Total protein was extracted in the pull-down buffer and then applied to anti-Flag M2 affinity agarose gel (Sigma-Aldrich). After incubation at 4 °C for 2 h with gentle mixing, the resin was washed three times in pull-down buffer, and the bound protein was eluted by mixing with SDS–PAGE loading buffer, boiled for 5 min and subjected to 10% SDS–PAGE and western blotting. For chemiluminescence detection, the following antibodies were used: anti-GST–HRP (1: 2,000 dilution), anti-Myc–HRP (1: 2,000 dilution), anti-HA (1: 2,000 dilution), anti-MBP (1: 2,000 dilution) and anti-mouse secondary (1:20,000 dilution) antibodies from Santa Cruz Biotechnology; and anti-Flag antibody (1:4,000 dilution) from Sigma-Aldrich. Image processing and data analysis ImageJ (v.1.51j8) was used to analyse GCaMP6s signals over time at several regions of interest. 
To calculate the fractional fluorescence change (ΔF/F), the equation ΔF/F = (F − F0)/F0 was used, where F0 denotes the baseline fluorescence, calculated as the average of F over the first 10 frames of the recording before the treatment. Microsoft Excel in Office 365 and GraphPad Prism 7.0 were used for calculation and statistical analyses of the data. Adobe Illustrator CC 2019 was used for image assembly. Clampfit 10.7 was used to analyse and process data from the electrophysiological experiments. All experiments were independently reproduced in the laboratory. Reporting summary Further information on research design is available in the Nature Research Reporting Summary linked to this paper. Data availability The data supporting the findings of this study are available within the paper and its Supplementary Information files. Source data are provided with this paper.
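The ΔF/F computation described in the image-analysis section is straightforward to reproduce. A minimal sketch (the trace values below are invented for illustration, not real GCaMP6s data):

```python
from statistics import mean

def delta_f_over_f(trace, baseline_frames=10):
    """Fractional fluorescence change: dF/F = (F - F0)/F0, where F0 is the
    average of the first `baseline_frames` pre-treatment frames."""
    f0 = mean(trace[:baseline_frames])
    return [(f - f0) / f0 for f in trace]

# Toy GCaMP6s trace: flat baseline, then a Ca2+ spike.
trace = [100.0] * 10 + [150.0, 200.0, 180.0]
dff = delta_f_over_f(trace)
# Baseline frames give 0.0; the 200.0 frame gives (200-100)/100 = 1.0.
```

Normalizing each region of interest to its own baseline makes spikes comparable across cells that differ in resting fluorescence or indicator expression.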
Researchers in the Department of Plant and Microbial Biology (PMB) have uncovered the intricate molecular processes that precede reproduction in flowering plants. Published July 6 in Nature, the findings document a previously unknown molecular process that serves as a method of communication during fertilization. According to Professor Sheng Luan, chair of the PMB department and the paper's senior author, the exact mechanism for signaling has previously eluded researchers. "At the molecular level, this whole process is now more clear than ever before," he said. Sending molecular 'love notes' Flowers reproduce sexually through pollination, a process that involves the transfer of pollen from a flower's stamen (the male fertilizing organ) to the stigma on the pistil (the female reproductive organ). Once the pollen grain lodges on the stigma, a pollen tube grows from the pollen grain to an ovule to facilitate the transfer of sperm to the egg. Luan said researchers have previously recorded the presence of calcium waves preceding the fertilization process and noted that "they knew the calcium signal is important but didn't know exactly how it is produced." To analyze how the calcium wave was produced by the female cell, Luan and his co-authors introduced a biosensor to report calcium levels in the specific cell to look for signals from the male parts that trigger calcium waves. They found that pollen tubes emit several small peptides—short chains of amino acids—that can be recognized by peptide receptors on the surface of the female cell. Once activated, these receptors recruit a calcium channel to produce a calcium wave that guides the pollen tube to the ovule and initiates fertilization. Green calcium waves "pulse" in this recording provided by the Luan Lab. "You could compare this to a delivery service," Luan explained. 
"We know the small peptide molecule serves as a signal to the female part of the flower, almost like a knock on the door letting it know the pollen tube is here." The calcium waves ultimately cause the pollen tube to rupture and release the immobile sperm once it is inside the ovule, ensuring a successful fertilization process. "In a way, they basically commit suicide to release the sperm," Luan said. "Sometimes the female reproductive cell also dies in order to expose the egg so they can meet and produce new life. It's kind of a romantic journey for plant reproduction." Reinventing molecular messaging According to Luan, understanding the intricate molecular processes of fertilization may help improve the commercial yields in flowering plants. Other researchers or plant geneticists might use the findings to break the interspecies barrier, potentially opening the door to the creation of new hybrid crop species through cross-pollination. But, in addition to the potential commercial application, these findings further highlight plants' miraculous ability to communicate via molecular emissions. "From an evolutionary point of view, plants reinvented their own molecules specific to their unique communication process," he added. The calcium channels identified in this study are unique to plants, suggesting they invented a way to produce signals that are different than those found in animals. Luan said researchers have studied calcium channels for more than 30 years, uncovering how they confer resistance to powdery mildew (a fungal disease that affects a wide variety of plants) or enable mechanical sensing in root systems. Their biochemical role remained unknown until this study uncovered the specific channel activity. "Reinventing new channels to communicate in their own way, consistent with different lifestyles of plants and animals, is of general importance to biology," Luan said.
10.1038/s41586-022-04923-7
Medicine
Taxing sweet snacks may bring greater health benefits than taxing sugar-sweetened drinks
Richard D Smith et al, Are sweet snacks more sensitive to price increases than sugar-sweetened beverages: analysis of British food purchase data, BMJ Open (2018). DOI: 10.1136/bmjopen-2017-019788 Journal information: BMJ Open
http://dx.doi.org/10.1136/bmjopen-2017-019788
https://medicalxpress.com/news/2018-04-taxing-sweet-snacks-greater-health.html
Abstract Objectives Taxing sugar-sweetened beverages (SSBs) is now advocated, and implemented, in many countries as a measure to reduce the purchase and consumption of sugar to tackle obesity. To date, there has been little consideration of the potential impact that such a measure could have if extended to other sweet foods, such as confectionery, cakes and biscuits that contribute more sugar to the diet than SSBs. The objective of this study is to compare changes in the demand for sweet snacks and SSBs arising from potential price increases. Setting Secondary data on household itemised purchases of all foods and beverages from 2012 to 2013. Participants Representative sample of 32 249 households in Great Britain. Primary and secondary outcome measures Change in food and beverage purchases due to changes in their own price and the price of other foods or beverages measured as price elasticity of demand for the full sample and by income groups. Results Chocolate and confectionery, cakes and biscuits have similar price sensitivity as SSBs, across all income groups. Unlike the case of SSBs, price increases in these categories are also likely to prompt reductions in the purchase of other sweet snacks and SSBs, which magnify the overall impact. The effects of price increases are greatest in the low-income group. Conclusions Policies that lead to increases in the price of chocolate and confectionery, cakes and biscuits may lead to additional and greater health gains than similar increases in the price of SSBs through direct reductions in the purchases of these foods and possible positive multiplier effects that reduce demand for other products. Although some uncertainty remains, the associations found in this analysis are sufficiently robust to suggest that policies—and research—concerning the use of fiscal measures should consider a broader range of products than is currently the case. 
Keywords: price elasticity, sweet snacks, sugar-sweetened beverages, fiscal policy. This is an Open Access article distributed in accordance with the terms of the Creative Commons Attribution (CC BY 4.0) license, which permits others to distribute, remix, adapt and build upon this work, for commercial use, provided the original work is properly cited. Strengths and limitations of this study Detailed transaction-level data on all food and beverage purchases collected electronically from a representative sample of >30 000 households in Great Britain over 2 years. Transaction-level data allow for separating and analysing demand for ready-to-consume sweet snacks. Demand analysis accounts for zero purchases and endogeneity of total food expenditure. Data exclude purchases of foods and beverages bought and consumed outside homes. Purchase data do not necessarily amount to consumption due to possible waste. Introduction With the global prevalence of obesity and associated health risks continuing to increase, 1 2 health-related taxes have become an established policy option intended to reduce energy intake. Most of these have focused on sugar-sweetened beverages (SSBs) due to their consistent association with energy intake, weight gain, risk of type 2 diabetes, as well as dental caries. 
3 In the USA, six local jurisdictions have a tax on sugary beverages implemented due to health concerns. 4 Mexico, Finland and France apply different levels of volumetric taxes on SSBs, Hungary has adopted a system of volumetric taxes from products exceeding specified levels of sugar, and Chile taxes drinks with high levels of sugar at a rate 8% higher in comparison to drinks containing less sugar. 4 More recently, Portugal and Catalonia (Spain) implemented a two-tiered tax on sugary drinks, the United Arab Emirates and Saudi Arabia introduced a 50% tax on carbonated drinks and Brunei and Thailand introduced an excise duty on sugary drinks. 4 There are similar plans across a number of other countries such as Estonia, the Philippines, Indonesia, Israel and South Africa. 5 The UK government has confirmed an industry levy starting in April 2018 to incentivise producers to reformulate their products or, if not, to increase the price of SSBs. 6 Research to date suggests that increasing the price of SSBs generates a small, but significant, reduction in their purchase (broadly, a 10% price rise reduces purchases by 6%–8%), with a more pronounced effect in poorer households and that substitution towards other soft drink categories only minimally offsets the energy reductions achieved through decreases in SSBs. 7–18 However, there has been little research on the impact such a price increase could have on other contributors to sugar and energy intake, including alcohol 18 and sweet snack foods (such as confectionery, cakes and biscuits). With the apparent success of fiscal measures to increase the price of SSBs, it would be useful to establish whether a similar, or possibly greater, effect on consumption of snack foods could be obtained from a similar price change. 
The research presented here is the first to provide a direct analysis of the relationship between price increases and demand for sweet snack foods, within the context of demand for soft drink and alcoholic drink purchases, across different income groups. Methods The impact, or sensitivity, of demand for a product to price changes is termed the price elasticity of demand. This shows the per cent change in the demand for product X if its own price changes (own-price elasticity) or the price of other products (Y, Z) changes (cross-price elasticity). These elasticities are estimated from demand models. We apply a partial demand model, which models household expenditure shares as a function of the prices of different products and total expenditure, adjusted for overall price level. The demand model we use is adapted from the common and widely applied Almost Ideal Demand System (AIDS). The demand model and price elasticities are estimated from household expenditure data from January 2012 to December 2013, provided by Kantar Worldpanel. The data cover food and drink purchases for home consumption by a sample of British households (~36 000), representative of the population with respect to household size, number of children, social class, geographical region and age group, made in a variety of outlets, including major retailers, supermarkets, butchers, greengrocers and corner shops. The dataset consists of individual transactions, providing detailed information on the day of purchase, outlet, amount spent, volume purchased and also nutrient composition of each of the products, including sugar. Households use handheld scanners at home to record all purchases (barcodes and receipts) for products brought into the home. 
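The own- and cross-price elasticity definitions at the start of the Methods can be illustrated with a toy calculation. This is a sketch only; the log-difference (arc) form and the numbers below are assumptions for illustration, not the paper's AIDS estimates:

```python
import math

def elasticity(q0, q1, p0, p1):
    """Price elasticity of demand in log-difference (arc) form:
    % change in quantity purchased per % change in price."""
    return math.log(q1 / q0) / math.log(p1 / p0)

# Own-price: a 10% price rise in product X with purchases of X falling to
# 92.6% of baseline gives an elasticity of about -0.8 (inelastic demand).
e_own = elasticity(q0=100.0, q1=92.6, p0=1.00, p1=1.10)

# Cross-price: if a 10% price rise in product Y also cuts purchases of X
# by 1%, the cross-price elasticity is negative, marking the two products
# as complements; a positive value would mark them as substitutes.
e_cross = elasticity(q0=100.0, q1=99.0, p0=1.00, p1=1.10)
```

The sign of the cross-price elasticity is the key quantity later in the paper: negative cross-effects are what allow a price rise in one sweet snack category to reduce demand for the others as well.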
In addition, Kantar Worldpanel annually collects sociodemographic information for each household, such as household size and composition, income group, social class, tenure and geographical location (postcode district), as well as age, gender, ethnicity and highest educational classification of the main shopper. As we are interested in analysing the demand across income groups we excluded households (n=4075) for which this variable is missing (due to households’ preference to not report this). The full dataset used in the analysis thus consists of 32 249 households, of which 80% appear in both years (25 535), providing ~75 million food and beverage purchases disaggregated at the brand and package level, capturing both cross-sectional and longitudinal variation in household purchases. For analysis, data were aggregated from all foods and beverages into 13 distinct groups: (1) high-sugar soft drinks, containing more than 8 g sugar/100 mL (assuming a dilution rate of 1:4 as used by the British Soft Drinks Association for concentrated SSBs); (2) medium-sugar soft drinks, with between 5 and 8 g sugar/100 mL; (3) low-sugar soft drinks with less than 5 g of sugar/100 mL; (4) other soft drinks, including fruit juices, milk-based drinks (excluding pure milk) and water i ; (5) alcohol, including beer, lager, cider, wines and spirits; (6) cookies, biscuits and cereal bars; (7) chocolate and confectionery; (8) cake-type snacks, including cake bars, pastries, muffins, flapjack and mince pies; (9) savoury snacks, including crisps, popcorn, crackers and savoury assortments; (10) fresh and frozen meat and fish; (11) dairy; (12) fruit and vegetables; (13) rest of food and drink. Sweet snack foods—defined as foods which are at ambient temperature and able to be consumed on the go without utensils—were the most disaggregated as these were the focus for this study. 
As many beverages and snack foods are storable and not purchased very frequently, data were aggregated at 4-week intervals for each household, providing a total of n=623 459 household-month observations. As the data are aggregated to 4-weekly periods (n=26) and into 13 groups, we estimate geographical price indices from transaction prices of each individual product, based on the postcode area in which the households reside (see online supplementary appendix 1 for further details). Supplementary file 1 [bmjopen-2017-019788-SP1.pdf] Even at this level of aggregation, a substantial number of zero-expenditure months remain, as most households do not buy beverages or foods from every category every month and some households never buy certain categories during the whole sample period. A two-step procedure was followed to take account of this censoring of the dependent variable in the estimation strategy. The AIDS approach was adapted for the panel data context to allow control for unobserved household heterogeneity via a fixed-effects specification. The full specification, including the procedures for handling censoring, endogeneity of prices and total expenditure and estimation of price elasticities, is provided in online supplementary appendix 1. Due to potential differences in purchasing behaviour, the analyses are carried out in the full sample and in subsamples by household annual income (low income, <£20 000; middle income, £20 000–£49 000; high income, >£50 000). Results Table 1 presents the sociodemographic profile of the sample. A comparison of Kantar Worldpanel with representative household data from the Living Cost and Food survey (LCF) ii has found the sociodemographic and regional profiles of the samples to match well, although our sample has a slightly higher share of (1) low-income households, (2) households that own a computer and/or a car and (3) households in the South and Southeast of England. 
19 Table 1 Demographic characteristics of estimation sample. Table 2 (top panel) presents the average sugar content across the food and beverage groups, as well as total purchases of sugar (expressed as grams per person per day) that are purchased and brought home (ie, excluding purchases consumed outside homes), across each of the categories outlined above and split by income level. There is a clear income gradient: those on lower incomes purchase more sugar per person per day. It is also clear that, across all income groups, more sugar is consumed from sweet snacks (17.1 g) than from all beverages combined (alcoholic and non-alcoholic; 13.9 g). In comparison to SSBs in particular (6.9 g), sweet snacks combined contribute more than twice the amount of sugar. It is also evident that sweet snacks have a considerably higher sugar content per 100 g than beverages have per 100 mL. Table 2 Purchases of sugar (g) per person and day in 2013 and share (%) of non-zero observations across the food groups. The bottom panel of table 2 shows the share of households that purchase products from each of the food groups during the 26 4-week periods. A higher share of non-purchases (eg, only 13% of households purchase medium-sugar soft drinks across the periods) has implications for methodology, which are discussed in the appendix, but also provides an overview of the regularity of purchases. Approximately half of the households (49%) purchase high-sugar soft drinks across the 26 4-week periods. Low-sugar soft drinks are bought more frequently (69% of observations are positive across household periods). In comparison, cookies and biscuits as well as chocolate and confectionery are bought more frequently (77% and 69%), and cake-type snacks are bought less frequently (37%). 
In comparison to low and high-income households, middle-income households have a slightly higher frequency of purchase of high-sugar soft drinks and sweet snacks. Table 3 presents total expenditure, expenditure shares and average prices across all households and split into three income groups. The critical aspect for analysis here is the expenditure share, where there is a marked income gradient with respect to expenditure on beverages and a slightly lower gradient for sweet snacks. The low-income group spend 14% of total drink expenditure on high and medium-sugar soft drinks, compared with 12% and 10% for the medium and high-income groups, respectively. Similarly, of the total food expenditure, sweet snacks represent 7%, 7% and 6% among the low, medium and high-income groups, respectively. Table 3 Mean total expenditure, expenditure shares and prices. The full results of the unconditional, uncompensated own-price and cross-price elasticities are presented in online supplementary appendix 2. In sum, the own-price elasticity for alcoholic drinks is higher than for all other categories; that is, alcoholic drinks are more sensitive to price change than any other category. Elasticities for all categories are inelastic (ie, absolute values smaller than 1); this means that there is a less than proportionate decrease in purchases following a price rise, indicating that price increases reduce demand for all products, although with differing strength of effect. This pattern is seen across all income groups, with relatively similar absolute elasticity values. Comparing SSB and sweet snack price sensitivity, the elasticity for SSBs is on average −0.77 (a 10% increase in price yields a 7.7% reduction in quantity purchased), whereas for chocolate and confectionery it is −0.74, biscuits −0.69 and cakes −0.66.
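As a quick arithmetic check on the elasticities just quoted, the implied quantity response to a price rise is a simple product; a minimal sketch (the dictionary reproduces the average own-price elasticities reported above):

```python
def quantity_change(elasticity, price_change_pct):
    """Percentage change in quantity purchased implied by an own-price
    elasticity for a given percentage price change (both in percent)."""
    return elasticity * price_change_pct

# Average own-price elasticities reported in the text.
elasticities = {"SSB": -0.77, "chocolate_confectionery": -0.74,
                "biscuits": -0.69, "cakes": -0.66}

# A 10% price rise on SSBs implies roughly a 7.7% fall in quantity purchased.
ssb_effect = quantity_change(elasticities["SSB"], 10)
```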
There is relatively little variance across income groups in the own-price elasticity for chocolate and confectionery, whereas for biscuits and cookies and cake-type snacks, low-income households are relatively more price responsive (−0.74 and −0.71, respectively, in comparison to −0.64 and −0.53 in the high-income group). Sweet snack foods, overall, thus appear to have only a slightly lower level of price sensitivity in comparison to SSBs. Of interest also is the impact on purchases across other aspects of the diet when the price of SSBs or sweet snacks increases. Figures 1-4 present the impacts on purchases as a result of a 1% increase in the price of each of the soft drink and snack categories to illustrate the variance in these effects (presenting only those effects where CIs exclude zero). This is presented for the total sample (figure 1) and then for each income group (figures 2-4). Figure 1 Change in demand (%) as a response to 1% price increase in soft drinks and sweet snacks (all households n=623 459). Figure 2 Change in demand (%) as a response to 1% price increase in soft drinks and sweet snacks (low-income households n=223 174). Figure 3 Change in demand (%) as a response to 1% price increase in soft drinks and sweet snacks (mid-income households n=305 841). Figure 4 Change in demand (%) as a response to 1% price increase in soft drinks and sweet snacks (high-income households n=94 444). In aggregate across all income groups (figure 1), clear differences arise from increasing the price of SSBs compared with sweet snacks.
Increases in the price of high-sugar soft drinks are associated with a decrease in purchases of medium-sugar soft drinks (2.5% reduction in purchase if the price of high-sugar drinks increases by 10%) but increased purchases of other soft drinks (1.1%) and chocolate and confectionery (0.08%). Increasing the price of diet/low-sugar drinks elicits greater reaction in other soft drink purchases (1.1% decrease in purchase of high-sugar drinks and 2.8% decrease in purchase of medium-sugar drinks for a 10% increase in price of low-sugar drinks), but also some increase in demand for cakes, biscuits and chocolate (1.3%–1.7%). Increasing the price of medium-sugar soft drinks, however, only reduces demand for other soft drinks (by 0.5%), low-sugar soft drinks (0.3%) and alcohol (0.3%) with no associations observed with demand for snacks. For sweet snacks, there are considerably more complementary effects, with significant reductions in other categories. A price increase for chocolate and confectionery items is associated with small but significant decreases across all soft drinks (reductions in purchase of 0.6%–0.8% for a 10% price increase) as well as biscuits and cakes (by 1.2%) and savoury snacks (1.6%). For biscuits, there are significant reductions in the demand for cakes (2.3%) as well as chocolate and confectionery (1.7%). Finally, for a price increase in cakes, there are smaller changes, with reductions in purchases of biscuits (by 0.7%), but increases in the purchase of chocolate and confectionery (0.7%) and alcohol (0.8%). Thus, increasing the price of chocolate snacks especially elicits a range of significant reductions in purchases across most categories. Although many of the associations at the aggregate level are replicated across income groups ( figures 2-4 ), there is some clear variance by income group. 
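The aggregate cross-price effects just described amount to multiplying a matrix of elasticities by a price change; a minimal sketch (only the two annotated entries come from the text above, and the structure is illustrative scaffolding, not the paper's full elasticity matrix):

```python
# Sketch of a cross-price elasticity matrix: entry [j][i] is the % change
# in demand for category j per 1% rise in the price of category i. The two
# non-zero entries reproduce figures quoted above (a 10% rise in high-sugar
# drink prices: -2.5% for medium-sugar drinks, +1.1% for other soft drinks).
cross_elasticity = {
    "medium_sugar_drinks": {"high_sugar_drinks": -0.25},
    "other_soft_drinks": {"high_sugar_drinks": 0.11},
}

def demand_response(elasticities, priced_good, price_change_pct):
    """Percentage change in demand for every category when one price moves."""
    return {good: cross.get(priced_good, 0.0) * price_change_pct
            for good, cross in elasticities.items()}

effects = demand_response(cross_elasticity, "high_sugar_drinks", 10)
# medium-sugar drink demand falls ~2.5%; other soft drinks rise ~1.1%
```

Note that in practice the elasticities are interpreted one price change at a time, as discussed later in the paper.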
An increase in the price of sugary drinks is associated with a reduction in medium-sugar drinks only within the low-income group (by 3% if price increases by 10%), while an increase in other soft drinks is observed in the medium and high-income groups (1%). Furthermore, in the high-income group, a higher SSB price leads to an increase in purchases of chocolate and confectionery (1%–2%) but also a reduction in purchases of cake-type snacks (2%, although all with relatively large CIs). Increasing the price of diet/low-sugar drinks seems to be associated with more substitute relationships, with significant increases in sweet snack demand (1%–2% increase for a price increase of 10%), especially for the low and medium-income groups. However, for increases in the price of sweet snacks the differences are more marked. Increasing the price of biscuits generates complementary reductions in the purchase of chocolate and confectionery for the low-income group (by 3% if price increases by 10%) and reductions in cake-type snacks for the middle-income group (3%), but no such reductions for the high-income group, where a reduction in medium-sugar drinks is observed instead (8%). While a relatively large change, the absolute change would be small as the share of medium-sugar drinks in overall expenditure is very small. Changes in the price of cake-type snacks have limited impact on other categories for those in the low-income group, but for the middle-income group they reduce the purchase of biscuits (1%) and are also associated with a slight increase in the purchase of alcohol (1%). For the high-income group this effect is even more pronounced, with increases in the purchase of alcohol (1%) and chocolate as substitutes (3%). Increasing the price of chocolate and confectionery has a similar effect across all income groups, with associated reductions in the purchase of most other food and drink categories (1%–2% if price increases by 10%).
Discussion The price elasticity of chocolate and confectionery was highest among the sweet snacks and is almost identical to that for SSBs (although both are lower than alcohol). Further, price increases in SSBs are associated with an increase in purchase of other soft drinks and chocolate and confectionery, whereas an increase in the price of chocolate is associated with a reduction in purchase of SSBs, as well as a range of other snacks. The differences across food categories and income groups indicate the complexity of estimating the impact of a single price increase. Nonetheless, it does suggest that policies to increase the price of sweet snacks could have a greater impact than that seen thus far for SSBs, not least because chocolate and confectionery alone contribute a similar quantity of sugar per person per day as SSBs in our sample. Moreover, this analysis suggests they have stronger associations with reductions in other categories of foods and SSBs (ie, complementary relationships), creating a cumulative positive multiplier effect. This appears to be most pronounced in the low and middle-income groups, as would be expected. The strength of these results suggests that further research is warranted to analyse the impact on diet composition and model the long-term impacts of such interventions on health outcomes. The extent to which a levy on sugary snacks could yield a lower consumption of sugar is, of course, dependent on the structure of the levy, but considering the relatively high sugar content of these foods (per 100 g), even a small levy based on sugar content is likely to change prices, assuming it is passed through. Whether a multitiered levy based on sugar content, such as that proposed for sugary drinks, would encourage reformulation is another question, since there are important differences in the ease of reformulation compared with SSBs and less is known about consumer acceptability of reformulated snack food products.
Overall, our estimates of price elasticity for foods and sugary beverages are consistent with the literature. Meta-analyses of price elasticity in broad food groups in high-income countries find these to range between −0.4 and −0.8, and that of sweets, confectionery and sweetened beverages to be −0.6. 7 20 Our estimates range between −0.6 and −0.8, but we also use greater disaggregation of food and beverage groups. Another study reports a metaestimate of the price elasticity of SSBs of −1.3, which is larger in absolute terms than our estimate of −0.77; however, that metaestimate includes studies from Mexico and Brazil, and price elasticity is dependent on income levels: lower income populations are likely to have greater responsiveness to price changes (ie, a more negative elasticity value) as they spend a greater proportion of their incomes on food and beverages. 21 Two studies from Chile also suggest somewhat more responsive demand (SSBs: −1.3 to −1.4, sweets and desserts −0.8 to −1.2). 22 23 Elsewhere, a US study found, as here, a substitution effect towards juice and milk and a reduction in diet beverages if the price of SSBs increases. This study also estimated the price elasticity of SSBs at −0.8 and a somewhat less price responsive demand for sweets and sugars than our analysis (−0.3). 24 It has to be noted, however, that we cannot impose a priori expectations that underlying preferences for foods and beverages are the same in different populations and over time, so some variance in elasticity estimates would be natural even if the methods applied by the studies are similar. There are, of course, limitations to the analysis presented here. The data, although large, representative and detailed, may be subject to under-recording; an issue present in all types of survey data. For instance, Kantar Worldpanel data appear to have lower levels of recorded alcohol expenditure than the Living Cost and Food survey.
19 The data also include foods and beverages purchased and brought home and thus exclude all purchases that are consumed outside the home, which are likely to be higher among more affluent households. Furthermore, the price responsiveness is based on price variations occurring in the market. This implies that any likely effect of the taxes inferred from these elasticities is subject to bias if the taxes, when implemented, have an impact on demand beyond the direct price change. Regardless of the models used, estimating demand requires a number of assumptions (see online supplementary appendix 1), which may have influenced the estimates. We prioritised an approach that allowed controlling for unobservable household heterogeneity, including in the preferences towards different types of drinks and snacks, while also adjusting for non-purchase and endogeneity issues. Overall, own-price elasticities are estimated with greater robustness as an a priori expectation of an inverse relationship with price exists and own-price changes have a noticeable impact on purchases. However, cross-price elasticities (substitution or complementarity effects) across products are harder to capture, as these are generally much smaller and the direction cannot be assumed a priori. 25 As most cross-price elasticities are estimated close to zero, even small changes in methods can possibly affect the direction and thus the interpretation of the effect. In addition, price elasticities are interpreted individually (ie, allowing one price change at a time) but categories defined in this study might be taxed simultaneously (eg, high and medium-sugar soft drinks), which means that the policy impact may vary. Perhaps more critically, although this analysis can highlight significant relationships between products purchased, it cannot explain why these relationships exist. This requires further primary research and research within population subgroups.
Conclusion Increasing the price of SSBs has become an accepted policy to reduce sugar intake. Analysis presented here based on data from Great Britain suggests that extending fiscal policies to include sweet snacks could lead to larger public health benefits, both directly by reducing purchasing and therefore consumption of these foods, and indirectly by reducing demand for other snack foods and indeed SSBs. Although some uncertainty remains, the associations observed in this analysis are sufficiently robust to suggest that policies—and research—concerning the use of fiscal measures to reduce intake of free sugars and improve diet quality should consider extending beyond SSBs to include the more frequently consumed sugar-based snacks including cakes, biscuits and, especially, chocolate and confectionery.
Taxing sweet snacks could lead to broader reductions in the amount of sugar purchased than similar increases in the price of sugar-sweetened beverages (SSBs), according to new research published in BMJ Open. The research team from the London School of Hygiene & Tropical Medicine, the University of Cambridge and the University of Oxford, estimate that adding 10% to the price of chocolate, confectionery, cakes and biscuits may reduce purchases by around 7%. This is a similar outcome to taxing SSBs, where previous research suggests a 10% price rise can reduce purchases by 6-8%. Crucially, however, the study found that taxing sweet snacks could have knock-on effects on the sales of other food items, reducing the purchase of soft-drinks (by 0.6-0.8%), biscuits and cakes (1.2%), and savoury snacks (1.6%). This study is an observational analysis and cannot explain why consumers change their purchasing behaviour but, although some uncertainty remains, the researchers say the associations observed suggest that relevant policies and future research should consider a broader range of fiscal measures to improve diet than is currently the case. Lead author Professor Richard Smith from the London School of Hygiene & Tropical Medicine said: "We know that increasing the price of sugar-sweetened beverages is likely to generate a small, but significant, reduction in their purchase. However, there has been little research on the impact that a similar price increase on other sweet foods such as chocolate, confectionery, cakes and biscuits could have on the purchase of sugar. This research suggests that taxing these sweet snacks could bring greater health gains and warrants detailed consideration." This study, funded by the National Institute for Health Research Policy Research Programme, is the first to provide a direct analysis of the relationship between price increases and consumer demand for snack foods across different income groups. 
Household expenditure on food and drink items was classified into 13 different groups and examined in a nationally representative sample of around 32,000 UK homes. Purchasing was examined overall and then compared across low-income, middle-income and high-income households. The data (from Kantar Worldpanel) covered a two-year period in 2012 and 2013, and provided complete details of each sales transaction, in addition to social and demographic information for each household. To estimate the change in purchasing, the researchers applied a specialised tool for studying consumer demand. The researchers found that increasing the price of sweet snacks led to a decrease in purchases and may have wider effects on purchasing patterns, which they suggest could potentially bring additional benefits to public health. For example, increasing the price of chocolate snacks was estimated to bring about significant reductions in purchases across most food categories, while a price increase on biscuits showed a potential reduction in the demand for cakes (2.3%) as well as chocolate and confectionery (1.7%). The potential effects of price increases were greatest in the low-income group. Increasing the price of biscuits was linked to a reduction in the purchase of chocolate and confectionery for the low-income group (3% if price increases by 10%). No such reductions were seen for the high-income group. Increasing the price of chocolate and confectionery was estimated to have a similar effect across all income groups. Co-author Professor Susan Jebb from the University of Oxford said: "It's impossible to study the direct effects of a tax on snack food on consumer behaviour until such policies are introduced, but these estimates show the likely impact of changes in the price. These snacks are high in sugar but often high in fat too and very energy dense, so their consumption can increase the risk of obesity. 
This research suggests that extending fiscal policies to include sweet snacks could be an important boost to public health, by reducing purchasing and hence consumption of these foods, particularly in low-income households." The authors acknowledge limitations of the study including the exclusion of purchases of foods and drink bought and consumed outside of homes (e.g. in restaurants) which they say are likely to be greater among higher income earners.
10.1136/bmjopen-2017-019788
Physics
Scientists create predictive model for hydrogen-nanovoid interaction in metals
Predictive model of hydrogen trapping and bubbling in nanovoids in bcc metals, Nature Materials (2019). DOI: 10.1038/s41563-019-0422-4 Journal information: Nature Materials
http://dx.doi.org/10.1038/s41563-019-0422-4
https://phys.org/news/2019-07-scientists-hydrogen-nanovoid-interaction-metals.html
Abstract The interplay between hydrogen and nanovoids, despite long being recognized as a central factor in hydrogen-induced damage in structural materials, remains poorly understood. Here, focusing on tungsten as a model body-centred cubic system, we explicitly demonstrate sequential adsorption of hydrogen adatoms on Wigner–Seitz squares of nanovoids with distinct energy levels. Interaction between hydrogen adatoms on nanovoid surfaces is shown to be dominated by pairwise power-law repulsion. We establish a predictive model for quantitative determination of the configurations and energetics of hydrogen adatoms in nanovoids. This model, combined with the equation of states of hydrogen gas, enables the prediction of hydrogen molecule formation in nanovoids. Multiscale simulations, performed based on our model, show good agreement with recent thermal desorption experiments. This work clarifies fundamental physics and provides a full-scale predictive model for hydrogen trapping and bubbling in nanovoids, offering long-sought mechanistic insights that are crucial for understanding hydrogen-induced damage in structural materials. Main Hydrogen is the most abundant element in the known universe and is a typical product of corrosion; it thus exists in virtually all service environments. The exposure of metallic materials to hydrogen-rich environments can result in a range of structural damage, including hydrogen-induced cracking 1 , 2 , surface blistering/flaking 3 , 4 , 5 and porosity/swelling 6 , 7 . This damage undesirably degrades the structural and mechanical integrity of materials 8 , 9 , often causing premature and even catastrophic failure 4 , 5 and thus jeopardizing the safety and efficiency of many applications. It is generally believed that such damage originates from interactions between hydrogen and various lattice defects. 
One key issue among those interactions is the interplay between the hydrogen and nanovoids; this promotes the formation and growth of pressurized hydrogen bubbles and consequently leads to experimentally observable failure in structural materials 3 , 4 , 5 , 6 , 7 , 8 , 9 , 10 . Researchers have long recognized H 2 pressure build-up in a bubble core following cumulative hydrogen adsorption on the bubble surface 11 . However, the accurate characterization of structures, energetics and hydrogen pressure in bubbles has remained lacking. The development of advanced micrographic techniques has enabled the direct observation of hydrogen bubble structures at scales of tens of nanometres 4 , 10 . However, atomic details of hydrogen bubble nucleation, growth and agglomeration processes are difficult or even impossible to observe in situ. An alternative method to study hydrogen bubble formation is to measure the hydrogen thermal desorption rates during isochronal annealing 12 , 13 , 14 , 15 , 16 , which provides energetic information about hydrogen detrapping from nanovoids in metals. Nevertheless, the hydrogen thermal desorption spectra usually relate to multiple types of crystal defect at different depths 17 and are thus difficult to interpret. Multiscale simulations provide a way to circumvent such limitations to reveal the missing details 18 , 19 , 20 . The reliability of multiscale results requires accurate characterization of the relevant atomistic behaviour. At the fundamental level, ab initio calculations based on density functional theory (DFT) have been widely used to study the atomistic behaviour of hydrogen in nanovoids 21 , 22 , 23 , 24 , 25 , 26 , 27 , 28 , 29 , 30 , 31 , 32 , 33 , 34 , 35 , 36 , 37 . 
Most previous DFT studies have focused only on the simplest case of monovacancies 21 , 22 , 23 , 24 , 25 , 26 , 27 , 28 , 29 , 30 and/or divacancies 31 , 32 , 33 , 34 , with hydrogen empirically placed at high symmetry sites, to investigate the energetic behaviour 21 , 22 , 23 , 24 , 25 , 26 , 27 , 28 , 29 , 30 , 31 , 32 , 33 , 34 , 35 . Recent DFT work 35 , 36 , 37 has tried to extend this method to hydrogen in larger nanovoids by decomposing the nanovoids’ inner surfaces into facets of free surfaces, and empirically saturating their high symmetry sites with hard-sphere-like H adatoms. However, this facet approach fails to resolve the inherent structural complexity associated with the highly curved edges and corners in nanovoids, and the hard-sphere approximation is too simplified to describe the H–H interaction, thus preventing accurate assessment of the energetics and structures of multiple H atoms in the nanovoids. As well as the above DFT endeavours, notable investigations have been carried out using large-scale classical molecular dynamics (MD) simulations. Vastly different predictions of hydrogen trapping behaviour were obtained depending on the empirical interatomic potentials used 38 , 39 . Furthermore, many of the available interatomic potentials were fitted as if simulating bulk conditions and cannot reproduce H 2 molecule formation in nanovoids 35 , 38 ; this inherently precludes the ability to predict the experimentally observed H 2 pressure build-up in metals 3 , 4 , 5 , 40 , 41 , 42 . Consequently, it remains difficult to determine which of the reported hydrogen trapping behaviours, if any, are physically meaningful and/or relevant to hydrogen bubble formation in experiments. The absence of physical models and predictability prevents an accurate analysis of hydrogen behaviour in nanovoids; obtaining mechanistic and multiscale insights into hydrogen bubbling behaviour in metals thus remains a formidable challenge.
The present study aims to directly address the aforementioned challenge. Based on comprehensive first-principles calculations on a model tungsten (W) system, we explicitly demonstrate a sequential adsorption of adatoms on square Wigner–Seitz surfaces of nanovoids with distinct energy levels, and propose a power law to describe the interaction among the H adatoms. A predictive model is established to determine the energetics and stable configurations of multiple H adatoms in nanovoids. Combined with the equation of state of pressurized H 2 , this model enables quantitative assessment of the competition between H adatoms and H 2 molecules, as well as the prediction of pressurized H 2 molecule formation in nanovoids. Multiscale simulations based on the model show very good agreement with recent deuterium (D) thermal desorption experiments. The generality of our approach and model is further confirmed by benchmark calculations on other typical body-centred cubic (bcc) metals (Mo, Cr and α-Fe). The present study clarifies the fundamental physical rules of hydrogen trapping and bubbling in general nanovoids in bcc metals, provides the long-sought predictability for guiding related experiments as well as benchmarks for developing new metal–H empirical interatomic potentials, and enables a critical step towards quantitative assessment of hydrogen-induced damage in materials. Structures of H adatoms trapped in nanovoids The general patterns of hydrogen trapping in nanovoids were examined by comprehensive ab initio MD simulations. H atoms were sequentially introduced into a nanovoid ( V m ) to form H–nanovoid clusters ( V m H n ), where m ≈ 1–8 and n ≥ 0 respectively denote the number of vacancies constituting the nanovoid and the number of H atoms enclosed therein. 
As with previous studies 36 , 37 , we observed H adatom adsorption on the surfaces of all nanovoids and H 2 molecule formation in the core of large nanovoids ( V m H n with m ≥ 3), evidenced by two distinct pairing states with H–H separation distances of ~1.94 and 0.75 Å, respectively (Supplementary Section 1 ). Figure 1a illustrates the spatial locations of regions of high H probability density (that is, the probability of finding a H atom in a unit volume) in nanovoids during hydrogen addition. Our results indicate that the H adatoms prefer to stay on the square surfaces of the Wigner–Seitz cells of nanovoids (in the following, these square units are referred to as Wigner–Seitz squares), in particular near their vertices (that is, the tetrahedral interstitial sites). As shown in Fig. 1a , each Wigner–Seitz square is enclosed by six metal sites. Other metal sites are more than 2.9 Å away from the H adatoms on the Wigner–Seitz square, significantly greater than the typical transition metal–H bond length (1.5–2 Å) 43 . It is thus reasonable to assume that the energetic behaviour of H adatoms on a Wigner–Seitz square is mainly influenced by these six metal sites enclosing the square 22 . For a Wigner–Seitz square on a nanovoid surface, note that some of these six metal sites are occupied by vacancies. Consequently, the complex nanovoid surface can be categorized into five different Wigner–Seitz squares, with i and j denoting the vacancy numbers of two types of neighbouring metal site (Fig. 1a ). Figure 1b presents an example of the average number of H atoms on each type of Wigner–Seitz square obtained from ab initio MD simulations. Despite certain temperature-induced fluctuations (at a relatively high temperature of 600 K), these show that H adatoms sequentially occupy different types of Wigner–Seitz square (for example, in the order ij = 22, 12, 11, 10 in Fig. 1b ) until all squares are filled.
After a certain prerequisite surface trapping, H 2 molecules begin to form in the core of large nanovoids ( V m with m ≥ 3) (Supplementary Section 1 ). This spatial preference clearly indicates an energy difference among H on different Wigner–Seitz squares and H 2 in the core. Fig. 1: Characterization of nanovoid surfaces and related hydrogen energy levels. a, H probability density isosurfaces (in green and yellow) in V 3 and V 4 nanovoids during hydrogen addition at 600 K. Blue and red spheres, metal and vacancy sites, respectively; black lines, edges of the Wigner–Seitz cells; i , j , vacancy numbers of two types of neighbouring metal site around a Wigner–Seitz square. W atoms (not shown) are fixed at their ideal bcc sites. b, Average H numbers on different Wigner–Seitz squares (as adatoms) or in the core (as molecules) in V 8 at 600 K. Shown in parentheses are the numbers of corresponding Wigner–Seitz squares on the V 8 surface (note, V 8 does not contain ij = 21 Wigner–Seitz squares; see Supplementary Table 1 ). Error bars show s.d. of the data; lines are guides to the eye. c, Trapping energy of a single H adatom (that is, E ( H 1 , V m H 1 ), see equation (1)) on all symmetrically irreducible Wigner–Seitz (WS) squares in V 1 –V 8 , distributed using a standard boxplot. Lines marked as ‘TIS H’ and ‘Vacuum H 2 ’ represent a H atom at the tetrahedral interstitial site in the bulk metal lattice and in a H 2 molecule in vacuum, respectively. Additional results for other nanovoids are provided in Supplementary Section 1 .
To quantitatively analyse the energetics of hydrogen, we define the trapping energy of a number k of H atoms ( k ≤ n ) in a V m H n cluster as

$$E\left( H_k, V_m H_n \right) = E^{\mathrm{tot}}\left( V_m H_n \right) - E^{\mathrm{tot}}\left( V_m H_{n-k} \right) - \frac{k}{2} E^{\mathrm{tot}}\left( H_2 \right) \quad (1)$$

where \(E^{\mathrm{tot}}\left( V_m H_{n-k} \right)\) and \(E^{\mathrm{tot}}\left( V_m H_n \right)\) are the total energies of the reference metal matrix containing the stable V m H n−k and V m H n clusters, respectively, before and after the introduction of k H atoms, and \(E^{\mathrm{tot}}\left( H_2 \right)\) is the total energy of an isolated H 2 molecule in vacuum (with a bond energy of 4.56 eV). First, we consider the case of nanovoids containing a single H adatom; the calculated trapping energies of hydrogen (that is, E ( H 1 , V m H 1 ) with m ≈ 1–8) are shown in Fig. 1c. We find that those trapping energies can be nicely categorized by the five different types of Wigner–Seitz square where the hydrogen resides after relaxation, accordingly falling into five distinct energy levels. These energy levels are insensitive to nanovoid size, which confirms the previous assumption that the hydrogen energetics on a Wigner–Seitz square are only affected by the six metal sites enclosing the square. Figure 1c shows the energetic preference of a H adatom on different Wigner–Seitz squares; this is in close accordance with the occupancy preference (Fig. 1b) obtained from the ab initio MD simulations. Interaction between H adatoms After clarifying the distinct energy level of a single H adatom in a nanovoid (that is, V m H 1 ), we further investigated the energetics of multiple H atoms in the nanovoids. In the following, we first assume that the H atoms stay in the form of adatoms for simplicity of analysis.
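The trapping energy of equation (1) is a simple difference of DFT total energies; a minimal sketch, with entirely hypothetical energy values (real inputs would come from relaxed DFT calculations):

```python
def trapping_energy(E_tot_after, E_tot_before, E_tot_H2, k=1):
    """Equation (1): trapping energy of k H atoms in a V_m H_n cluster,
    E(H_k, V_m H_n) = E_tot(V_m H_n) - E_tot(V_m H_{n-k}) - (k/2) E_tot(H2),
    with all inputs being DFT total energies in eV."""
    return E_tot_after - E_tot_before - 0.5 * k * E_tot_H2

# Entirely hypothetical total energies (eV) for adding one H to a nanovoid.
E_trap = trapping_energy(E_tot_after=-1053.20, E_tot_before=-1046.45,
                         E_tot_H2=-6.78, k=1)
# A negative value means trapping is favourable relative to gas-phase H2.
```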
The mutual interaction between H adatoms in nanovoids can be quantified by defining the H–H interaction energy:

$$E^{\mathrm{int}}\left( V_m H_n \right) = E^{\mathrm{tot}}\left( V_m H_n \right) + \left( n - 1 \right) E^{\mathrm{tot}}\left( V_m \right) - \sum\nolimits_{k=1}^{n} E^{\mathrm{tot}}\left( V_m H_1^{S_k} \right) \quad (2)$$

where S 1 , S 2 , …, S n indicate the sites of the n H adatoms in the V m H n cluster, and \(E^{\mathrm{tot}}\left( V_m H_1^{S_k} \right)\) represents the total energy of the reference metal matrix containing a \(V_m H_1^{S_k}\) cluster with the H adatom located at the S k site. To unravel the complex H–H interaction, we started by examining nanovoids containing two H adatoms, that is, V m H 2 . Figure 2a shows the corresponding pairwise H–H interaction E int ( V m H 2 ), from which we see that the H–H interaction is generally repulsive and decays rapidly as the H–H separation distance d increases. Intriguingly, we found that the pairwise H–H interaction on a nanovoid surface can be well described by a d −5 power law, similar to that for hydrogen on free metal surfaces 44 :

$$E^{\mathrm{int}}\left( V_m H_2 \right) = A_s d^{-5} \quad (3)$$

where A s = 3.19 eV Å 5 is a fitted constant. As a consequence of this strong repulsion, it is expected that H adatoms would shift towards tetrahedral interstitial sites on Wigner–Seitz squares to increase the H–H separation distances on adding hydrogen, consistent with Fig. 1a, which shows high H probability at tetrahedral interstitial sites. Also, a Wigner–Seitz square would prefer to accommodate only one H adatom until all other Wigner–Seitz squares are occupied by H adatom(s), which is in line with the results presented in Fig. 1b. Fig. 2: Energetics of H adatoms on nanovoid surfaces. a, Pairwise H–H interaction energies in different nanovoids.
Symbols are DFT data calculated using equation (2), solid lines are fits using the d^{−5} power function, where d is the separation distance between two H adatoms at different tetrahedral interstitial sites, averaged among all symmetrically irreducible H–H pairs, with error bars showing s.d. of the data. b, Interaction energies among multiple H adatoms in the most stable V_mH_n clusters as a function of the sum of pairwise d^{−5}. c, Comparison of overall trapping energies from DFT calculations (symbols) and those predicted by equation (5) (dashed line).

Energetics of H adatoms in nanovoids

For the general case of V_mH_n (with n ≥ 2), according to our definition of the H–H interaction energy (equation (2)), the overall trapping energy of the n H adatoms in V_mH_n can be rewritten as

$$E\left( H_n,V_mH_n \right) = \sum\nolimits_{k=1}^{n} E\left( H_1,V_mH_1^{S_k} \right) + E^{\mathrm{int}}\left( V_mH_n \right) \qquad (4)$$

If we assume that the H–H interaction, E^int(V_mH_n), remains pairwise in nature and is described by the same d^{−5} law as in equation (3), we can then represent the overall trapping energy as

$$E\left( H_n,V_mH_n \right) \cong \sum\nolimits_{k=1}^{n} E_k^{ij} + A_s\sum\nolimits_{k<l}^{n} d_{kl}^{-5} \qquad (5)$$

where d_{kl} denotes the distance between two H adatoms at the S_k and S_l sites, respectively. As previously demonstrated (Fig. 1c), \(E\left( H_1,V_mH_1^{S_k} \right)\) in equation (4) conforms to one of the five energy levels, being directly prescribed by the Wigner–Seitz square where site S_k is located, denoted as \(E_k^{ij}\). To verify equation (5), we calculated the trapping energies for V_mH_n clusters (note that these clusters contain only H adatoms with no H_2 molecule formation, in order to limit the focus to H adatoms).
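Under the d^{−5} law, equations (3) and (5) reduce to elementary sums over adatom pairs. A minimal sketch follows; the five Wigner–Seitz energy levels E_k^{ij} are read off Fig. 1c and not tabulated in the text, so the site levels and coordinates below are hypothetical.

```python
from itertools import combinations
from math import dist

A_S = 3.19  # eV·Å^5, fitted constant of the pairwise d^-5 law, equation (3)

def pair_interaction(d: float) -> float:
    """Equation (3): repulsion between two H adatoms separated by d (Å), in eV."""
    return A_S * d ** -5

def overall_trapping_energy(site_levels, positions):
    """Equation (5): sum of single-adatom levels E_k^ij plus all pairwise
    d^-5 repulsions. site_levels holds each adatom's Wigner–Seitz-square
    energy level (eV), supplied by the caller; positions are (x, y, z) in Å."""
    single = sum(site_levels)
    pairs = sum(pair_interaction(dist(p, q))
                for p, q in combinations(positions, 2))
    return single + pairs

# Two adatoms 2 Å apart on hypothetical -0.3 eV squares:
print(overall_trapping_energy([-0.3, -0.3], [(0, 0, 0), (2.0, 0, 0)]))  # -0.6 + 3.19/32
```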
We screened different candidate structures: (1) selected randomly from the ab initio MD trajectories (15 structures for each V_mH_n); (2) constructed by manually adding H on the Wigner–Seitz squares with the lowest energy level; and (3) constructed by adjusting the H positions to minimize the trapping energy given by equation (5). In most cases, we found that minimization of equation (5) identified the structures with the lowest energies. In Fig. 2b we also show that the interaction energy among multiple H adatoms can be estimated well by summing up pairwise d^{−5} interactions, that is, the adatom interaction term in equation (5). More importantly, as demonstrated in Fig. 2c, the overall trapping energies E(H_n, V_mH_n) predicted by equation (5) are in close agreement with the DFT data. These results clearly show that equation (5) captures the physical essence of multiple H adsorption on nanovoid surfaces, providing a simple but effective framework for determining the stable structures of multiple H adatoms in nanovoids. Equation (5) allows us to predict hydrogen trapping energies for any V_mH_n cluster based on the H structure. However, DFT relaxations are still indispensable for determining accurate H adatom sites and their mutual separations, which makes equation (5) less practical. To simplify the problem, yet without losing physical generality, two approximations are introduced here based on the DFT results: (1) H adatoms sequentially fill the Wigner–Seitz squares with the lowest energy, and are uniformly distributed on the surface to maximize H–H distances; (2) each H adatom has six nearest H neighbours (that is, a close-packed distribution), and interactions between non-nearest adatoms are neglected considering the rapid d^{−5} decay. In this way, the nearest H–H distance is \(d = \left( \frac{2a}{\sqrt{3}n} \right)^{0.5}\), where a is the surface area of the nanovoid.
Consequently, the multiple H–H interaction energy, that is, the second term on the right-hand side of equation (4), can be simplified to

$$E^{\mathrm{int}}\left( V_mH_n \right) \cong \frac{1}{2}A_s \cdot 6n\left( \frac{\sqrt{3}n}{2a} \right)^{2.5} \qquad (6)$$

According to equations (1), (5) and (6), the trapping energy of the nth H adatom in a V_mH_n cluster is given by

$$E\left( H_1,V_mH_n \right) = E_k^{ij} + \frac{\partial E^{\mathrm{int}}\left( V_mH_n \right)}{\partial n} \cong E_k^{ij} + 7.3A_s\left( \frac{n}{a} \right)^{2.5} \qquad (7)$$

Additional details related to the above are provided in Supplementary Section 2. Equation (7) provides a quantitative prediction of the trapping energies of H adatoms from knowledge of the surface H density, n/a, alone. The predictions from equation (7) in Fig. 3 (blue lines) show consistent agreement with the DFT-calculated trapping energies. This confirms the validity of the two approximations in describing the general behaviour of H adatoms in nanovoids. One particular observation in Fig. 3 is that the hydrogen trapping energy generally exhibits a combination of stepwise increments and gradual climbing. The stepwise growth is related to the distinct energy levels (that is, different \(E_n^{ij}\)), corresponding to H occupying different types of Wigner–Seitz square (Fig. 1). Meanwhile, the gradual climbing in the trapping energy results from the decreasing H–H distance and thus the increasing H–H repulsion. As H adatoms continue to populate the nanovoid, the trapping energy will eventually reach the value of an interstitial H in a bulk metal lattice, \(E_{\mathrm{Bulk}}^{\mathrm{H}}\) = 0.92 eV, at which point the nanovoid surface is fully saturated, namely the surface H density reaches its maximum. Under such a condition, \(E_n^{ij}\) assumes the highest energy level, that is, \(E_n^{ij=10}\) (Fig. 1c), and the maximum density can be calculated (by setting \(E\left( H_1,V_mH_n \right) = E_{\mathrm{Bulk}}^{\mathrm{H}}\)) to be 0.304 H Å^{−2}, corresponding to a nearest-neighbour H–H distance of 1.95 Å, in line with the H–H pairing state (1.94 Å, Supplementary Section 1) observed in ab initio MD simulations.

Fig. 3: Comparison between model predictions and DFT results. a–e, H trapping energy as a function of total H number in V_1–V_4 (a–d) and V_8 (e) nanovoids. DFT results are shown using open (for H adatoms) and filled (when forming a new H_2 molecule) symbols. Blue lines, model predictions given by equation (7); red lines, predictions given by equation (9). Lines marked ‘TIS H’ and ‘Vacuum H_2’ represent a H atom at the tetrahedral interstitial site in a bulk metal lattice and in a H_2 molecule in vacuum, respectively.

H_2 molecule formation in nanovoids

In our analysis above, H atoms were assumed to stay in nanovoids in the form of adatoms. However, as seen in Fig. 3, with the continuous introduction of H atoms into a nanovoid, the trapping energy of the H adatom becomes positive, suggesting the possibility of H_2 molecule formation. Indeed, H_2 molecule formation has been noted in DFT calculations in large nanovoids (shown as filled symbols in Fig. 3). In Fig. 3, we also note that, along with the formation of H_2 molecules, a notable deviation appears between the H trapping energies predicted by equation (7) and the DFT data. This deviation is understandable, as equation (7) is only applicable for describing H adatoms. To remedy the discrepancy, we need to account for H_2 molecule formation in our model.
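Two of the numbers quoted above can be reproduced from the close-packed approximation: the spacing relation d = (2a/(√3 n))^0.5 inverts to a surface density n/a = 2/(√3 d²), which at the pairing distance of 1.95 Å gives the quoted saturation density of ≈0.304 H Å^−2. A sketch, with A_s from the text; any E^{ij} level supplied by the caller is hypothetical:

```python
from math import sqrt

A_S = 3.19  # eV·Å^5, equation (3)

def nth_trapping_energy(E_ij: float, n: float, a: float) -> float:
    """Equation (7): trapping energy of the n-th H adatom at surface density n/a.
    E_ij is the Wigner–Seitz-square energy level currently being filled
    (one of the five levels of Fig. 1c; supplied by the caller)."""
    return E_ij + 7.3 * A_S * (n / a) ** 2.5

def surface_density_from_spacing(d: float) -> float:
    """Invert the close-packed relation d = (2a / (sqrt(3) n))**0.5
    to the surface density n/a = 2 / (sqrt(3) d**2), in H per Å^2."""
    return 2.0 / (sqrt(3.0) * d * d)

# Saturation check from the text: the H–H pairing distance of 1.95 Å should
# correspond to the quoted maximum surface density of ~0.304 H Å^-2.
print(round(surface_density_from_spacing(1.95), 3))  # 0.304
```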
H_2 molecules in the nanovoid core can be characterized by the equation of state (for details see Supplementary Section 2)

$$p = A_{\mathrm{c}}\left( \frac{n_{\mathrm{c}}}{v} \right)^{3}$$

where p is the pressure, v is the core volume of the nanovoid, n_c is twice the number of H_2 molecules present in the nanovoid and A_c = 8.01 eV Å^6 is a constant that fits well to both the experimental 45,46 and our DFT results. The trapping energy corresponding to molecular hydrogen in the nanovoid core can be expressed as

$$E\left( H_1,V_mH_{n_{\mathrm{c}}} \right) = \int_0^p \left( \frac{\partial v}{\partial n_{\mathrm{c}}} \right)_P \mathrm{d}P = \frac{3}{2}A_{\mathrm{c}}\left( \frac{n_{\mathrm{c}}}{v} \right)^{2} \qquad (8)$$

which depends on the volumetric H density, n_c/v, in the nanovoid core. Denoting the number of H adatoms on the nanovoid surface as n_s, the total number of H atoms in the nanovoid is n = n_s + n_c. Combining equations (7) and (8), we can determine the partitioning of the hydrogen into the adatom and molecule states from

$$E\left( H_1,V_mH_n \right) = \frac{3}{2}A_{\mathrm{c}}\left( \frac{n_{\mathrm{c}}}{v} \right)^{2} = E_{n_{\mathrm{s}}}^{ij} + 7.3A_{\mathrm{s}}\left( \frac{n_{\mathrm{s}}}{a} \right)^{2.5} \qquad (9)$$

From the above, the further evolution of the hydrogen trapping energy in the event of H_2 molecule formation can be obtained, as illustrated by the red curves in Fig. 3. This new model prediction yields very good agreement with the DFT data in the regime where the hydrogen trapping energy is positive.
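Equation (9) is a balance condition: given a total n, it fixes how many H atoms sit on the surface (n_s) versus in the core (n_c). Because the core term rises with n_c while the surface term falls as the surface is depleted, the split can be found by bisection. A sketch with hypothetical geometry (a, v) and a single representative surface level E^{ij}, whereas the paper fills five distinct levels sequentially:

```python
A_S, A_C = 3.19, 8.01  # eV·Å^5 (surface d^-5 law) and eV·Å^6 (core equation of state)

def core_energy(n_c: float, v: float) -> float:
    """Equation (8): trapping energy of molecular H in the core, (3/2) A_c (n_c/v)^2."""
    return 1.5 * A_C * (n_c / v) ** 2

def surface_energy(n_s: float, a: float, E_ij: float) -> float:
    """Equation (7): trapping energy of the next H adatom on the surface."""
    return E_ij + 7.3 * A_S * (n_s / a) ** 2.5

def partition(n: float, a: float, v: float, E_ij: float, tol: float = 1e-9):
    """Equation (9): split n = n_s + n_c so that surface and core trapping
    energies balance. The core side rises with n_c and the surface side falls,
    so bisection on n_c converges to the crossing point."""
    lo, hi = 0.0, n
    while hi - lo > tol:
        n_c = 0.5 * (lo + hi)
        if core_energy(n_c, v) > surface_energy(n - n_c, a, E_ij):
            hi = n_c  # molecules too costly: move H back to the surface
        else:
            lo = n_c
    return n - hi, hi  # (n_s, n_c)

# Hypothetical nanovoid with surface area a = 100 Å^2, core volume v = 50 Å^3,
# a single surface level E_ij = 0.2 eV, and 60 H atoms in total:
n_s, n_c = partition(60.0, 100.0, 50.0, 0.2)
```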
Moreover, the above framework can be readily extended to hydrogen bubbling in a nanovoid at finite temperatures and under varying chemical environments, by equating the chemical potential of hydrogen in the nanovoid core, \(\mu_{\mathrm{core}}^{\mathrm{H}}\), with that of bulk hydrogen, \(\mu_{\mathrm{B}}^{\mathrm{H}}\):

$$\mu_{\mathrm{core}}^{\mathrm{H}} = \int_0^p \left( \frac{\partial v}{\partial n_{\mathrm{c}}} \right)_P \mathrm{d}P = \mu_{\mathrm{B}}^{\mathrm{H}} = E_{\mathrm{Bulk}}^{\mathrm{H}} + k_{\mathrm{B}}T\,\mathrm{ln}\left( \frac{C_{\mathrm{H}}}{1-C_{\mathrm{H}}} \right) \qquad (10)$$

where k_B is Boltzmann's constant, T denotes the temperature, C_H is the bulk H concentration and \(E_{\mathrm{Bulk}}^{\mathrm{H}}\) (= 0.92 eV) is the trapping energy of the H interstitial in a bulk W lattice. Note that the equation of state for high-pressure H_2 is relatively insensitive to temperature 47. Consequently, combining equations (8) and (10), the hydrogen bubbling pressure under thermodynamic equilibrium can be obtained as

$$p = \frac{1}{\sqrt{A_{\mathrm{c}}}}\left[ \frac{2}{3}\left( E_{\mathrm{Bulk}}^{\mathrm{H}} + k_{\mathrm{B}}T\,\mathrm{ln}\left( \frac{C_{\mathrm{H}}}{1-C_{\mathrm{H}}} \right) \right) \right]^{\frac{3}{2}} \qquad (11)$$

which depends solely on the temperature, the concentration and the energy state of hydrogen in the bulk lattice. With high hydrogen concentration and low temperature, the bubble pressure can be high enough to induce spontaneous bubble growth via mechanisms such as loop-punching (Supplementary Section 4).

Model verification and application

The good agreement between the model predictions and the DFT data (Figs. 2 and 3) provides evidence that our model captures the fundamental physics underlying hydrogen trapping and interaction in nanovoids.
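Equation (11) gives the equilibrium bubble pressure in closed form from A_c, E_Bulk^H, T and C_H alone. A direct transcription; the temperature and concentration in the example are illustrative, not taken from the paper:

```python
from math import log, sqrt

A_C = 8.01         # eV·Å^6, core equation-of-state constant
E_BULK_H = 0.92    # eV, trapping energy of interstitial H in a bulk W lattice
K_B = 8.617333e-5  # eV/K, Boltzmann constant

def bubble_pressure(T: float, C_H: float) -> float:
    """Equation (11): equilibrium H2 bubbling pressure in eV Å^-3
    (1 eV Å^-3 ≈ 160.2 GPa). Valid while the bracketed chemical potential
    stays positive, i.e. while it is still favourable for hydrogen to
    enter the void from the bulk lattice."""
    mu = E_BULK_H + K_B * T * log(C_H / (1.0 - C_H))
    return ((2.0 / 3.0) * mu) ** 1.5 / sqrt(A_C)

# Illustrative conditions (not from the paper): 300 K, bulk H fraction 1e-4.
p = bubble_pressure(300.0, 1e-4)
print(p * 160.2, "GPa")  # pressures of order ten GPa
```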
It is important to note that the model parameters \(E_n^{ij}\), A_s and A_c are not sensitive to the size or configuration of the nanovoid, and thus the model is readily applicable to larger systems and for examining hydrogen bubble formation. Furthermore, preliminary calculations have been performed for Mo, Cr and α-Fe systems and similar hydrogen behaviour has been demonstrated (Supplementary Fig. 8), confirming the generality of our model for application to other bcc metals. The proposed model accurately predicts hydrogen trapping configurations, hydrogen energetics and H_2 molecule formation in nanovoids from a finite set of DFT calculations. Such predictions may serve as critical benchmarks for developing new metal–H empirical interatomic potentials for classical MD simulations (Supplementary Section 4). Meanwhile, they provide atomic-precision data to feed into large-scale methods such as kinetic Monte Carlo simulations 48, enabling a multiscale approach that directly bridges atomistic data with macroscopic experiments. In the following, we present one typical example comparing recent deuterium thermal desorption spectroscopy (TDS) results 12,13, which experimentally determine the energetics of deuterium trapping in nanovoids, with our predictive-model-based multiscale simulations. In these two TDS experiments, irradiation-damaged W samples were treated by annealing at 550 and 800 K, respectively. With vacancies being mobile only above 550 K (refs. 49,50), it is postulated 12,13 that these two annealing temperatures render the irradiation-induced defects in the W samples as small and large nanovoids, respectively. These annealed samples were subsequently implanted with low-energy D ions, followed by TDS measurements at different heating rates. The TDS spectra corresponding to radiation defects (reproduced from refs. 12,13) are shown in Fig. 4.

Fig. 4: Thermal desorption spectra of deuterium from W.
a,b, Samples were irradiated at 10 keV per D ion to a fluence of 3 × 10^19 D m^−2, annealed at 550 K (a) and 800 K (b) for 5 min to render small and large nanovoids, implanted with 0.67 keV per D ions to a fluence of 10^19 D m^−2, and then subjected to TDS at different heating rates. Lines are multiscale modelling results and symbols are experimental data from refs. 12 (open) and 13 (filled).

Combining our predictive model, primary irradiation damage simulations and an object kinetic Monte Carlo method, we carried out multiscale simulations to reproduce and interpret the experimental TDS results (for details see Supplementary Section 4). To make a direct comparison, the same irradiation and annealing conditions as those used in the experimental studies 12,13 were used in the simulations. As seen in Fig. 4, the simulated TDS curves (lines) match the experimental results (symbols) well over the temperature range in which the H–nanovoid interaction dominates. In particular, the simulated desorption peaks in Fig. 4a correspond to D release from small nanovoids (mostly V_1–V_2), while the desorption peaks in Fig. 4b are attributed to large nanovoids (mostly V_6–V_15), confirming the experimental speculations 12,13. Aside from some minor discrepancies at low temperatures, which may arise from other defects such as dislocations or grain boundaries 12,13, our simulations accurately reproduce the experimental observations. In summary, the present study explicitly demonstrates sequential adsorption of H adatoms on Wigner–Seitz squares of nanovoids with distinct energy levels, based on comprehensive first-principles calculations using W as a representative bcc system. A power law has been demonstrated and verified to accurately describe the interaction between H adatoms within nanovoids. This study clarifies the fundamental physical rules governing hydrogen trapping, interaction and bubbling in nanovoids in bcc metals.
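The shift of desorption peaks with heating rate that underlies the TDS comparison in Fig. 4 can be illustrated with a toy model: plain first-order desorption kinetics with a hypothetical detrapping barrier and attempt frequency, not the OKMC machinery used in the paper.

```python
from math import exp

K_B = 8.617333e-5  # eV/K

def tds_peak_temperature(E_trap: float, nu: float = 1e13, beta: float = 1.0,
                         T0: float = 300.0, dt: float = 1e-3) -> float:
    """Temperature of maximum desorption rate for first-order kinetics
    d(theta)/dt = -nu * theta * exp(-E_trap / (kB T)) under a linear ramp
    T = T0 + beta * t (beta in K/s). Forward-Euler integration; E_trap and
    nu here are illustrative, not fitted to the W/D data."""
    theta, T = 1.0, T0
    peak_T, peak_rate = T0, 0.0
    while theta > 1e-6 and T < 2000.0:
        rate = nu * theta * exp(-E_trap / (K_B * T))
        if rate > peak_rate:
            peak_rate, peak_T = rate, T
        theta -= rate * dt
        T += beta * dt
    return peak_T

# Faster ramps push the desorption peak to higher temperature, which is why
# measuring TDS at several heating rates constrains the detrapping energy:
print(tds_peak_temperature(1.5, beta=1.0), tds_peak_temperature(1.5, beta=4.0))
```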
A comprehensive modelling framework has been established to enable accurate predictions of hydrogen energetics and H_2 molecule formation in nanovoids. Our study offers the long-sought mechanistic insights crucial for understanding hydrogen-induced damage in structural materials, and provides essential predictive tools for developing new H–metal interatomic potentials and for multiscale modelling of hydrogen bubble nucleation and growth.

Methods

First-principles DFT calculations

First-principles DFT calculations were performed using the Vienna ab initio simulation package (VASP) 51,52 with Blöchl's projector augmented wave (PAW) potential method 53. All the 5d and 6s electrons of the metal and the 1s electron of H were treated as valence electrons. The exchange-correlation energy functional was described with the generalized gradient approximation (GGA) as parameterized by Perdew–Wang (PW91) 54,55. A super-cell containing 128 lattice points (a 4 × 4 × 4 duplicate of a conventional bcc unit cell) was used in the calculations. Nanovoids were constructed with stable structures obtained from a previous DFT study 56. Relaxation of the atomic positions and super-cell shapes and sizes was performed for all calculations, except for those examining the pairwise H–H interaction, where all atoms were fixed to avoid adjustment of the H–H separation during relaxation. The convergence criteria for energy and atomic force were set to 10^−6 eV and 0.01 eV Å^−1, respectively. A 500 eV plane-wave cutoff and a 3 × 3 × 3 k-point grid obtained using the Monkhorst–Pack method 51 were used. Benchmark calculations with increased super-cell size, cutoff energy and k-point density, as well as with zero-point energy correction for H, were carried out, and a negligible influence on our results was found (Supplementary Section 5).
Moreover, a benchmark investigation of hydrogen in meta-stable nanovoid configurations was also performed and very similar behaviour (energy levels and H–H interaction) was verified (Supplementary Section 5).

Ab initio MD simulations

Ab initio MD simulations were performed in the canonical (NVT) ensemble with a Nose–Hoover thermostat using the VASP code. A lower cutoff energy (350 eV) and a 1 × 1 × 1 k-point grid were adopted. The Verlet algorithm was used for integration of Newton's equations of motion. All systems were simulated at 600 K with a time step of 1 fs. H atoms were randomly added into the nanovoids at a rate of 1 atom per 5 ps. To avoid any influence of the initial hydrogen addition positions, the results for the first 2 ps after each H addition were excluded from the spatial distribution analysis.

Multiscale simulations

Thermal desorption spectra of the hydrogen isotopes were simulated by a quantitative multiscale modelling approach, which incorporates atomistic-scale H–nanovoid interactions, irradiation-induced primary damage and large-scale object kinetic Monte Carlo (OKMC) simulations. General algorithms of the OKMC method and parameterizations of defects are described in detail elsewhere 28,48,57,58. A 60 × 60 × 500 nm^3 box was used in all OKMC simulations, with periodic boundary conditions applied along the first two dimensions and hydrogen allowed to desorb at the surface of the third dimension. Interactions between hydrogen and nanovoids were parameterized using the predictive model conveyed by equation (9). Primary irradiation damage databases were tabulated using the binary collision Monte Carlo code IM3D 59, and invoked during OKMC simulations of D. The kinetic energies of D, the irradiation fluxes, the irradiation time and the temperatures were calibrated according to the corresponding experimental conditions 12.
Data availability

The data generated and/or analysed within the current study will be made available upon reasonable request to the authors.

Code availability

The code for the object kinetic Monte Carlo simulations will be made available upon reasonable request to the authors.
A five-year collaborative study by Chinese and Canadian scientists has produced a theoretical model, via computer simulation, to predict the properties of hydrogen nanobubbles in metal. The international team was composed of Chinese scientists from the Institute of Solid State Physics of the Hefei Institute of Physical Science along with their Canadian partners from McGill University. The results will be published in Nature Materials on July 15. The researchers believe their study may enable quantitative understanding and evaluation of hydrogen-induced damage in hydrogen-rich environments such as fusion reactor cores. Hydrogen, the most abundant element in the known universe, is a highly anticipated fuel for fusion reactions and thus an important focus of study. In certain hydrogen-enriched environments, e.g., the tungsten armor in the core of a fusion reactor, metallic material may be seriously and irreparably damaged by extensive exposure to hydrogen. Being the smallest element, hydrogen can easily penetrate metal surfaces through gaps between metal atoms. These hydrogen atoms can then be trapped inside nanoscale voids ("nanovoids") in metals, created either during manufacturing or by neutron irradiation in the fusion reactor. The resulting nanobubbles grow under internal hydrogen pressure and ultimately cause the metal to fail. Not surprisingly, the interplay between hydrogen and the nanovoids that promote the formation and growth of bubbles is considered the key to such failure. Yet basic properties of hydrogen nanobubbles, such as their number and how strongly the hydrogen is trapped inside them, have largely been unknown. Furthermore, available experimental techniques make it practically impossible to directly observe nanoscale hydrogen bubbles. To tackle this problem, the research team proposed instead using computer simulations based on fundamental quantum mechanics.
However, the structural complexity of hydrogen nanobubbles made numerical simulation extremely complicated. As a result, the researchers needed five years to produce enough computer simulations to answer their questions. In the end, however, they discovered that hydrogen trapping behavior in nanovoids—although apparently complicated—actually follows simple rules. First, individual hydrogen atoms are adsorbed, in a mutually exclusive way, by the inner surface of nanovoids with distinct energy levels. Second, after a period of surface adsorption, hydrogen is pushed—due to limited space—to the nanovoid core where molecular hydrogen gas then accumulates. Following these rules, the team created a model that accurately predicts properties of hydrogen nanobubbles and accords well with recent experimental observations. Just as hydrogen fills nanovoids in metals, this research fills a long-standing void in understanding how hydrogen nanobubbles form in metals. The model provides a powerful tool for evaluating hydrogen-induced damage in fusion reactors, thus paving the way for harvesting fusion energy in the future.
10.1038/s41563-019-0422-4
Other
Evolutionary hot start, followed by cold shock
Martin Dohrmann et al. Dating early animal evolution using phylogenomic data, Scientific Reports (2017). DOI: 10.1038/s41598-017-03791-w Journal information: Scientific Reports
http://dx.doi.org/10.1038/s41598-017-03791-w
https://phys.org/news/2017-06-evolutionary-hot-cold.html
Abstract

Information about the geological timeframe during which animals radiated into their major subclades is crucial to understanding early animal ecology and evolution. Unfortunately, the pre-Cambrian fossil record is sparse and its interpretation controversial. Relaxed molecular-clock methods provide an alternative means of estimating the timing of cladogenesis deep in the metazoan tree of life. So far, thorough molecular clock studies focusing specifically on Metazoa as a whole have been based on relatively small datasets or incomplete representation of the main non-bilaterian lineages (such as sponges and ctenophores), which are fundamental for understanding early metazoan evolution. Here, we use a previously published phylogenomic dataset that includes a fair sampling of all relevant groups to estimate the timing of early animal evolution with Bayesian relaxed-clock methods. According to our results, all non-bilaterian phyla, as well as total-group Bilateria, evolved in an ancient radiation during a geologically relatively short time span, before the onset of long-term global glaciations (“Snowball Earth”; ~720–635 Ma). Importantly, this result appears robust to alterations of a number of important analytical variables, such as models of among-lineage rate variation and sets of fossil calibrations used.

Introduction

Our understanding of the origin and evolution of animals (Metazoa) and their major subgroups would be greatly enlightened by better knowledge about the timing of diversification early in their history. Metazoa comprises five main lineages (see Dohrmann & Wörheide 1 for a review of their phylogenetic relationships): the phyla Porifera (sponges), Placozoa (Trichoplax), Ctenophora (comb jellies), Cnidaria (corals, jellyfish, and their kin), and Bilateria, the mega-diverse group containing all the remaining 30 or so phyla.
Most animal phyla appear in the fossil record in a relatively short period during the Cambrian (541–485 million years ago [Ma]), the so-called “Cambrian explosion” 2,3,4. However, these Cambrian fossils already exhibit complex morphologies, supporting the idea that animals must have evolved some time during the Proterozoic (2500–541 Ma). Indeed, it is now widely agreed that animals existed at least during the later stages of the preceding Ediacaran period (635–541 Ma), as evidenced by trace fossils and some body fossils with likely metazoan affinities 5,6,7. Unfortunately, the older fossil record is relatively scarce with respect to possible animals, and most findings remain controversial 8, which prevents reliable inferences about the origination times of the five main lineages. Therefore, molecular palaeobiological approaches 9,10 might aid in deciphering early animal evolution by means of divergence time estimation from genomic data of extant species (molecular clock studies). Although molecular clock studies have generally supported a deep pre-Ediacaran origin of animals and many of their major subgroups 11,12,13,14, estimates of the precise timing vary widely 8,15. Moreover, the majority of studies have either focused on estimating the timescale of eukaryote evolution, thus including only a few animals as representatives of just one lineage within the Opisthokonta supergroup 13,16, or severely undersampled the full diversity of Metazoa because they were mainly interested in the evolution of Bilateria 14,17. However, reliably estimating the timing of early metazoan evolution requires that all non-bilaterian lineages be adequately represented. In a highly cited molecular clock study, Erwin et al. 12 included a variety of non-bilaterians – 20 sponges, Trichoplax, and six cnidarians. However, they did not include Ctenophora and sampled only three of the four classes of Porifera (Hexactinellida was excluded).
Furthermore, the dataset they used for dating their trees is comparatively small, being composed of sequence data from seven proteins only (~2000 amino acid positions). Thus, the results of Erwin et al. 12 remain to be tested with a more complete taxon sampling and phylogenomic-scale data. Philippe et al. 18 used a 128-protein phylogenomic dataset (30,257 amino acid positions) to reconstruct animal phylogeny. This dataset includes all non-bilaterian phyla and classes, a selected sample of representatives from all three bilaterian supergroups (Deuterostomia, Ecdysozoa, Lophotrochozoa), as well as a suite of non-metazoan opisthokonts as outgroups. Because it supports a well-resolved phylogeny, this dataset appears well suited for investigating the timing of deep metazoan phylogeny. However, in-depth molecular clock studies utilizing this dataset have thus far been lacking. Here, we present molecular clock analyses of the Philippe et al. 18 dataset under a variety of analytical conditions. Our results confirm previous molecular-clock estimates of an early-mid Neoproterozoic (Tonian; 1000–720 Ma) origin of crown-group Metazoa, before the onset of long-lasting global glaciations, the Sturtian and Marinoan “Snowball Earths” of the Cryogenian (720–635 Ma 19,20,21). In contrast to previous studies, however, our results suggest that not only crown-group Metazoa, but all non-bilaterian phyla (and at least the stem-lineages of all classes), as well as total-group Bilateria, originated before the Sturtian, probably within a geologically relatively short time span. Importantly, this main conclusion is robust to a number of major assumptions that can drastically influence the outcome of molecular clock studies.

Methods

We conducted a range of relaxed molecular-clock analyses using the Bayesian Markov Chain Monte Carlo (MCMC) implementation PhyloBayes 22.
In order to assess the robustness of our age estimates to a number of prior assumptions, we ran analyses under (1) an autocorrelated and an uncorrelated relaxed molecular-clock model – the former assuming that the rate of molecular evolution of a lineage is correlated with the rate of its mother lineage, the latter allowing completely independent rates from lineage to lineage; (2) three different sets of fossil calibrations for internal nodes – one aiming at a maximum breadth of calibrations (Set A), another excluding some potentially controversial fossils (Set B), and a third adopted from Erwin et al. 12 (Set C); (3) different prior assumptions about the age of the root of the phylogeny (= origin of crown-group Opisthokonta) – 800, 1000, and 1360 Ma; and (4) three different alternative assumptions about the phylogenetic placement of Ctenophora (sister to Cnidaria, sister to the remaining Metazoa, and sister to Placozoa + Cnidaria + Bilateria), which is currently a matter of debate 18,23,24,25,26. More detailed descriptions of these, as well as some additional analyses, are given in the Supplementary Material.

Results

Influence of molecular-clock model

Mean age estimates for the nodes of greatest interest were generally older, sometimes considerably so, under the uncorrelated model (Fig. 1). The only exceptions were the crown-groups of Medusozoa (jellyfish and their kin), Anthozoa (corals, anemones etc.), Demospongiae (common sponges), and Calcarea (calcareous sponges), but the 95% credibility intervals (CrIs) obtained under the autocorrelated model for these nodes fell completely or almost completely within the CrIs obtained under the uncorrelated model. In general, the uncorrelated model yielded substantially wider CrIs, sometimes with ranges of hundreds of millions of years (e.g., Calcarea, Demospongiae).
Under both models, however, Metazoa and its deepest subclades – Porifera, Epitheliozoa (Placozoa + Eumetazoa), Eumetazoa (Coelenterata [= Cnidaria + Ctenophora] + Bilateria), and total-group Bilateria – as well as the crown-groups of Coelenterata, Cnidaria, Porifera, Silicea (siliceous sponges), and Calcarea + homoscleromorph sponges were estimated to have arisen before the Sturtian glaciation (the first of the Neoproterozoic Snowball Earth episodes). Given that the autocorrelated model had a better fit according to preliminary model selection analyses (see Supplementary Material) and yielded younger – i.e., in better agreement with the fossil record – and more precise age estimates for most nodes, we did not consider the uncorrelated model further.

Figure 1 Age estimates for major animal groups obtained under different molecular clock models. Mean and 95% credibility intervals (CrIs) of age estimates for select nodes obtained with Calibration set A and the 1000 Ma root age prior (see text) under the autocorrelated “ln” (lower bars) and uncorrelated “ugam” (upper bars) relaxed clock models. Taxon names refer to crown groups. “Calc. + Homo.” = Calcarea + Homoscleromorpha clade. Ma = million years before present. Dotted line indicates Precambrian/Cambrian boundary. Gray areas indicate Sturtian (left) and Marinoan (right) glaciations 21.

Influence of different fossil calibration sets

For the deepest nodes (Opisthokonta to Eumetazoa), all three calibration sets resulted in pre-Sturtian age estimates, with C yielding the oldest, A the youngest, and B intermediate estimates (Fig. 2). The same pattern was also obtained for the crown groups of Coelenterata, Cnidaria, Porifera, and Calcarea + Homoscleromorpha. For crown Bilateria, Protostomia, and Deuterostomia, the ranking was the same, but estimates fell within the Sturtian, Marinoan, or interglacial intervals under A and B.
Under A and B, CrIs for crown Medusozoa spanned the general glaciation interval, with means falling within or close to the interglacial period. In contrast, C yielded a very wide CrI with almost no overlap with those of A and B, with a much younger mean falling in the earliest Cambrian. The pattern for crown Anthozoa was similar, but all estimates were older and consistent with a Sturtian origin. Mean estimates for crown Silicea were all pre-Sturtian and CrIs broadly overlapped. Estimates for crown Demospongiae were all consistent with a glacial origin, although the mean estimates were pre-Sturtian under A, Sturtian under C, and end-Sturtian under B. Only three of the nodes of greatest interest were estimated to be Phanerozoic: crown-group Calcarea, Hexactinellida (glass sponges), and Ctenophora. Mean estimates for Calcarea were mid-Cambrian under all three calibration sets, with CrIs narrowest for A and widest for C. The mean estimate for Hexactinellida (actually Hexasterophora: no members of the second subclass, Amphidiscophora, are included) was late Silurian under A, whereas under B and C, which did not constrain this node, much younger and certainly incorrect 27 estimates (Upper Jurassic and Lower Cretaceous, respectively) were obtained. The situation for crown Ctenophora was very similar: Lower Devonian with the minimum constraint enforced (A and B), and Lower Cretaceous without the constraint (C). Overall, set A gave the most plausible and precise results, especially regarding the deepest nodes, which were excessively old under sets B and C. Furthermore, the calibration set of Erwin et al. 12, which we used for set C, has been heavily criticized 28, so results obtained under set C have to be interpreted with caution. Therefore, we focus on results obtained with calibration set A in the remainder of the paper.

Figure 2 Age estimates for major animal groups obtained under different fossil calibration sets.
Mean and 95% CrIs of age estimates for select nodes obtained under the autocorrelated “ln” relaxed clock model and the 1000 Ma root age prior using different fossil calibration sets (see text) – A (lower bars), B (middle bars), C (upper bars). Taxon names refer to crown groups. “Calc. + Homo.” = Calcarea + Homoscleromorpha clade. Dotted line indicates Precambrian/Cambrian boundary. Gray areas indicate Sturtian (left) and Marinoan (right) glaciations 21 . Full size image Influence of root age prior Not surprisingly, for almost all nodes of major interest, age estimates were younger than those obtained under the 1000 Ma root age prior when 800 Ma was assumed for the root, and older when 1360 Ma was assumed (Fig. 3 ); the only exceptions were crown-group Hexactinellida and Ctenophora, which had almost identical age estimates under the three different root age priors. However, the differences were not as big as we had anticipated. Particularly, the in our opinion unrealistically young root age prior of 800 Ma did not result in a major shift of age estimates towards after the Marinoan glaciation. In fact, the inference that Metazoa and most of its major subgroups originated prior to (or, regarding some crown clades, during) the Cryogenian Snowball Earth periods was not affected by assumptions about the age of crown-group Opisthokonta. In the following, we focus on the results obtained under the 1000 Ma root age prior, for better comparability with the results of Erwin et al . 12 (also note that the 1360 Ma root age prior 13 effectively represents a secondary calibration constraint, which renders the results obtained under this prior somewhat dubious 29 , 30 ). Figure 3 Age estimates for major animal groups obtained under different assumptions about the age of crown-group Opisthokonta. 
Mean and 95% CrIs of age estimates for select nodes obtained with Calibration set A (see text) under the autocorrelated “ln” relaxed clock model using different root age priors – 1000 ± 100 Ma (lower bars), 1360 ± 100 Ma (middle bars), 800 ± 100 Ma (upper bars). Taxon names refer to crown groups. “Calc. + Homo.” = Calcarea + Homoscleromorpha clade. Dotted line indicates Precambrian/Cambrian boundary. Gray areas indicate Sturtian (left) and Marinoan (right) glaciations 21 . Full size image Influence of tree topology Changing the phylogenetic position of Ctenophora to sister of the remaining Metazoa or sister to Placozoa + Cnidaria + Bilateria resulted in somewhat different mean age estimates for many nodes (Supplementary Tables S1 – S2 ). However, CrIs for comparable nodes broadly overlapped with those obtained under the topology showing Ctenophora as sister to Cnidaria (=Coelenterata). Importantly, the general pattern of a pre-Sturtian radiation of the main animal lineages was recovered under all three alternative tree topologies (Fig. 4 , Supplementary Tables S1 – S3 , Supplementary Figs S1 – S2 ). Figure 4 Time-calibrated phylogeny of animals. Phylogeny of crown-Opisthokonta obtained by Philippe et al . 18 , time-calibrated using Calibration set A , an autocorrelated relaxed clock model, and 1000 Ma root age prior (see text). Gray areas indicate Sturtian (left) and Marinoan (right) glaciations 21 . Thick red branches highlight pre-Snowball Earth radiation of animal lineages. Bars at selected deep nodes represent 95% CrIs; above them density plots highlighting the frequency of different age estimates around the mean are shown (produced with custom python and R scripts developed by S. Vargas). Ages in million years before present (Ma). Stratigraphic abbreviations: Ordov., Ordovician; Sil., Silurian; Carbonif., Carboniferous; Pg., Palaeogene; Ng., Neogene. 
Taxon abbreviations: Hom., Homoscleromorpha; Cal., Calcarea; Hex., Hexactinellida; Dem., Demospongiae; Ant., Anthozoa; Med., Medusozoa; Deut., Deuterostomia; Prot., Protostomia; Ecd., Ecdysozoa; Loph., Lophotrochozoa. Full size image Discussion Using Bayesian relaxed molecular-clock dating on a phylogenomic dataset with representative taxon sampling of all five major animal lineages, we have inferred that crown-group animals have a deep pre-Ediacaran origin, which is consistent with several previous studies (reviewed in Sharpe et al . 16 ; see also dos Reis et al . 14 ). However, in contrast to these earlier studies, which found a more protracted diversification of non-bilaterian animals, we have inferred a striking pattern of a relatively fast radiation of these lineages and their basic subgroups, prior to the Neoproterozoic Snowball Earth periods (Fig. 4 ). Although estimates of the exact timing of this radiation differed somewhat depending on model conditions (Figs 1 – 3 ), this general result was robust to the choice of molecular clock model, assumptions about the age of the root (=crown-group Opisthokonta), variation in the internal fossil calibrations used, and alternative tree topologies at the base of Metazoa (see also Supplementary Material online for further analyses). Although it is not entirely clear why the pattern reported here has not been found before, it appears likely that insufficient taxon sampling of non-bilaterians in earlier studies allowed only limited conclusions about early animal evolution. Our estimates are consistent with some palaeontological and geochemical findings interpreted as evidence for Tonian to Cryogenian animal life 31 , 32 , 33 , 34 , 35 , 36 , 37 . However, claims of pre-Ediacaran animal remains are still controversial and more work needs to be done to reconcile the fossil and molecular records 7 . 
Clearly, a more thorough exploration of the Proterozoic fossil record will be necessary to obtain unambiguous evidence for pre-Ediacaran metazoans and a better picture of the morphology and ecology of the early members of the major extant animal lineages. Our results, if accurate, raise important questions such as what triggered this early radiation of animals and how did they survive Snowball Earth? However, future analyses of independent phylogenomic datasets with equally or better suitable taxon sampling, and employing yet other analytical set-ups will be necessary to further test the robustness of these results. Although reconstructing the time-line of animal evolution with high precision might not be possible with current molecular-clock methodology 14 , general patterns such as the one reported here can certainly be detected and will provide a much-needed framework for future research on the origin and early evolution of the Metazoa.
The initial phases of animal evolution proceeded faster than hitherto supposed: New analyses suggest that the first animal phyla emerged in rapid succession – prior to the global Ice Age that set in around 700 million years ago. The fossil record reveals that almost all of the animal phyla known today had come into existence by the beginning of the Cambrian Period some 540 million years ago. The earliest known animal fossils already exhibit complex morphologies, which implies that animals must have originated long before the onset of the Cambrian. However, taxonomically assignable fossils that can be confidently dated to pre-Cambrian times are very rare. In order to determine what the root of their family tree looked like, biologists need reliable dating information for the most ancient animal subgroups – the sponges, cnidarians, comb jellies and placozoans. Dr. Martin Dohrmann and Professor Gert Wörheide of the Division of Palaeontology and Geobiology in the Department of Earth and Environmental Sciences at LMU Munich have now used a new strategy based on the so-called molecular-clock to investigate the chronology of early animal evolution and produce a new estimate for the ages of the oldest animal groups. Their findings appear in the journal Scientific Reports. The molecular clock approach is based on the principle that mutations accumulate in the genomes of all organisms over the course of time. The extent of the genetic difference between two lineages should therefore depend on the time elapsed since they diverged from their last common ancestor. "Our study is based on a combination of genetic data from contemporary animals and information derived from well dated fossils, which we analyzed with the help of complex computer algorithms," Dohrmann explains. 
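In its simplest, strict-clock form this principle reduces to a one-line calculation. The sketch below uses invented numbers purely for illustration (the study itself relies on Bayesian relaxed-clock models in which substitution rates vary among lineages, anchored by fossil calibrations):

```python
# Toy strict-clock calculation (all numbers hypothetical, not from the study):
# under a constant substitution rate, pairwise genetic distance accumulates
# along BOTH branches since the two lineages split.

def divergence_time(genetic_distance, rate_per_ma):
    """Time since the last common ancestor, in million years (Ma).

    genetic_distance: expected substitutions per site between the two taxa.
    rate_per_ma: substitutions per site per lineage per million years.
    Distance accrues on two branches, hence the factor of 2.
    """
    return genetic_distance / (2.0 * rate_per_ma)

# Example: 0.8 substitutions/site at 5e-4 per site per Ma per lineage
# implies a split roughly 800 Ma ago.
print(divergence_time(0.8, 5e-4))  # 800.0
```

Relaxed-clock methods replace the single rate with a distribution of rates across branches, which is why the choice of fossil calibrations and root age priors matters so much for the resulting estimates.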
For the study, the researchers used an unusually large dataset made up of the sequences of 128 proteins from 55 species, including representatives of all the major animal groups, focusing in particular on those that diverged very early. The analysis confirms the conclusion reached in an earlier study, which dated the origin of animals to the Neoproterozoic Era (1000 to 540 million years ago). However, much to their surprise, the results also suggested that the earliest phyla, and the ancestors of all bilateral animal species (the so-called Bilateria), originated within the – geologically speaking – short time-span of 50 million years. "In addition, this early phase of evolutionary divergence appears to have preceded the extreme climate changes that led to Snowball Earth, a period marked by severe long-term global glaciation that lasted from about 720 to 635 million years ago," Dohrmann says. In order to assess the plausibility of the new findings, the researchers plan to carry out further analyses using more extensive datasets and improved statistical methods. "To arrive at well-founded conclusions with respect to the morphology and ecology of the earliest animals, we also need to know more about the environmental conditions that prevailed during the Neoproterozoic, and we need more fossils that can be confidently assigned to specific taxonomic groups," Wörheide says.
10.1038/s41598-017-03791-w
Biology
Study finds pretty plants hog research and conservation limelight
Plant scientists' research attention is skewed towards colourful, conspicuous and broadly distributed flowers, Nature Plants (2021). DOI: 10.1038/s41477-021-00912-2 Journal information: Nature Plants
http://dx.doi.org/10.1038/s41477-021-00912-2
https://phys.org/news/2021-05-pretty-hog-limelight.html
Abstract Scientists’ research interests are often skewed toward charismatic organisms, but quantifying research biases is challenging. By combining bibliometric data with trait-based approaches and using a well-studied alpine flora as a case study, we demonstrate that morphological and colour traits, as well as range size, have significantly more impact on species choice for wild flowering plants than traits related to ecology and rarity. These biases should be taken into account to inform more objective plant conservation efforts. Main Throughout human history, plants have played the role of silent partners in the growth of virtually every civilization 1 . Humans have exploited wild plants and crops as sources of food 2 , used trees as combustible material and to craft manufactured goods 1 , 3 and taken inspiration from the beauty of flowers for poetic and artistic endeavours 4 , 5 . Since the birth of modern science, plants have also become the subjects of intense investigation. As scientists systematically studied the natural history of plants 6 , they soon realized that many of these species could function as model organisms to address fundamental scientific questions 7 . Edward O. Wilson famously stated that ‘[…] for every scientific question, there is the ideal study system to test it’ and thus, the choice of a researcher to study one species or another is often driven by functional criteria (for example, ploidy level for genetics studies and ease of growth under controlled conditions). Still, outside of the laboratory or the greenhouse, field scientists may be challenged in their choice of focus organisms by concerns that exceed strictly scientific research interests. As a result, when plant scientists select to study a specific wild plant among the pool of species available in a given study region, it may be that factors unrelated to the biological question end up influencing species choice and introducing biases in the research outcome. 
Whereas this is not a problem per se, a disparity in scientific attention towards certain species may become a concern in conservation biology, where it is paramount to ensure a ‘level playing field’ in selecting conservation priorities 8 , 9 . Given their global diversity 10 and ecological importance 11 , 12 , plants should be prominent in conservation biology’s effort to curb species loss under mounting anthropogenic pressures 13 , 14 , 15 . Yet, it is well documented that plants receive less attention and consequently less funding in conservation than do animals 16 , 17 . This particular case of taxonomic bias has been connected to ‘plant blindness’ 18 or ‘plant awareness disparity’ 19 , two terms proposed to indicate the lack of awareness for plants. Associated with both the evolutionary history of human cognition and the effect of cultural, scientific and educational tendencies, this disparity translates into serious real-life impacts, as it affects the knowledge base of conservation and its policies. As addressing this bias is urgent but also often outside the scope of plant sciences, we want to identify more specific biases that can be addressed from within the scientific community dedicated to plants, thereby informing better research practices. With this goal in mind, we chose a well-defined case study in which to consider specific traits and factors that could influence the choice of species studied. By combining the strengths of bibliometrics and trait-based approaches, we asked what kind of biases might operate in plant sciences, resulting in some species being more studied than others. To resolve this question, we chose a model system of 113 species typical of the Southwestern Alps, one of the largest biodiversity hotspots within the Mediterranean region 20 (Fig. 1a ). 
By focusing on a well-known flora in a delimited area, this study design allowed us to control for several confounding factors related to sampling biases, trait heterogeneity and research interest. Fig. 1: Study workflow and most important factors in explaining research interest. a , Schematic representation of the data collection and the subdivision of the plant traits in three categories of ecology, morphology and rarity. b , Outcomes of the variance partitioning analysis, whereby the relative contribution of traits related to ecology, morphology and rarity is disentangled, as well as the random effect of species' taxonomic relatedness at family level. T°, temperature. We tested whether there is a relationship between research focus on a plant species (measured using bibliometric indicators) and species-specific traits related to ecology, morphology and rarity (Fig. 1a ). In the Web of Science, we sourced 280 papers focusing on the selected plant species (average (±s.d.) of 2.15 ± 2.96 scientific papers per plant), published between 1975 and 2020. Given that the number of publications, their average annual number of citations and average h-index were all reciprocally correlated (all Pearson's r > 0.7; Supplementary Fig. 1 ), we expressed research attention simply as the total number of publications. By means of variance partitioning analysis 21 , we disentangled the relative contributions of ecology, morphology and rarity in determining the observed pattern of research attention. This analysis indicated how the choice of investigated species across the literature in the last 45 years has been strongly influenced by plant traits related to aesthetics. Using marginal R 2 , we observed how morphological and colour traits explain the greatest proportion of variance (15.0%), whereas the contribution of ecology and rarity was negligible (Fig. 1b ). However, 75.6% of model variance remained unexplained.
When reassessing variance partitioning using conditional R 2 , which describes the proportion of variance explained by both fixed effect and random factors (species taxonomic relatedness), we found that 54% of unexplained variance was due to the random effect. This reveals that certain clusters of closely related plants are more studied than others and share more similar traits than expected from a random pool (examples in Fig. 2a ). Fig. 2: Regression analysis of plant traits. a , Example of representative plant species with different traits and research attention: Gentiana ligustica R. Vilm. & Chopinet (Gentianaceae, blue inflorescence and many published papers), Berardia lanuginosa (Lam.) Fiori (Asteraceae, small and single yellow flower head, short stem and no published papers) and Fritillaria involucrata All. (Liliaceae, large brownish flower, short stem and no published papers). b , Predicted number of papers in relation to stem and flower size by flower colour according to the best-performing GLMM. c , Incidence rate ratios and significance levels (* P < 0.05, ** P < 0.01, *** P < 0.001) for all the explanatory variables included in the final model (exact P values: colour (blue), 0.00025; colour (white), 0.04562; colour (red/pink), 0.08603; colour (yellow), 0.53695; range size, 0.03308; stem size, 0.00858; flower size, 0.04987). Error bars indicate standard errors. P values for parametric terms were based on a two-sided z-test. This first result was surprising, as species rarity and scientific interest for narrow-range endemics or International Union for Conservation of Nature (IUCN) listed taxa did not emerge as significant drivers. Moreover, a preference for species with particular ecological features seemed likely, as some of the endemic species of the Southwestern Alps are adapted to stressful habitats characterized by a narrow range of environmental conditions, such as rocky lands and xerophilous grasslands 22 , 23 .
While these adaptations might be desirable for studies on evolution, ecological niche theory, ecophysiology and conservation, the lack of correlation between variables related to ‘rarity’ and ‘ecology’ highlights the absence of cross-study guidelines to help plant scientists prioritize such research areas in their choice of species studied. To obtain a more nuanced understanding of which specific traits are driving research attention, we explored the relationships between traits and number of published papers with a Poisson generalized linear mixed model (GLMM) that accounted for taxonomic non-independence among species 24 (Supplementary Table 1 ). Using backward model selection, we identified a best-performing model that included colour, range size, flower size and stem size as fixed terms (Fig. 2b,c ). All other variables introduced in the model had no significant effects and were therefore removed during model selection (Supplementary Table 3 ). We observed a significant relationship between the number of published papers and flower colour, with blue-coloured flowers being the most studied and white and red/pink significantly more studied than the baseline (brown/green flowers that stand out the least from the environmental background). Moreover, there was a significant positive effect of plant stem height and a (rather weak) negative effect of flower size on research interest. A greater stem height implies that species are more conspicuous but also taller; thus their inflorescences are more easily accessible without investigators having to stoop to the ground. Furthermore, several plants with small flowers in the Maritime Alps may have intrinsic human appeal, for example flowers constituting conspicuous inflorescences (such as Gymnadenia corneliana and Saxifraga florulenta ) that are more striking than single large flowers, introducing an ‘inflorescence effect’. Finally, there was a positive effect of range size on research interest. 
Tentatively, this is because a broader distribution could make a species accessible to more researchers and thus more likely to be studied. It is interesting to note that, incidentally, broad geographical ranges generally make species less prone to extinction, in line with our finding that species with greater IUCN extinction risk are not subject to more research interest. The statistical relevance of similar trends across our dataset, where morphological traits such as bright colours, accessible inflorescences and conspicuousness are shown to drive research attention, highlights what we call an aesthetic bias in plant research. While aesthetics is today used to refer to art and beauty (often in direct opposition to scientific values like objectivity), the Greek root of the word refers to sensory perception (as evident in its cognates 'anaesthetic' and 'synaesthetic'). As such, the term highlights sensorial perception, both in its physiological, evolved cognitive structures and in its learned sociocultural articulations. Here it is interesting to note that humans have evolved trichromacy, that is, the separate perception of wavelength ranges corresponding to blue, red and green regions through specialized structures 25 . It has been speculated that the evolutionary acquisition of colour vision in humans and other primates led to an increased ability to locate ripe fruits against a green background 26 , 27 . The human eye is thus optimized to perceive green, red and blue which, according to colour psychology theory 28 , also greatly impacts people's affection, cognition and behaviour. The evolved and physiological aspect of human perception is also demonstrably affected by sociocultural factors, since education, class, gender, age and cultural background all shape how we perceive the world 29 . Yet, while these speculations about the origin of the aesthetic bias are interesting, they are beyond the scope of this communication.
What matters is that this bias affects the representativity of data used to ground research priorities and conservation policies and, as such, risks compromising efforts to effectively focus plant conservation activities and preserve plant biodiversity. In conclusion, our analysis identified the traits a plant must possess to be attractive to a scientist, emphasizing the trade-off between aesthetic characteristics, research attention and conservation need. While many factors can determine the choice of studied plant species, we showed how research interests and conservation concerns are less important than aesthetic characteristics in driving research attention. This apparently superficial preference has implicit and undesired effects, as it translates into an aesthetic bias in the data that form the basis for scientific research and practices. Whether this bias is grounded in an evolutionary adaptation of human cognition or in cultural and learned preferences or is simply the effect of practical constraints in the field, it would be desirable to develop measures to counteract it, given the potentially negative impact on our understanding of the ecology and evolution of plants and the conservation of vital plant biodiversity such as species of high phylogenetic value or with unique ecological traits and ecosystem functions. Statistical modelling has been widely used in conservation ecology to predict ecological niches in space and time and to develop a practical conservation agenda 30 . Whereas many potential issues, including geographical-relatedness and sampling biases 31 or metrics selection 32 , have been routinely considered in modelling exercises, the well-known problem of observer-related biases 33 is largely overlooked 34 . Against this background, our study demonstrates the need to consider aesthetic biases more explicitly in experimental design and choice of species studied. 
As Kéry and Gregg 35 stated: 'although plants stand still and wait to be counted, they sometimes hide'. Often in plain view, we would add. Methods Species selection We focused the analysis on the flora of the Italian and French Maritime Alps, a plant biodiversity hotspot in the Southwestern Alps 20 . By restricting the analysis to a flora from an intensively studied and confined area, we were able to control for three confounding factors: (1) Since the Maritime Alps flora has been extensively studied for over two centuries 36 , the number of described plant species in this area has already reached the asymptote 37 compared with under-studied floras outside Europe 38 . (2) Narrow-range plants on similar substrates and localities are characterized by a restricted range of physicochemical features and would be expected to show similar adaptations. This excludes confounding factors that would occur if a study was undertaken on species from different biomes and ecological regions. (3) Narrow-range species are primarily studied by local researchers (mostly from France, Italy and Switzerland), who are expected to share a similar cultural background and thus share cultural biases. This would not occur in the case of cosmopolitan plants studied by different researchers from mixed cultural backgrounds from around the world. We selected a representative list of 113 plant species from checklists 39 , 40 . For the purpose of this analysis, we excluded subspecies and species of uncertain taxonomic status. Bibliometric data We obtained bibliometric data from the Web of Science 41 . We searched all published works focusing on each of the 113 species, using the accepted Latin names and synonyms reported in The Plant List 42 . For each species, we derived three values: number of published papers, their average number of citations per year and their average h-index.
We acknowledge that our search for papers was not exhaustive: we have only included articles in English 43 , used a single bibliographic database and focused the bibliometric search to the abstract, title and keywords. This implies, for example, that species with no studies in the Web of Science ( n = 43; 38%) may have actually been the focus of grey literature or of studies that did not mention the Latin name in the abstract or keywords. This is a common practice, for example, in multispecies studies. However, we assumed that this bias was homogeneously distributed across species and thus unlikely to affect the observed patterns. Species traits We derived flower colour, stem size, flowering duration and altitude data from Tela Botanica 44 , Actaplantarum and InfoFlora 45 . We obtained species' ecological preferences using Landolt indicator values available in Flora Indicativa 46 . We extracted flower size from FlorAlpes and conservation status from the IUCN Red List 47 . We expressed taxonomic uniqueness of each species as the number of congeneric species, on the basis of ref. 42 . Finally, we approximated species range size using species occurrences available in the Global Biodiversity Information Facility 48 . We calculated two measures: (1) the area of the minimum convex polygon (MCP) encompassing all localities (range area) and (2) the dispersion of points around the distribution centroid (range dispersion). The latter measure is a more robust measure of range if distribution data are biased, which is often the case with GBIF datasets where sampling effort is uneven (for example, refs. 49 , 50 ). We grouped species traits into three categories (Supplementary Table 2 and Fig. 1a ): ecology (minimum altitude, altitude range, maximum altitude and Landolt Indexes), morphology (flower colour, flower diameter, stem size and flowering duration) and rarity (range area, range dispersion, IUCN category and number of congeneric species).
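The two range metrics can be sketched in a few lines on planar coordinates (a simplification: a real analysis would first project GBIF longitude/latitude records to an equal-area coordinate system, and the function names here are our own):

```python
# Minimal sketch of the two range metrics: MCP area and centroid dispersion.
# Assumes already-projected planar coordinates; toy occurrence records only.
import math

def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices counter-clockwise."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def mcp_area(points):
    """Range area: shoelace area of the minimum convex polygon."""
    h = convex_hull(points)
    return 0.5 * abs(sum(h[i][0]*h[(i+1) % len(h)][1] -
                         h[(i+1) % len(h)][0]*h[i][1] for i in range(len(h))))

def range_dispersion(points):
    """Range dispersion: mean distance of occurrences from their centroid."""
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    return sum(math.hypot(p[0]-cx, p[1]-cy) for p in points) / len(points)

occ = [(0, 0), (4, 0), (4, 3), (0, 3), (2, 1)]  # toy occurrence records
print(mcp_area(occ))  # 12.0 (the 4x3 rectangle; the interior point is ignored)
print(round(range_dispersion(occ), 3))
```

The dispersion measure is less sensitive than the MCP to a single far-flung (possibly erroneous) record, which is why the authors prefer it for unevenly sampled GBIF data.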
Data analysis We performed all analyses in R 51 . We conducted data exploration following ref. 52 . We checked homogeneity of continuous variables and log 10 -transformed non-homogeneous variables, when appropriate (Supplementary Table 2 ). We verified multicollinearity among predictors with pairwise Pearson’s r correlations (Supplementary Figs. 1 and 3 ). We visualized potential associations between continuous and categorical variables with boxplots. We summarized the main eight Landolt indicator values variations as the first two principal component (PC) axes of a Principal Component Analysis (PCA), describing the environment in which the different species live. PC1 explained 30.5% of the variance and PC2 explained 17.1% of the variance. We excluded salinity tolerance in the calculation of PCA because it is not applicable in the analysed geographical and ecological context. IUCN categories were compared with the other ‘rarity’ variables, revealing a strong association with range area and dispersion (extinction risk is often inferred on the basis of range size 53 ) and a consistent association with the number of congeneric species. Also, collinearity analysis revealed a high correlation (| r | > 0.7) between minimum and maximum elevation, and range area and dispersion. We thus excluded the IUCN category, minimum elevation and range area from the analysis. Moreover, to balance the levels of the variable flower colour, we grouped together red with pink and brown with green coloured flowers. The category ‘green/brown’ was used as a baseline in all analyses, being the least prominent colours from the background 25 , 26 . Variance partitioning analysis We used variance partitioning analysis 21 to resolve the relative contribution of ecology, morphology and rarity in determining the observed pattern of research attention. 
We fitted seven GLMMs (modelling details in the next section), one for each individual set of variables (ecology, morphology and rarity) and their combined effects (ecology + morphology; ecology + rarity; morphology + rarity; ecology + morphology + rarity). In turn, we used the model pseudo R 2 (both conditional and marginal) 54 to evaluate the contribution of each variable and combination of variables to the research attention each species receives, partitioning their explanatory power with the modEvA package 55 and visualizing the results as a Venn diagram. Regression model We used regressions to explore relationships between the research attention each species receives and plant traits 24 . Given that the number of published sources, average number of citations and average h-index were all reciprocally correlated (Pearson's r > 0.7), we selected only the number of publications as a response variable (Supplementary Fig. 1 ). GLMMs were fitted to these data with lme4 (ref. 56 ) using a Poisson distribution and a log link function. The Poisson distribution is often used for count data (in our case, the number of papers in the Web of Science) and the log link function ensures positive fitted values 24 . We scaled all variables and optimized the GLMMs with bound optimization by quadratic approximation (BOBYQA) to facilitate model convergence. We used the family taxonomic rank of each plant species as a random factor, to take into account data dependence under the assumption that species within the same family are more likely to share similar traits. Even though 38% of values in the response variable were zeros (that is, species never studied in scientific papers in the Web of Science), zero-inflation was considered acceptable because these are 'true zeros' 57 .
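The arithmetic behind the seven-model variance partitioning described at the start of this section can be made explicit. A minimal sketch with invented R² values (note: this collapses all overlaps into one 'shared' component, whereas a full Venn diagram would distinguish the pairwise overlaps):

```python
# Variance partitioning by differencing R^2 values of the seven models.
# Keys: E = ecology, M = morphology, R = rarity, and their combinations.
# All R^2 numbers below are made up for illustration.

def partition_r2(r2):
    """r2: dict with keys 'E','M','R','EM','ER','MR','EMR' (marginal R^2)."""
    full = r2['EMR']
    parts = {
        'ecology':    full - r2['MR'],   # unique ecology contribution
        'morphology': full - r2['ER'],   # unique morphology contribution
        'rarity':     full - r2['EM'],   # unique rarity contribution
    }
    parts['shared'] = full - sum(parts.values())  # everything jointly explained
    parts['unexplained'] = 1.0 - full
    return parts

toy = {'E': 0.03, 'M': 0.15, 'R': 0.02,
       'EM': 0.17, 'ER': 0.04, 'MR': 0.16, 'EMR': 0.18}
for k, v in partition_r2(toy).items():
    print(f"{k}: {v:.2f}")
```

With these toy numbers, morphology's unique share (0.14) dwarfs ecology (0.02) and rarity (0.01), mirroring the qualitative pattern the authors report.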
We built an initial GLMM using all the non-collinear variables and the non-associated factors (Supplementary Table 1 ) selected after data exploration (the equation is in R notation): Number of Papers ~ Flower colour + Flowering duration + Flower diameter + Stem size + Landolt values PC1 + Landolt values PC2 + Maximum altitude + Altitude range + Range size + Congeneric species + random(Family). Once the initial model had been fitted, we performed model selection by backward elimination. We based model reduction on Akaike information criterion (AIC) values (Supplementary Table 3 ), to simplify the model and avoid overfitting 58 . We validated models with performance 59 by checking overdispersion and standard residual plots 24 (Supplementary Fig. 2 ). Reporting Summary Further information on research design is available in the Nature Research Reporting Summary linked to this article. Data availability Data and R script to reproduce the analysis are available in figshare.
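The regression-plus-backward-elimination workflow can be caricatured outside R. The sketch below is not the paper's lme4 GLMM: it is a plain fixed-effects Poisson regression (no random family term) fitted by iteratively reweighted least squares on simulated data, with backward elimination on AIC; every variable name and number is invented:

```python
# Hedged sketch: Poisson regression (log link) via IRLS + AIC backward
# elimination, on simulated data. Simplification of the paper's GLMM.
import numpy as np
from math import lgamma

def fit_poisson(X, y, iters=50):
    """IRLS/Newton for Poisson regression with log link; returns (beta, AIC)."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        mu = np.exp(X @ beta)
        beta = beta + np.linalg.solve(X.T @ (mu[:, None] * X), X.T @ (y - mu))
    mu = np.exp(X @ beta)
    loglik = np.sum(y * np.log(mu) - mu - np.array([lgamma(v + 1) for v in y]))
    return beta, 2 * X.shape[1] - 2 * loglik  # AIC = 2k - 2 ln L

def backward_select(X, y, names):
    """Drop one predictor at a time while AIC improves (intercept kept)."""
    keep = list(range(1, X.shape[1]))  # column 0 = intercept
    best_aic = fit_poisson(X, y)[1]
    improved = True
    while improved and keep:
        improved = False
        for j in list(keep):
            cand = [0] + [k for k in keep if k != j]
            aic = fit_poisson(X[:, cand], y)[1]
            if aic < best_aic:
                best_aic, keep, improved = aic, [k for k in keep if k != j], True
    return [names[k] for k in keep], best_aic

rng = np.random.default_rng(1)
n = 300
stem, flower, noise = rng.normal(size=(3, n))
X = np.column_stack([np.ones(n), stem, flower, noise])
y = rng.poisson(np.exp(0.2 + 0.6 * stem - 0.3 * flower))  # 'noise' is irrelevant
kept, aic = backward_select(X, y, ["intercept", "stem", "flower", "noise"])
print(kept)  # the irrelevant 'noise' term is typically the one dropped
```

Exponentiating the fitted coefficients gives the incidence rate ratios reported in the paper's Fig. 2c: exp(beta) is the multiplicative change in the expected paper count per (scaled) unit change of a predictor.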
New Curtin University research has found that a bias among scientists toward colourful and visually striking plants means they are more likely to be chosen for scientific study and benefit from subsequent conservation efforts, regardless of their ecological importance. Co-author John Curtin Distinguished Professor Kingsley Dixon from Curtin's School of Molecular and Life Sciences was part of an international team that looked for evidence of an aesthetic bias among scientists by analysing 113 plant species found in the Southwestern Alps, a global biodiversity hotspot, and mentioned in 280 research papers published between 1975 and 2020. Professor Dixon said the study tested whether there was a relationship between research focus on plant species and characteristics such as the colour, shape and prominence of species. "We found flowers that were accessible and conspicuous were among those that were most studied, while colour also played a big role," Professor Dixon said. "Blue plants, which are relatively rare, received the most research attention and white, red and pink flowers were more likely to feature in research literature than green and brown plants. "Stem height, which determines a plant's ability to stand out among others, was also a contributing factor, while the rarity of a plant did not significantly influence research attention." Professor Dixon said plant traits such as colour and prominence were not indicators of their ecological significance, and so the 'attractiveness bias' could divert important research attention away from more deserving species. "This bias may have the negative consequence of steering conservation efforts away from plants that, while less visually pleasing, are more important to the health of overall ecosystems," Professor Dixon said. "Our study shows the need to take aesthetic biases more explicitly into consideration in experimental design and choice of species studied, to ensure the best conservation and ecological outcomes." 
The full paper, "Plant scientists' research attention is skewed towards colourful, conspicuous and broadly distributed flowers", was published in Nature Plants.
10.1038/s41477-021-00912-2
Nano
New breed of solar cells: Quantum-dot photovoltaics set new record for efficiency in such devices
"Improved performance and stability in quantum dot solar cells through band alignment engineering." Chia-Hao M. Chuang, et al. Nature Materials (2014) DOI: 10.1038/nmat3984. Received 06 December 2013 Accepted 15 April 2014 Published online 25 May 2014 "Energy Level Modification in Lead Sulfide Quantum Dot Thin Films Through Ligand Exchange." Patrick R. Brown, Donghun Kim, Richard R. Lunt, Ni Zhao, Moungi G. Bawendi, Jeffrey C. Grossman, and Vladimir Bulovic. ACS Nano May 13, 2014. DOI: 10.1021/nn500897c Journal information: Nature Materials , ACS Nano
http://dx.doi.org/10.1038/nmat3984
https://phys.org/news/2014-05-solar-cells-quantum-dot-photovoltaics-efficiency.html
Abstract Solution processing is a promising route for the realization of low-cost, large-area, flexible and lightweight photovoltaic devices with short energy payback time and high specific power. However, solar cells based on solution-processed organic, inorganic and hybrid materials reported thus far generally suffer from poor air stability, require an inert-atmosphere processing environment or necessitate high-temperature processing 1 , all of which increase manufacturing complexities and costs. Simultaneously fulfilling the goals of high efficiency, low-temperature fabrication conditions and good atmospheric stability remains a major technical challenge, which may be addressed, as we demonstrate here, with the development of room-temperature solution-processed ZnO/PbS quantum dot solar cells. By engineering the band alignment of the quantum dot layers through the use of different ligand treatments, a certified efficiency of 8.55% has been reached. Furthermore, the performance of unencapsulated devices remains unchanged for over 150 days of storage in air. This material system introduces a new approach towards the goal of high-performance air-stable solar cells compatible with simple solution processes and deposition on flexible substrates. Main Near-infrared PbS quantum dots (QDs) composed of earth-abundant elements 2 have emerged as promising candidates for photovoltaic applications because of a tunable energy bandgap that covers the optimal bandgap range for single and multi-junction solar cells 1 . The QD surface ligands 3 , 4 , 5 , 6 , 7 and the photovoltaic device architecture 8 , 9 , 10 , 11 , 12 , 13 , 14 , 15 , 16 , 17 play crucial roles in determining the optoelectronic properties of QD solar cells. 
Advances in QD surface passivation, particularly through the use of halide ions as inorganic ligands 4 , have led to rapid improvements in QD solar cell power conversion efficiencies to 7% 5 , 15 , 16 as a result of a lower density of trapped carriers than in their organic-ligand counterparts 4 . Furthermore, recent studies have demonstrated the ability to control the band edge energies of QD films through ligand exchange 18 , 19 , 20 . However, fabrication of these recent QD devices requires high-temperature annealing (>500 °C) of the TiO 2 window layer 5 , 16 or two different processing atmospheres, including an inert gas environment 15 . Although good stability has been claimed, the devices still show performance degradation to ~85% of their original efficiencies within one week, even under inert atmosphere 5 , 16 . Here, we demonstrate ZnO/PbS solar cells in which the PbS QDs and ZnO nanocrystals are both solution-processed in air and at room temperature. We demonstrate a device architecture that employs layers of QDs treated with different ligands for different functions by tuning their relative band alignment—a layer of inorganic-ligand-passivated QDs serves as the main light-absorbing layer and a layer of organic-ligand-passivated QDs serves as an electron-blocking/hole-extraction layer. The devices show significant improvements in power conversion efficiency and long-term air stability, compared with previously reported devices. Figure 1a shows the schematics of the device structures employed in this work. Oleic-acid-capped PbS QDs with the first exciton absorption peak at λ = 901 nm in solution ( Supplementary Fig. 1 ) are used to fabricate the thin films. Tetrabutylammonium iodide (TBAI) and 1,2-ethanedithiol (EDT) are used as the inorganic and organic ligands for solid-state ligand exchange. After solid-state ligand exchange, the first exciton absorption peak shifts to λ ~ 935 nm, which corresponds to an optical bandgap E g = 1.33 eV. 
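The quoted optical bandgap follows directly from the photon-energy relation E = hc/λ. A quick numerical check using standard constants (this is a consistency check, not part of the authors' analysis):

```python
# Photon energy E = h*c / wavelength, with h in eV*s so E comes out in eV
H_EV_S = 4.135667696e-15   # Planck constant, eV*s
C_M_S = 2.99792458e8       # speed of light, m/s

def bandgap_eV(wavelength_nm):
    """Optical gap (eV) from the first exciton absorption peak wavelength (nm)."""
    return H_EV_S * C_M_S / (wavelength_nm * 1e-9)

print(round(bandgap_eV(935), 2))   # exciton peak after ligand exchange -> 1.33
print(round(bandgap_eV(901), 2))   # as-synthesized peak in solution -> 1.38
```

The 935 nm peak indeed gives 1.33 eV, matching the text; the red-shift from 901 nm corresponds to a ~0.05 eV narrowing of the gap on film formation.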
We find that PbS QD films treated with TBAI (PbS-TBAI) exhibit superior air stability compared with PbS QDs treated with EDT (PbS-EDT; Supplementary Fig. 2 ). PbS-TBAI-only devices also show a higher short-circuit current density ( J SC ), whereas PbS-EDT-only devices show a higher open circuit voltage ( V OC ; Supplementary Fig. 3 ). Figure 1: Photovoltaic device architectures and performance. a , Device architectures. b , Representative J – V characteristics of devices with Au anodes under simulated AM1.5G irradiation (100 mW cm −2 ). The PbS-TBAI device consists of 12 layers of PbS-TBAI and the PbS-TBAI/PbS-EDT device consists of 10 layers of PbS-TBAI and 2 layers of PbS-EDT. c , External quantum efficiency (EQE) spectra for the same devices. Full size image The J – V characteristics of photovoltaic devices with Au anodes are shown in Fig. 1b . The device consisting of 12 PbS-TBAI layers (corresponding to a film thickness of ~220 nm) shows a power conversion efficiency of 6.0 ± 0.4%, which is higher than the previously reported TiO 2 /PbS-TBAI devices consisting of PbS QDs with an additional solution phase CdCl 2 treatment and MoO 3 /Au/Ag anode 4 . Although PbS-EDT-only devices show a lower J SC than PbS-TBAI-only devices, replacing the topmost two PbS-TBAI layers with two PbS-EDT layers significantly improves the J SC , V OC and fill factor (FF), resulting in a ~35% improvement in power conversion efficiency to 8.2 ± 0.6%, with a 9.2% lab-champion device ( Table 1 ). Table 1 Solar cell performance parameters. Full size table We attribute the improvement in efficiency to the band offsets between the two PbS QD layers, which effectively block electron flow to the anode while facilitating hole extraction. We use ultraviolet photoelectron spectroscopy (UPS) to determine the band edge energies with respect to vacuum in PbS QD films ( Fig. 2a ). 
PbS-TBAI exhibits a deeper work function of 4.77 eV (that is, E F = −4.77 eV with respect to vacuum, where E F is the Fermi level energy) than PbS-EDT. We attribute the difference in their work functions to the difference between the Pb-halide anion and the Pb-thiol–carbon interactions, which give rise to different surface dipole moments, as discussed elsewhere 20 . Furthermore, the difference between the Fermi level and valence band edge ( E V ) in PbS-TBAI is greater ( E F − E V = 0.82 eV) than that in PbS-EDT ( E F − E V = 0.63 eV). According to the individually determined band positions, the large conduction band offset (0.68 eV) between PbS-TBAI and PbS-EDT should block electron flow from the PbS-TBAI layer to the PbS-EDT layer. However, because the interactions between the PbS-TBAI and the PbS-EDT layers can affect the interfacial band bending, the actual band offsets in the device must be measured directly. Figure 2: Energy level diagrams of PbS QDs and photovoltaic devices containing the QDs. a , Energy levels with respect to vacuum for pure PbS-TBAI, pure PbS-EDT and PbS-TBAI films covered with different thicknesses of PbS-EDT layers. The Fermi levels ( E F , dashed line) and valence band edges ( E V , blue lines) were determined by UPS. The conduction band edges ( E C , red lines) were calculated by adding the optical bandgap energy of 1.33 eV, as determined from the first exciton absorption peak in the QD thin films, to E V . b , Schematic energy level alignment at PbS-TBAI and PbS-EDT interfaces deduced from UPS, where E V AC is the vacuum energy. c , Schematic illustration of proposed band bending in ZnO/PbS-TBAI (left) and ZnO/PbS-TBAI/PbS-EDT (right) devices at short-circuit conditions. Full size image To determine the band alignment at the PbS-TBAI/PbS-EDT interface, we performed UPS measurements on PbS-TBAI films covered with different thicknesses of PbS-EDT (see Supplementary Information for the spectra and more details). As shown in Fig. 
2a , as the thickness of the PbS-EDT layer increases, the Fermi level with respect to vacuum shifts to shallower energy levels and reaches saturation when the thickness of the PbS-EDT layer exceeds 13.5 nm. The shift indicates the formation of an interfacial dipole, which results in a reduction of the work function and a downward vacuum level shift at the interface. Moreover, the difference between the Fermi level and the valence band edge decreases with increasing PbS-EDT layer thickness. The energy level alignment at the PbS-TBAI/PbS-EDT interface deduced from the thickness-dependent UPS data is plotted in Fig. 2b . The band alignment demonstrates the role of the PbS-EDT layer as an electron-blocking/hole-extraction layer between the PbS-TBAI layer and the anode, which leads to an improved photocurrent collection efficiency and enhanced device performance in the PbS-TBAI/PbS-EDT devices. In the PbS-TBAI-only device, electron flow from PbS-TBAI to the anode, which is in the opposite direction to the photocurrent, and interfacial recombination at the PbS/anode interface are possible loss mechanisms ( Fig. 2c ). In the PbS-TBAI/PbS-EDT device, the conduction band offset between the PbS-TBAI and PbS-EDT layers provides an energy barrier that prevents photogenerated electrons (filled circles) from flowing to the PbS-EDT layer, whereas the valence band offset provides an additional driving force for the flow of photogenerated holes (open circles) to the PbS-EDT layer. The insertion of the PbS-EDT layer not only prevents electron flow from PbS-TBAI to the anode but may also reduce surface recombination of photogenerated electrons and holes at the PbS-TBAI/anode interface. The interfacial band bending makes an additional minor contribution to the improved J SC . 
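The band-edge bookkeeping behind this argument can be reconstructed from the quoted UPS numbers. In the sketch below (all energies in eV relative to vacuum), the PbS-TBAI values come straight from the text; the PbS-EDT edges are an inference from the stated 0.68 eV conduction-band offset, not directly quoted measurements:

```python
E_G = 1.33                      # optical bandgap of the ligand-exchanged QD films, eV

# PbS-TBAI: work function 4.77 eV, E_F - E_V = 0.82 eV (quoted UPS values)
E_F_TBAI = -4.77
E_V_TBAI = E_F_TBAI - 0.82      # valence band edge  -> -5.59 eV
E_C_TBAI = E_V_TBAI + E_G       # conduction band edge -> -4.26 eV

# PbS-EDT edges inferred from the 0.68 eV conduction-band offset (assumption)
E_C_EDT = E_C_TBAI + 0.68       # shallower E_C -> -3.58 eV: barrier to electrons
E_V_EDT = E_C_EDT - E_G         # shallower E_V -> -4.91 eV: downhill for holes
print(E_C_TBAI, E_C_EDT)
```

The shallower PbS-EDT conduction band is what blocks photogenerated electrons from reaching the anode, while its shallower valence band provides the driving force for hole extraction.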
The band bending at the PbS-TBAI/PbS-EDT interface implies the formation of a depletion region adjacent to this junction, which effectively extends the overall depletion width in the PbS-TBAI light-absorbing layer. This effect is similar to that in previously reported graded-doping devices 15 , 16 where control of carrier concentrations through ligand exchange extends the depletion region, although in that case the band edge positions of the PbS QDs were not altered 16 . The extension of the depletion region in those graded-doping devices accounts for a marginal increase (<5%) in J SC compared with ungraded devices 15 , 16 . In our study, the PbS-TBAI/PbS-EDT devices typically show ~20% improvements in J SC compared with PbS-TBAI-only devices ( Supplementary Fig. 14 ). As shown in Fig. 1c , the PbS-TBAI/PbS-EDT device exhibits a higher external quantum efficiency (EQE) than that in the PbS-TBAI-only device at longer wavelengths. Long-wavelength photons have longer penetration depths owing to the smaller absorption coefficients. Therefore, a higher fraction of long-wavelength photons are absorbed deeper in the film relative to the short-wavelength photons whose absorption is predominantly close to the ZnO/PbS-TBAI interface ( Supplementary Fig. 16 ). The improvement in EQE at longer wavelengths clearly indicates a better photocurrent collection efficiency, especially in the region close to the PbS-TBAI/PbS-EDT interface, consistent with the proposed mechanisms. The J SC values calculated by integrating the EQE spectra with the AM1.5G solar spectrum for PbS-TBAI-only and PbS-TBAI/PbS-EDT devices are 21.0 and 23.7 mA cm −2 , respectively, which show good agreement with the measured J SC (20.7 ± 1.1 and 25.3 ± 1.1 mA cm −2 ). The device stability is found to depend to a greater extent on the interface and band alignment between the QDs and anodes than on the bulk QD layer itself. 
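The EQE-to-J SC integration described above, J SC = q ∫ EQE(λ) Φ(λ) dλ, can be sketched numerically. The flat photon flux and step-function EQE below are toy stand-ins for the measured AM1.5G spectrum and EQE curve, chosen only to land in the right order of magnitude:

```python
import numpy as np

# Toy stand-ins (hypothetical): a flat flux of 4e18 photons m^-2 s^-1 nm^-1
# and a step EQE of 0.6 below the 935 nm absorption edge
wavelength_nm = np.linspace(400, 1000, 601)
photon_flux = np.full_like(wavelength_nm, 4.0e18)   # photons m^-2 s^-1 nm^-1
eqe = np.where(wavelength_nm < 935, 0.6, 0.0)

Q = 1.602176634e-19                                 # elementary charge, C
integrand = eqe * photon_flux
# Trapezoidal integration over wavelength (nm); result in A m^-2
j_sc = Q * np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(wavelength_nm))
jsc_mA_cm2 = j_sc * 0.1                             # 1 A m^-2 = 0.1 mA cm^-2
print(round(jsc_mA_cm2, 1))                         # ~20.6 for this toy input
```

Even this crude spectrum lands near 20 mA cm^-2, the same order as the 21.0 and 23.7 mA cm^-2 the authors obtain by integrating their measured EQE against the real AM1.5G spectrum.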
Figure 3 compares the evolution of solar cell performance parameters with air storage time in devices with Au and MoO 3 /Au anodes, where the MoO 3 is the commonly used hole-extraction layer in PbS-based and other organic photovoltaic devices 21 , 22 , 23 , 24 . Both PbS-TBAI and PbS-TBAI/PbS-EDT devices with Au anodes show stable performance compared with their counterparts with MoO 3 /Au anodes. In contrast, devices with MoO 3 /Au anodes developed S-shape J – V characteristics after air exposure ( Supplementary Fig. 8 ), consistent with the development of a Schottky barrier at the anode 23 , 24 , 25 . This effect significantly reduces the FF and device performance, limiting air stability. Figure 3: Evolution of photovoltaic parameters with air storage time in devices with Au and MoO 3 /Au anodes. a , Open circuit voltage ( V OC ). b , Short-circuit current ( J SC ). c , Fill factor (FF). d , Power conversion efficiency (PCE). Measurements were performed in a nitrogen-filled glovebox. Day 0 denotes measurements performed after anode evaporation in vacuum. Between each measurement, the unencapsulated devices were stored in air without any humidity control. The average (symbols) and standard deviation (error bars) were calculated from a sample of six to nine devices on the same substrate. Full size image The mechanism through which MoO 3 acts as the hole-extraction layer is through electron transfer from its deep-lying conduction band or from gap states to the active layer 22 , 23 , 24 . However, the positions of these states depend strongly on the stoichiometry, environment, and deposition conditions of the MoO 3 (refs 22 , 26 ). It has been shown that briefly exposing a MoO 3 film deposited under vacuum to oxygen can decrease its work function by more than 1 eV (ref. 27 ). Exposing MoO 3 to humid air can decrease its work function even further 28 . 
The S-shaped J – V characteristics in devices with a MoO 3 anode are most likely due to unfavourable band alignment between PbS and air-exposed MoO 3 . We note that the air-exposure time in which this effect becomes significant varies from batch to batch of fabricated devices as a result of uncontrolled humidity in ambient storage conditions. In contrast, the performance of devices without a MoO 3 interfacial layer remains unchanged, implying that the PbS-TBAI absorber layers are functionally insensitive to oxygen and moisture during storage. We also note that devices generally show an initial increase in V OC and FF after air exposure regardless of the active layer (PbS-TBAI, PbS-EDT, or PbS-TBAI/PbS-EDT) and anode materials (MoO 3 /Al, MoO 3 /Au, or Au). The ZnO/PbS films are fabricated and stored in air overnight before being transferred to a glovebox for anode deposition. The performance increases during the first hour of air exposure after evaporation of the metal electrodes ( Supplementary Fig. 9 ). Therefore, further oxidation of the PbS QDs is unlikely to explain the performance enhancement. The origin of this initial increase in performance as a result of short air exposure is still under investigation. The devices with Au anodes exhibit excellent long-term storage stability in air for over 150 days without any encapsulation ( Fig. 4a ). During the course of the stability assessment, devices are stored in air in the dark without humidity control but with some exposure to ambient light during sample transfer to the glovebox for testing. Devices have also been tested in air ( Supplementary Fig. 10 ) and show no degradation in performance after testing. An unencapsulated device was sent to an accredited laboratory (Newport) after 37 days of air storage. This device, tested in air under standard AM1.5G conditions, shows a power conversion efficiency of 8.55 ± 0.18% ( Fig. 4b and Supplementary Fig. 
10 ), which represents the highest certified efficiency to date for colloidal QD photovoltaic devices. To the best of our knowledge, it is also the highest certified efficiency to date for any room-temperature solution-processed solar cell. Another device certified after 131 days of air storage shows a comparable efficiency of 8.19 ± 0.17% and the highest FF (66.7%) in QD solar cells to date ( Supplementary Fig. 13 ). Figure 4: Long-term stability assessment of unencapsulated devices with Au anodes. a , Evolution of photovoltaic parameters of PbS-TBAI (black) and PbS-TBAI/PbS-EDT (red) devices. Open symbols represent the average values and solid symbols represent the values for the best-performing device. b , Device performance of a PbS-TBAI/PbS-EDT device certified by an accredited laboratory (Newport) after 37 days of air storage. Full size image In summary, we have demonstrated high-performance quantum dot solar cells through the engineering of band alignment at the QD/QD and QD/anode interfaces. These solar cells are processed in air at room temperature and exhibit excellent air-storage stability. Our results indicate that using inorganic-ligand-passivated QDs as the light-absorbing layer and removing the MoO 3 interfacial layer are essential to achieving air stability. Compared with other solution-processed solar cells, the present limiting factor of our device is the relatively low V OC , where qV OC ( q is the elementary charge) is less than half of the optical bandgap. We expect that elucidating the origin of the low V OC , optimizing combinations of ligands and QD sizes, and further improving surface passivation via solution-phase treatments will result in continued efficiency improvements. The simplicity of the room-temperature fabrication process and the robustness of the devices to ambient conditions provide advantages compared with other solution-processed solar cells. 
Greater understanding of the QD optoelectronic properties and further progress in materials development could lead to a generation of air-stable, solution-processable QD-based solar cells. Methods Synthesis of colloidal PbS QDs. The synthesis of oleic-acid-capped PbS QD with a first absorption peak at λ = 901 nm was adapted from the literature 11 , 29 . Lead acetate (11.38 g) was dissolved in 21 ml of oleic acid and 300 ml of 1-octadecene at 100 °C. The solution was degassed overnight and then heated to 150 °C under nitrogen. The sulphur precursor was prepared separately by mixing 3.15 ml of hexamethyldisilathiane and 150 ml of 1-octadecene. The reaction was initiated by rapid injection of the sulphur precursor into the lead precursor solution. After synthesis, the solution was transferred into a nitrogen-filled glovebox. QDs were purified by adding a mixture of methanol and butanol, followed by centrifugation. The extracted QDs were re-dispersed in hexane and stored in the glovebox. For device fabrication, PbS QDs were further precipitated twice with a mixture of butanol/ethanol and acetone, respectively, and then re-dispersed in octane (50 mg ml −1 ). Synthesis of ZnO nanoparticles. ZnO nanoparticles were synthesized according to the literature 30 . Zinc acetate dihydrate (2.95 g) was dissolved in 125 ml of methanol at 60 °C. Potassium hydroxide (1.48 g) was dissolved in 65 ml of methanol. The potassium hydroxide solution was slowly added to the zinc acetate solution and the solution was kept stirring at 60 °C for 2.5 h. ZnO nanocrystals were extracted by centrifugation and then washed twice by methanol followed by centrifugation. Finally, 10 ml of chloroform was added to the precipitates and the solution was filtered with a 0.45 μm filter. Device fabrication. Patterned ITO substrates (Thin Film Device Inc.) were cleaned with solvents and then treated with oxygen plasma. 
ZnO layers (120 nm) were fabricated by spin-coating a solution of ZnO nanoparticles onto ITO substrates. PbS QD layers were fabricated by layer-by-layer spin-coating. For each layer, ~10 μl of PbS solution was spin-cast onto the substrate at 2,500 rpm for 15 s. A TBAI solution (10 mg ml −1 in methanol) was then applied to the substrate for 30 s, followed by three rinse-spin steps with methanol. For PbS-EDT layers, an EDT solution (0.02 vol% in acetonitrile) and acetonitrile were used. All the spin-coating steps were performed under ambient condition and room light at room temperature. The thicknesses of each PbS-TBAI and PbS-EDT layer are about 18 nm and 23 nm, respectively, as determined by a profilometer (Veeco Dektak 6M). The films were stored in air overnight and then transferred to a nitrogen-filled glovebox for electrode evaporation. MoO 3 (Alfa; 25 nm thick), Al or Au electrodes (100 nm thick) were thermally evaporated onto the films through shadow masks at a base pressure of 10 −6 mbar. The nominal device areas are defined by the overlap of the anode and cathode to be 1.24 mm 2 . Larger-area devices (5.44 mm 2 ) have also been fabricated and show similar performance ( Supplementary Figs 12 and 13 ). For certification of the larger area device, a 3 mm 2 mask was attached to the device to define the device area. Device characterization. Current–voltage characteristics were recorded using a Keithley 2636A sourcemeter under simulated solar light illumination (1-Sun, 100 mW cm −2 ) generated by a Newport 96000 solar simulator equipped with an AM1.5G filter. The light intensity was calibrated with a Newport 91150 V reference cell before each measurement. The error in efficiency measurements is estimated to be below 7%. EQE measurements were conducted under chopped monochromatic light from an optical fibre in an underfilled geometry without bias illumination. 
The light source was provided by coupling the white light from a xenon lamp (Thermo Oriel 66921) through a monochromator into the optical fibre and the photocurrent was recorded using a lock-in amplifier (Stanford Research System SR830). Both current–voltage and EQE measurements were performed under an inert atmosphere unless stated otherwise. Devices were stored in ambient air between each measurement. Ultraviolet photoelectron spectroscopy. PbS-TBAI and PbS-EDT samples for UPS measurements were fabricated in air using six layer-by-layer spin-coating steps to obtain ~110 nm-thick PbS films on glass/Cr(10 nm)/Au(80 nm) substrates. For PbS-EDT-thickness-dependent UPS, a diluted PbS solution (10 mg ml −1 ) was used to obtain the thinner PbS-EDT layers on PbS-TBAI films. The samples were then stored in air overnight before UPS measurements. UPS measurements were performed in an ultrahigh vacuum chamber (10 −10 mbar) with a He(I) (21.2 eV) discharge lamp and have a resolution of 0.1 eV. Carbon tape was used to make electrical contact between the Cr/Au anode and the sample plate. A −5.0 V bias was applied to the sample to enable accurate determination of the low-kinetic-energy photoelectron cut-off. Photoelectrons were collected at 0° from substrate normal and the spectra were recorded using an electron spectrometer (Omnicron). The conduction band edge energies were calculated by adding the optical bandgap energy of 1.33 eV determined from the first exciton absorption peak in the QD thin films to the valence band edge energies. The E F − E V values have an error bar of ±0.02 eV resulting from curve fitting.
Solar-cell technology has advanced rapidly, as hundreds of groups around the world pursue more than two dozen approaches using different materials and technologies to improve efficiency and reduce costs. Now a team at MIT has set a new record for the most efficient quantum-dot cells—a type of solar cell that is seen as especially promising because of its inherently low cost, versatility, and light weight. While the overall efficiency of this cell is still low compared to other types—about 9 percent of the energy of sunlight is converted to electricity—the rate of improvement of this technology is one of the most rapid seen for a solar technology. The development is described in a paper, published in the journal Nature Materials, by MIT professors Moungi Bawendi and Vladimir Bulović and graduate students Chia-Hao Chuang and Patrick Brown. The new process is an extension of work by Bawendi, the Lester Wolfe Professor of Chemistry, to produce quantum dots with precisely controllable characteristics—and as uniform thin coatings that can be applied to other materials. These minuscule particles are very effective at turning light into electricity, and vice versa. Since the first progress toward the use of quantum dots to make solar cells, Bawendi says, "The community, in the last few years, has started to understand better how these cells operate, and what the limitations are." The new work represents a significant leap in overcoming those limitations, increasing the current flow in the cells and thus boosting their overall efficiency in converting sunlight into electricity. Many approaches to creating low-cost, large-area flexible and lightweight solar cells suffer from serious limitations—such as short operating lifetimes when exposed to air, or the need for high temperatures and vacuum chambers during production. 
By contrast, the new process does not require an inert atmosphere or high temperatures to grow the active device layers, and the resulting cells show no degradation after more than five months of storage in air. Bulović, the Fariborz Maseeh Professor of Emerging Technology and associate dean for innovation in MIT's School of Engineering, explains that thin coatings of quantum dots "allow them to do what they do as individuals—to absorb light very well—but also work as a group, to transport charges." This allows those charges to be collected at the edge of the film, where they can be harnessed to provide an electric current. The new work brings together developments from several fields to push the technology to unprecedented efficiency for a quantum-dot based system: The paper's four co-authors come from MIT's departments of physics, chemistry, materials science and engineering, and electrical engineering and computer science. The solar cell produced by the team has now been added to the National Renewable Energy Laboratories' listing of record-high efficiencies for each kind of solar-cell technology. The overall efficiency of the cell is still lower than for most other types of solar cells. But Bulović points out, "Silicon had six decades to get where it is today, and even silicon hasn't reached the theoretical limit yet. You can't hope to have an entirely new technology beat an incumbent in just four years of development." And the new technology has important advantages, notably a manufacturing process that is far less energy-intensive than other types. Chuang adds, "Every part of the cell, except the electrodes for now, can be deposited at room temperature, in air, out of solution. It's really unprecedented." The system is so new that it also has potential as a tool for basic research. "There's a lot to learn about why it is so stable. 
There's a lot more to be done, to use it as a testbed for physics, to see why the results are sometimes better than we expect," Bulović says. A companion paper, written by three members of the same team along with MIT's Jeffrey Grossman, the Carl Richard Soderberg Associate Professor of Power Engineering, and three others, appears this month in the journal ACS Nano, explaining in greater detail the science behind the strategy employed to reach this efficiency breakthrough. The new work represents a turnaround for Bawendi, who had spent much of his career working with quantum dots. "I was somewhat of a skeptic four years ago," he says. But his team's research since then has clearly demonstrated quantum dots' potential in solar cells, he adds. Arthur Nozik, a research professor in chemistry at the University of Colorado who was not involved in this research, says, "This result represents a significant advance for the applications of quantum-dot films and the technology of low-temperature, solution-processed, quantum-dot photovoltaic cells. … There is still a long way to go before quantum-dot solar cells are commercially viable, but this latest development is a nice step toward this ultimate goal."
10.1038/nmat3984
Nano
Valley current control shows way to ultra-low-power devices
Y. Shimazaki et al. Generation and detection of pure valley current by electrically induced Berry curvature in bilayer graphene, Nature Physics (2015). DOI: 10.1038/nphys3551 Journal information: Nature Physics
http://dx.doi.org/10.1038/nphys3551
https://phys.org/news/2015-11-valley-current-ultra-low-power-devices.html
Abstract The field of ‘Valleytronics’ has recently been attracting growing interest as a promising concept for next-generation electronics, because non-dissipative pure valley currents with no accompanying net charge flow can be manipulated for computational use, akin to pure spin currents 1 . Valley is a quantum number defined in an electronic system whose energy bands contain energetically degenerate but non-equivalent local minima (conduction band) or maxima (valence band) due to a certain crystal structure. Specifically, two-dimensional honeycomb lattice systems with broken spatial inversion symmetry that exhibit Berry curvature are a subset of possible systems enabling optical 2 , 3 , 4 , 5 , magnetic 6 , 7 , 8 , 9 and electrical control of the valley degree of freedom 10 , 11 , 12 . Here we use dual-gated bilayer graphene to electrically induce and control broken inversion symmetry (or Berry curvature) as well as the carrier density for generating and detecting the pure valley current. In the insulating regime, at zero magnetic field, we observe a large nonlocal resistance that scales cubically with the local resistivity, which is evidence of a pure valley current. Main Charge and spin are both well-defined quantum numbers in solids. Spintronics is a technology that uses the spin degree of freedom. The application range of spintronics has been largely expanded by the development of electrical techniques for generating and detecting the spin current 1 . The valley degree of freedom in solid crystals can be handled by controlling the occupation of the non-equivalent structures in the band, providing the novel concept of so-called valleytronics. Among various material candidates for valleytronics, two-dimensional (2D) honeycomb lattice systems with broken spatial inversion symmetry, such as gapped graphene and transition metal dichalcogenides (TMDCs), are predicted to be the most useful. These systems have two valleys, called K and K′. 
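The cubic scaling reported in the abstract, R NL ∝ ρ³, is the kind of relationship usually verified as a slope of 3 on a log-log plot of nonlocal resistance against local resistivity. A sketch of that extraction on synthetic data (the data here are generated, not the authors' measurements):

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic local resistivity sweep (arbitrary units) and a cubic nonlocal
# response with small multiplicative noise, mimicking R_NL ∝ rho^3
rho = np.logspace(0, 2, 30)
r_nl = 1e-3 * rho**3 * np.exp(rng.normal(0.0, 0.05, rho.size))

# Slope of log(R_NL) vs log(rho) estimates the scaling exponent
slope, intercept = np.polyfit(np.log(rho), np.log(r_nl), 1)
print(slope)   # should come out close to 3
```

A fitted exponent near 3 (rather than 1) is what distinguishes valley-current-mediated nonlocal transport from simple ohmic current spreading.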
Optical 2 , 3 , 4 , 5 , magnetic 6 , 7 , 8 , 9 and electrical 10 , 11 , 12 control of the valley has been demonstrated. In particular, Berry curvature, which emerges in these honeycomb lattice systems with broken spatial inversion symmetry, enables electrical control of the valley degree of freedom. Berry curvature acts as an out-of-plane pseudo-magnetic field in momentum space and has opposite sign between the two valleys. Therefore, a transverse pure valley current is generated by means of the anomalous velocity, in analogy to a transverse electronic current being generated by means of the Lorentz force due to a magnetic field in real space 13 , 14 (see Fig. 1d ). This phenomenon is called the valley Hall effect 10 and can be used to generate a valley current. The inverse valley Hall effect, which converts the valley current into a transverse electric field, allows the detection of the pure valley current. Figure 1: Scheme for detection of the nonlocal resistance due to valley current flow in BLG. a , Band structure, Berry curvature and valley Hall conductivity of BLG. A band gap 2 Δ and Berry curvature Ω emerge as a result of broken spatial inversion symmetry. The valley Hall conductivity σ xy VH , which is calculated by integrating the Berry curvature, is constant in the bandgap. b , Schematic of the dual-gated BLG device. The top gate is a gold film, and the back gate is a p-doped silicon substrate. Two h-BN layers and a SiO 2 film are used as gate insulators. Using these gates, the carrier density and perpendicular electric field (displacement field) are independently varied. Expanded image: Lattice structure of AB-stacked BLG. In the presence of a displacement field, an energy difference between the top and bottom layer emerges. Therefore spatial inversion symmetry is broken in this system, and both Berry curvature and a bandgap emerge. c , AFM image of the BLG device without the top h-BN. The light blue region indicates the area of the top gate. 
The BLG has a mobility of ∼ 15,000 cm 2 V −1 s −1 at both 1.5 and 70 K. d , Schematic of the nonlocal resistance measurement and the nonlocal transport mediated by a pure valley current. The electric field driving the charge current in the left-hand circuit generates a pure valley current in the transverse direction by means of the valley Hall effect (VHE). This valley current is converted into an electric field or nonlocal voltage in the right-hand circuit by means of the inverse valley Hall effect (IVHE) to generate the nonlocal resistance. Full size image The valley Hall effect was first reported for photo-generated electrons in monolayer MoS 2 (ref. 11 ). However, the small inter-valley scattering length in this material prevents the detection of the pure valley current, which does not accompany the electronic current. Compared to TMDC, graphene has a much larger inter-valley scattering length owing to its higher crystal quality. Monolayer graphene on h-BN has more recently been used to generate and detect the pure valley current, where the crystal direction of the graphene was aligned to that of the h-BN such that the superlattice potential imposed by the h-BN structurally breaks the spatial inversion symmetry 12 . The valley Hall effect was analysed in detail with the carrier density as a parameter using metallic samples whose resistivity decreases as the temperature is lowered, but leaving unaddressed the insulating regime, which is more appropriate for investigating the pure valley current. In this work we employed bilayer graphene (BLG) to generate and detect the valley current. We used a perpendicular electric field to break the spatial inversion symmetry and induce Berry curvature as well as a bandgap (see Fig. 1a ). The dual-gated structure seen in Fig. 1b allows electrical and independent control of the perpendicular electric field and the carrier density 15 , 16 , 17 , 18 , 19 . This is in contrast to the monolayer graphene samples in ref. 
12 , where the monolayer graphene has to be structurally aligned with h-BN through a process of mechanical transfer. BLG valley Hall devices therefore show greater promise in terms of tunability of the valley current and applications to electronic devices. Indeed we show that independent control of the Fermi level and the bandgap enables us to prove the existence of the valley Hall effect in the insulating regime where the local resistivity increases with decreasing temperature. The significant advantage of the insulating system is that conversion from the electric field to the valley current is less dissipative than that in the metallic regime, as a much smaller current is injected. Such a regime has not been accessible with conventional spin or valley Hall systems. In bilayer graphene with broken spatial inversion symmetry, the Berry curvature Ω and intrinsic valley Hall conductivity σ xy VH are calculated as a function of the Fermi energy E F (refs 10 , 20 , 21 ): Ω = τ z (ℏ²Δ/ m )·√( E F ² − Δ²)/ E F ³ (1) and σ xy VH = (4 e ²/ h )·Δ/| E F | for | E F | ≥ Δ (2), where τ z is the valley index ( τ z = −1 for K and +1 for K′), m is the effective mass in BLG without broken spatial inversion symmetry, e is the elementary charge, h is the Planck constant, and ℏ = h /2 π . The Berry curvature Ω is defined only for | E F | ≥ Δ (half the bandgap, see Fig. 1a ). σ xy VH saturates at the maximum value 4 e 2 / h when the Fermi level lies in the gap, because all occupied states in the valence band contribute to the valley Hall effect. Away from the gap—for example, when the Fermi energy lies above the gap—the conduction band, which has the opposite sign of Berry curvature to that of the valence band, contributes to reduce σ xy VH . To detect the pure valley current, the nonlocal resistance R NL was measured in the same scheme as is widely used in the spintronics field to detect pure spin current 22 , 23 , 24 , 25 , 26 . 
We observed a value of R NL at the charge neutrality point in the presence of a perpendicular electric field that was three orders of magnitude larger than the R NL due to the Ohmic contribution (explained later). We also found a cubic scaling relation between R NL and resistivity ρ (= 1/ σ xx ), which is expected to appear when σ xx is much larger than σ xy VH in the intrinsic valley Hall effect. This cubic scaling was reproduced in multiple devices. From these findings we conclude that the origin of the observed large nonlocal resistance is the transport mediated by pure bulk valley current in a gapped state with electrically induced Berry curvature. Figure 1b, c shows the schematic of the dual-gated BLG device and an AFM image of the device, respectively. BLG is encapsulated between two h-BN layers 27 (see Methods ) and gated through the h-BN layer from the top and from the bottom. The local and nonlocal resistance R L and R NL were derived from measurements of the four-terminal resistance R ij , kl , which is defined by the voltage between terminals i and j divided by the charge current injected between terminals k and l (see Fig. 1c, d ). Unless mentioned, R L and R NL denote R 57,38 and R 45,67 , respectively. The measurement was performed at 70 K using a low-frequency (around 1 Hz) lock-in technique, unless mentioned (see Methods and Supplementary Section I for details of the measurement). Figure 2a, b shows the gate voltage dependence of R L and R NL , respectively. At the charge neutrality point (CNP), R L increases with the displacement field ( D ) (see Fig. 2a ), reflecting the bandgap opening due to inversion symmetry breaking 15 , 16 , 17 , 18 , 19 , 28 . We found that R NL also increases with D around the CNP. Figure 2: Measured local and nonlocal resistances R L and R NL . a , b , Gate voltage dependence of R L and R NL , respectively. 
The displacement fields from the back and top gate are defined as D BG = ɛ BG ( V BG − V BG 0 )/ d BG and D TG = − ɛ TG ( V TG − V TG 0 )/ d TG , respectively, where ɛ BG ( ɛ TG ) and d BG ( d TG ) are the relative dielectric constant and thickness of the back (top) gate, respectively, and V BG 0 ( V TG 0 ) is the offset of the back (top) gate voltage under the top gated region due to environmental doping. The displacement field D is defined by the average of D BG and D TG . The red axis shows the scale of D . Inset: Schematic of the measurement configuration. The blue arrow shows the direction of charge flow. c , Comparison of the measured R NL (blue) with a calculation of the Ohmic contribution (magnified 1,000 times) (green). The R NL curve is extracted from the data along the direction of the green arrow in b at the highest D . The Ohmic contribution curve is calculated using the R L data along the direction of the green arrow in a at the highest D (see text). Full size image In analogy with the spin Hall effect 23 ( Supplementary Section IX ), R NL arising from the valley Hall and inverse valley Hall effects is given by R NL = (1/2)( σ xy VH )²·( W / l v )·exp(− L / l v )/ σ xx ³ (3), where σ xy VH and l v are the valley Hall conductivity and the inter-valley scattering length, respectively. W and L are the width and length of the Hall bar channel. Local conductivity σ xx is minimized at the CNP and with increasing D , thus enhancing R NL (equation (3)). For a given D , R NL is further maximized around the CNP owing to the maximal valley Hall conductivity σ xy VH (equation (2)). We confirmed that R NL is unchanged when swapping the measurement terminals—that is, R 45,67 ∼ R 67,45 ( Supplementary Section II ). We also consider a contribution to the measured nonlocal resistance from trivial Ohmic resistance, which arises from classical diffusive charge transport. 
The Ohmic contribution can be calculated using the van der Pauw formula R NL = ( ρ / π ) exp(− π ( L / W )) (refs 12 , 24 , 25 , 26 ), where we define the resistivity ρ = R L ( W / L ), and is compared with the measured nonlocal resistance in Fig. 2c . The measured R NL is three orders of magnitude larger than the calculated Ohmic contribution. We therefore exclude the Ohmic contribution as the origin of the observed R NL . In the gapped BLG, the electron conduction mechanism depends on the temperature T . At high T it is dominated by thermal activation across the bandgap, namely band transport, whereas at low T it is dominated by hopping conduction between impurity states 15 , 17 , 18 , 19 . The temperature dependence of maximum ρ with respect to carrier density ( ρ max ) was measured for various displacement fields D ( Fig. 3a , inset). We plot 1/ ρ max as a function of 1/ T for D = 0.55 V nm −1 as a typical example in Fig. 3a . The temperature dependence is strong at high T (> 79 K), reflecting band conduction, and weak at low T , reflecting hopping conduction. The temperature dependence over the whole range is reproduced well by a double exponential function: 1/ ρ max = (1/ ρ 1 ) exp(− E 1 L / k B T ) + (1/ ρ 2 ) exp(− E 2 L / k B T ) (4), where E 1 L ( E 2 L ) and ρ 1 ( ρ 2 ) are the activation energy and the local resistivity, respectively, for the high- T (low- T ) regime. 2 E 1 L indicates the bandgap size, and is around 80 meV at the highest D ( Supplementary Section IV ). The crossover temperature T c between the high- and low-temperature regions is determined by the crossing point of the first and second terms of equation (4), as shown in Fig. 3a . The temperature dependence of the maximum nonlocal resistance R NL max was also measured ( Fig. 3b , inset) and analysed with the following fitting function in the same way as for ρ max , as shown in Fig. 3b : 1/ R NL max = (1/ R 1 ) exp(− E 1 NL / k B T ) + (1/ R 2 ) exp(− E 2 NL / k B T ) (5), where E 1 NL ( E 2 NL ) is the activation energy and R 1 ( R 2 ) is a fitted proportionality factor, respectively, for the high- T (low- T ) regime. 
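The crossover analysis described above, a band-conduction term competing with a hopping term, can be sketched numerically. This is a minimal illustration only: the double-exponential form (two Arrhenius-like terms) is assumed from the description in the text, and all parameter values below are invented, not fitted to the paper's data.

```python
import math

K_B = 8.617e-5  # Boltzmann constant, eV/K

def inv_rho(T, a1, E1, a2, E2):
    """Assumed double-exponential model: band term + hopping term.
    a1 = 1/rho_1 and a2 = 1/rho_2 are prefactors; E1, E2 are activation
    energies in eV for the high-T (band) and low-T (hopping) regimes."""
    return a1 * math.exp(-E1 / (K_B * T)) + a2 * math.exp(-E2 / (K_B * T))

def crossover_temperature(a1, E1, a2, E2):
    """T_c where the two terms cross: setting
    a1*exp(-E1/kT) = a2*exp(-E2/kT) gives T_c = (E1 - E2) / (k_B ln(a1/a2))."""
    return (E1 - E2) / (K_B * math.log(a1 / a2))

# Hypothetical parameters: a ~80 meV bandgap implies E1 = 0.040 eV (activation
# over half the gap); E2 = 0.004 eV is an invented shallow hopping energy.
a1, E1, a2, E2 = 1.0, 0.040, 0.01, 0.004
Tc = crossover_temperature(a1, E1, a2, E2)  # band and hopping terms cross here
```

For these invented numbers the crossover lands near 90 K; above T_c the band term dominates, below it the hopping term does, mirroring the two regimes in Fig. 3.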
The temperature dependence is fairly similar to that of ρ max in Fig. 3a . We also plot the crossover temperature T c for both 1/ ρ max and 1/ R NL max as a function of D in Fig. 3c . The T c for 1/ ρ max divides the D – T plane into the band conduction region (light green) and the hopping conduction region (light red). Figure 3: Temperature dependence of ρ max and R NL max . a , b , Typical data fitting using a double exponential function for 1/ ρ max ( a ) and 1/ R NL max ( b ). The blue broken curve indicates the fitting curve. The green (red) line indicates the contribution from band (hopping) conduction. The crossover temperature T c is defined by the crossing of the two lines. Inset: Temperature dependence of maximum ρ ( a ) and R NL ( b ) with respect to the carrier density. D is varied from 0.85 V nm −1 (red) to 0.01 V nm −1 (purple) in a and from 0.85 V nm −1 (red) to 0.17 V nm −1 (blue) in b with a constant interval. c , T c derived from the data fitting as in a for 1/ ρ max and b for 1/ R NL max . The error bars are due to the accuracy of the fitting. The light green (red) area is the region of band conduction (hopping conduction). The blue curve shows the fitting result for the nonlocal 1/ T c versus 1/ D . Full size image The critical temperatures T c of 1/ ρ max and 1/ R NL max coincide for D > 0.4 V nm −1 , indicating that there is correlation of the crossover behaviour between the local and nonlocal transport. However, it deviates for D < 0.4 V nm −1 for the following two possible reasons. The first possible reason is underestimation of the T c of the nonlocal transport in the low- D region. The nonlocal voltage becomes very small at high T and low D , making precise measurement of R NL difficult. In this regime, there are fewer measured points available for the fitting, resulting in the underestimation of T c . 
The second possible reason is that the nonlocal transport by the valley current is less affected by charge puddles compared to the local transport, although we do not yet fully understand the reason for this observation. One noticeable result is that the T c of the nonlocal transport depends almost linearly on D for the entire region in Fig. 3c (see the blue curve; Supplementary Section V ). This behaviour may indicate that the T c is affected by the size of bandgap but less affected by the size of potential fluctuations due to charge puddles. Note that all four of the fitting parameters in equations (4) and (5) have a D dependence; therefore, obtaining an analytical relationship between D and T c is not straightforward. Another notable result is that the high- T activation energy E 1 is different between the local and nonlocal transport ( Supplementary Sections IV and V , see Supplementary Figs 7 and 8 ). This already implies there is no linear relation between R NL and ρ in our device. This observation is in contrast to a previous report on monolayer graphene 12 , where both activation energies were similar. We now present the scaling relation between ρ and R NL at the CNP. Figure 4 is a plot of R NL versus ρ obtained for various displacement fields D . The crossover behaviour between the band conduction and the hopping conduction shows up again on this plot. In the band conduction region ( ρ < 7 kΩ) we observe a clear cubic scaling relation (green line), whereas we observe saturation in the hopping conduction region ( ρ > 7 kΩ). Similar cubic and saturating scaling relations are obtained for different physical conditions. The ρ versus R NL relation obtained for different carrier densities and temperatures at fixed displacement fields are shown in Supplementary Sections VI and VII , respectively. In addition, we observe a similar scaling relation for multiple devices (see Fig. 4 inset for one example). Figure 4: Scaling relation between ρ and R NL at CNP. 
Each data point is extracted from Fig. 2a, b for a different D value ranging from 0.22 to 0.85 V nm −1 at CNP. Inset: Scaling relation between the maximum ρ ( ρ max ) and R NL ( R NL max ) as functions of the carrier density obtained from various D ranging from 0.51 to 0.75 V nm −1 (red points) and −0.50 to −0.76 V nm −1 (blue points) in another dual-gated bilayer graphene Hall bar device. The width and length of the Hall bar channel are 1 μm and 4.5 μm, respectively. Measurement was performed at 50 K. At lower temperatures, we also observed deviation from the cubic scaling for high displacement fields. Full size image By assuming a constant inter-valley scattering length and replacing σ xx with ρ −1 in equation (3), we derive the following scaling relation between R NL and ρ : R NL = (1/2)( σ xy VH )²·( W / l v )·exp(− L / l v )· ρ ³ (6). The cubic scaling between R NL and ρ holds for the constant valley Hall conductivity which is expected when the Fermi level is in the bandgap or near the CNP (see Fig. 1a ) for the intrinsic valley Hall effect, σ xy VH = 4 e 2 / h . The observed cubic relation for small D in Fig. 4 is therefore consistent with the theoretical expectation, providing unambiguous evidence of the valley transport. Note that, at finite temperatures, σ xy VH is reduced from 4 e 2 / h . However, in the range of displacement fields used here, it stays almost constant with a value close to 4 e 2 / h (see Supplementary Section X ). Using σ xy VH = 4 e 2 / h and substituting σ xx = ρ −1 and the sample dimensions into equation (3), we obtain l v = 1.6 μm. This is comparable to the estimated inter-valley scattering length in previous works 12 , 29 . By using different sets of four terminals we observed a significant decay of R NL with increasing L ( Supplementary Section V ), probably owing to valley relaxation due to edge scattering, as discussed in a weak localization study 29 . We here note that equations (3) and (6) are valid only for σ xx ≫ σ xy VH ( Supplementary Section IX ). 
Otherwise we need to solve the conductance matrix and the diffusion equation of the entire Hall bar in a self-consistent way. Indeed, deviation from the cubic scaling in the large- D region observed in Fig. 4 may arise owing to the inapplicability of equations (3) and (6). However, it does not account for the saturation of R NL for large ρ ( Supplementary Section IX ). Another possible scenario to account for the saturation of R NL is the crossover of the conduction mechanism, as discussed in Fig. 3c . In studies of the anomalous Hall effect, the crossover between the metallic and the hopping transport regime has been experimentally studied, and the scaling relation σ xy ∝ σ xx 1.6 has been reported in a wide range of materials 13 . If we apply this experimental rule for equation (6), we find R NL to be almost constant with ρ . Here, we are again cautious about the validity of equation (6) in this argument, because in the saturation region σ xx < σ xy VH for σ xy VH = 4 e 2 / h . However, by including extrinsic contributions—for example, the side-jump contribution 10 — σ xy VH can be smaller than 4 e 2 / h and σ xx . In such a case, we can keep the above-described analogy with the anomalous Hall effect. Further experimental and theoretical investigations are needed into the valley Hall effect in the insulating regime 30 , where conventional formulae are not applicable. We finally exclude another scenario that might account for the R NL observed here. In the gap of bilayer graphene, the presence of localized states along the edge resulting from the topological property of BLG has been predicted theoretically 31 . This might also contribute to the nonlocal transport. With a large displacement field or large bandgap, the bulk shunting effect is small and the conduction becomes dominated by the edge transport. In such a case, R NL should be proportional to the local resistance obtained by a four-terminal measurement ( Supplementary Section X ). 
This linear scaling does not fit any of the observed features: the linear scaling line drawn in blue in Fig. 4 clearly deviates from the data. Even when we consider the effect of bulk shunting, we find that the scaling is far from the cubic line ( Supplementary Section X ). We therefore exclude the possibility of edge transport as the origin of the observed R NL and conclude that it comes from the bulk valley current in the gap. Transport through the localized states along the edge was also ruled out by a measurement on a Corbino-geometry device 19 . We used a dual-gated BLG in the Hall bar geometry to electrically control the broken inversion symmetry of BLG, and hence the valley degree of freedom. We observed a large nonlocal resistance in the insulating regime at 70 K and revealed a cubic scaling between the nonlocal resistance and the local resistivity as an indication of pure valley current flow. The valley current is fully controlled by electrical gating, with the bandgap, the Fermi level and broken inversion symmetry as parameters. This will allow further studies on the underlying physics of the valley current, in particular for σ xx < σ xy VH , as well as applications for non-dissipative electronic devices. While preparing the manuscript, we became aware of a report of topological valley current along an AB–BA stacking domain wall in bilayer graphene 32 . This topological valley current along the domain wall also originates from the non-zero valley Hall conductivity (or non-zero valley Chern number) in the gap of bilayer graphene with broken spatial inversion symmetry. Note added in proof: We became aware that there is similar work related to valley current transport in dual-gated bilayer graphene 33 . Methods We used a mechanical exfoliation technique to prepare bilayer graphene (BLG) and h-BN flakes. 
The number of layers in each graphene flake on the SiO 2 /Si substrate was identified by optical contrast. The SiO 2 was 285 nm thick, the Si was heavily p-doped and used for back gating. We transferred the BLG flakes onto h-BN flakes prepared on SiO 2 using the PMMA transfer technique reported in ref. 34 . Then Ti/Au (10 nm/190 nm) was deposited to make Ohmic contacts. BLG was etched into a Hall bar by means of an Ar plasma. After each transfer and lithography step, except for the step between the Ohmic contact deposition and Ar plasma etching, the device was annealed at 300 °C in an Ar/H 2 atmosphere for a few hours to remove the resist residue. However, the PMMA residue could not be completely removed by annealing, so we used a mechanical cleaning technique 35 , 36 , 37 utilizing an AFM in tapping mode. After shaping the Hall bar, an h-BN flake was transferred to the top of the BLG/h-BN stacking layer. Finally, Ti/Au (10 nm/190 nm) was deposited onto the h-BN/BLG/h-BN stacking layer to make the top gate. The thicknesses of the top and bottom h-BN layers measured by AFM were 21 nm and 35 nm, respectively. Measurements were made using a low-frequency (around 1 Hz) lock-in technique. We found that current leakage through the input impedance of the voltage amplifiers causes an artefact in the nonlocal measurement. However, by using a simple circuit model, we confirmed that the error was not significant in this measurement (see Supplementary Section I ). For the measurement shown in the inset of Fig. 4 , we used home-made voltage amplifiers with a high input impedance to suppress the artefact further.
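As a side note on the data analysis, the cubic relation R NL ∝ ρ ³ reported above is the kind of power law one checks with a least-squares slope on a log-log plot. The sketch below uses invented numbers purely to illustrate the exponent-extraction step; it is not the paper's data or analysis code.

```python
import math

def loglog_slope(xs, ys):
    """Least-squares slope of log(y) versus log(x); for a power law
    y = C * x**n this recovers the exponent n."""
    lx = [math.log(x) for x in xs]
    ly = [math.log(y) for y in ys]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(lx, ly))
    var = sum((a - mx) ** 2 for a in lx)
    return cov / var

# Synthetic band-conduction-regime data obeying R_NL = C * rho**3 exactly;
# real data would carry noise and saturate at large rho (hopping regime).
rhos = [1e3, 2e3, 3e3, 5e3, 7e3]     # ohms, illustrative values only
C = 1e-9                              # arbitrary prefactor
r_nl = [C * r ** 3 for r in rhos]
exponent = loglog_slope(rhos, r_nl)   # recovers the cubic exponent
```

On perfect synthetic data the fitted exponent is 3; in the paper the analogous fit distinguishes the cubic (band-conduction) regime from the saturating (hopping) regime.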
University of Tokyo researchers have demonstrated an electrically-controllable valley current device that may pave the way to ultra-low-power "valleytronics" devices. On the atomic scale, matter behaves as both a particle and a wave. Electrons, therefore, have an associated wavelength that usually can have many different values. In crystalline systems however, certain wavelengths may be favored. Graphene, for example, has two favored wavelengths known as K and K' (K prime). This means that two electrons in graphene can have the same energy but different wavelengths - or, to put it another way, different "valley." Electronics use charge to represent information, but when charge flows through a material, some energy is dissipated as heat, a problem for all electronic devices in use today. However, if the same quantity of electrons in a channel flow in opposite directions, no net charge is transferred and no heat is dissipated - but in a normal electronic device this would mean that no information was passed either. A valleytronics device transmitting information using pure valley current, where electrons with the same valley flow in one direction, would not have this limitation, and offers a route to realizing extremely low power devices. Experimental studies on valley current have only recently started. Control of valley current in a graphene monolayer has been demonstrated, but only under very specific conditions and with limited control of conversion from charge current to valley current. In order for valley current to be a viable alternative to charge current-based modern electronics, it is necessary to control the conversion between charge current and valley current over a wide range at high temperatures. 
Now, Professor Seigo Tarucha's research group at the Department of Applied Physics at the Graduate School of Engineering has created an electrically controllable valley current device that converts conventional electrical current to valley current, passes it through a long (3.5 micron) channel, then converts the valley current back into charge current that can be detected as a measurable voltage. The research group used a graphene bilayer sandwiched between two insulator layers, with the whole device sandwiched between two conducting layers or 'gates', allowing for control of the valley. A vertical electric field (green arrows) breaks the symmetry of the bilayer graphene allowing for selective control of valley. A conventional, small electrical current (purple arrow) is converted into valley current via the valley Hall effect (VHE). (The electrons in the K valley, blue, travel to the right, while the electrons in the K′ valley, pink, travel to the left.) Pure valley current travels over a significant distance. At the other side of the device the valley current is converted back to charge current via the inverse valley Hall effect (IVHE) and is detected as a voltage. Credit: (c) 2015 Seigo Tarucha The group transferred valley current over a distance large enough to exclude other possible competing explanations for their results and were able to control the efficiency of valley current conversion over a wide range. The device also operated at temperatures far higher than expected. "We usually measure our devices at temperatures lower than the liquefaction point of helium (-268.95 C, just 4.2 K above absolute zero) to detect this type of phenomena," says Dr. Yamamoto, a member of the research group. "We were surprised that the signal could be detected even at -203.15 C (70 K). In the future, it may be possible to develop devices that can operate at room temperature." "Valley current, unlike charge current, is non-dissipative. 
This means that no energy is lost during the transfer of information," says Professor Tarucha. He continues, "With power consumption becoming a major issue in modern electronics, valley current based devices open up a new direction for future ultra-low-power consumption computing devices." An Atomic Force Microscope image of the valleytronics device. The bright orange area is bilayer graphene. The light blue area shows the area of the top gate. Current is injected from the right side of the device, and converted to valley current. The valley current is converted back to charge current and detected as a voltage signal. Credit: (c) 2015 Seigo Tarucha
10.1038/nphys3551
Medicine
Digital technologies and data privacy in the COVID-19 pandemic
Jobie Budd et al. Digital technologies in the public-health response to COVID-19, Nature Medicine (2020). DOI: 10.1038/s41591-020-1011-4 Journal information: Nature Medicine
http://dx.doi.org/10.1038/s41591-020-1011-4
https://medicalxpress.com/news/2020-08-digital-technologies-privacy-covid-pandemic.html
Abstract Digital technologies are being harnessed to support the public-health response to COVID-19 worldwide, including population surveillance, case identification, contact tracing and evaluation of interventions on the basis of mobility data and communication with the public. These rapid responses leverage billions of mobile phones, large online datasets, connected devices, relatively low-cost computing resources and advances in machine learning and natural language processing. This Review aims to capture the breadth of digital innovations for the public-health response to COVID-19 worldwide and their limitations, and barriers to their implementation, including legal, ethical and privacy barriers, as well as organizational and workforce barriers. The future of public health is likely to become increasingly digital, and we review the need for the alignment of international strategies for the regulation, evaluation and use of digital technologies to strengthen pandemic management, and future preparedness for COVID-19 and other infectious diseases. Main COVID-19, a previously unknown respiratory illness caused by the coronavirus SARS-CoV-2 1 , 2 , was declared a pandemic by the World Health Organization (WHO) on 11 March 2020, less than 3 months after cases were first detected. With now over 9.8 million confirmed cases and more than 495,000 deaths 3 recorded worldwide, there are grave concerns about the global health, societal and economic effects of this virus, particularly on vulnerable and disadvantaged populations, and in low- and middle-income countries with fragile health systems 4 , 5 . At the time of this writing, 7.1 billion people live in countries that have had substantial travel and social restrictions 6 . As with the control of outbreaks and pandemics before it, controlling the COVID-19 pandemic rests on the detection and containment of clusters of infection and the interruption of community transmission to mitigate the impact on human health. 
During the plague outbreak that affected 14th-century Europe, isolation of affected communities and restriction of population movement were used to avoid further spread 7 . These public-health measures for outbreak response remain relevant today, including surveillance, rapid case identification, interruption of community transmission and strong public communication. Monitoring how these measures are implemented and their impact on incidence and mortality is essential. All countries are required by the International Health Regulations (2005) 8 to have core capacity to ensure national preparedness for infectious hazards that have the potential to spread internationally. Research and development of new methods and technologies to strengthen these core capacities often occurs during outbreaks, when innovation is an absolute necessity 9 . During the outbreak of severe acute respiratory syndrome in 2003, Hong Kong identified clusters of disease through the use of electronic data systems 10 . During the Ebola outbreaks in West Africa in 2014–2016, mobile phone data were used to model travel patterns 11 , and hand-held sequencing devices permitted more-effective contact tracing and a better understanding of the dynamics of the outbreaks 12 . Similarly, digital technologies also have been deployed in the COVID-19 pandemic 13 , 14 (Table 1 ) to strengthen each of the four public-health measures noted above. Table 1 Digital technologies used in the COVID-19 pandemic Full size table The digital revolution has transformed many aspects of life. As of 2019, 67% of the global population had subscribed to mobile devices, of which 65% were smartphones—with the fastest growth in Sub-Saharan Africa 15 . In 2019, 204 billion apps were downloaded 16 , and as of January 2020, 3.8 billion people actively used social media 17 . Here we critically review how digital technologies are being harnessed for the public-health response to COVID-19 worldwide (Fig. 1 ). 
We discuss the breadth of innovations and their respective limitations. This systems-level approach is needed to inform how digital strategies can be incorporated into COVID-19-control strategies, and to help prepare for future epidemics. Fig. 1: The interconnected digital technologies used in the public-health response to COVID-19. Many approaches use a combination of digital technologies and may rely on telecommunications infrastructure and internet availability. Machine learning is shown as a separate branch for clarity, although it also underpins many of the other technologies. Much of the data generated from these technologies feeds into data dashboards. SMS, short message service. Full size image Digital epidemiological surveillance A core public-health function of outbreak management is understanding infection transmission in time, place and person, and identifying risk factors for the disease to guide effective interventions. A range of digital data sources are being used to enhance and interpret key epidemiological data gathered by public-health authorities for COVID-19. Online data sources for early disease detection Established population-surveillance systems typically rely on health-related data from laboratories, notifications of cases diagnosed by clinicians and syndromic surveillance networks. Syndromic surveillance networks are based on reports of clinical symptoms, such as ‘influenza-like illness’, rather than a laboratory diagnosis, from hospital and selected sentinel primary and secondary healthcare facilities, which agree to provide regular surveillance data of all cases. These sources, however, ultimately miss cases in which healthcare is not sought. In the UK, for example, where until recently only hospitalized patients and healthcare workers were routinely tested for COVID-19, confirmed cases represent an estimated 4.7% of symptomatic COVID-19 cases 18 . 
Identifying undetected cases would help elucidate the magnitude and characteristics of the outbreak 19 and reduce onward transmission. In the past two decades, data from online news sites, news-aggregation services, social networks, web searches and participatory longitudinal community cohorts have aimed to fill this gap. Data-aggregation systems, including ProMED-mail 20 , GPHIN 21 , HealthMap 22 and EIOS 23 , which use natural language processing and machine learning to process and filter online data, have been developed to provide epidemiological insight. These data sources are increasingly being integrated into the formal surveillance landscape 24 and have a role in COVID-19 surveillance. The WHO’s platform EPI-BRAIN brings together diverse datasets for infectious-disease emergency preparedness and response, including environmental and meteorological data 25 . Several systems have claimed detection of early disease reports for COVID-19, through the use of crowdsourced data and news reports, before the WHO released a statement about the outbreak 14 , 20 , 26 . The UK’s automatic syndromic surveillance system scans National Health Service digital records 27 to pick up clusters of a respiratory syndrome that could signal COVID-19. There is also interest in using online data to estimate the true community spread of infectious diseases 28 , 29 . Preliminary work on the epidemiological analysis of COVID-19-related social-media content has been reported 30 , 31 , 32 . Models for COVID-19 (ref. 33 ), building on previously established internet search algorithms for influenza 34 , are included in Public Health England’s weekly reports 35 . Crowdsourcing systems used to elucidate the true burden of disease are also supporting syndromic surveillance. InfluenzaNet gathers information about symptoms and compliance with social distancing from volunteers in several European countries through a weekly survey 36 . 
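As a minimal sketch of how a participatory system of this kind can turn weekly symptom reports into a syndromic signal (the case definition, participant identifiers and reports below are invented for illustration and are not InfluenzaNet's actual criteria):

```python
# Toy weekly participatory-surveillance aggregation.
# Each report: (participant_id, set of self-reported symptoms).
reports = [
    ("p1", {"fever", "cough"}),
    ("p2", {"headache"}),
    ("p3", {"fever", "cough", "fatigue"}),
    ("p4", set()),
]

def meets_case_definition(symptoms: set) -> bool:
    """Assumed syndromic case definition for the example: fever AND cough."""
    return {"fever", "cough"} <= symptoms

cases = sum(meets_case_definition(s) for _, s in reports)
rate = 100 * cases / len(reports)
print(f"{rate:.1f}% of respondents meet the syndromic case definition")  # 50.0%
```

Real systems go further, weighting for the non-representativeness of volunteer cohorts and following participants longitudinally week over week.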
Similar efforts exist in other countries, such as COVID Near You 37 in the USA, Canada and Mexico. The COVID-19 symptom-tracker app has been downloaded by 3.9 million people in the UK and USA 38 and is feeding into national surveillance. While rapid and informative, these systems can suffer from selection bias, over-interpretation of findings and lack of integration with official national surveillance systems that report established surveillance metrics. A fragmented approach has meant that there are 39 initiatives in the UK alone that are collecting symptoms from people in the community, with no centralized data collection (M. Edelstein, personal communication). Data-visualization tools for decision support Data dashboards are being used extensively in the pandemic, collating real-time public-health data, including confirmed cases, deaths and testing figures, to keep the public informed and support policymakers in refining interventions 39 , 40 , 41 . COVID-19 dashboards typically focus on time-series charts and geographic maps, ranging from region-level statistics to case-level coordinate data 40 , 42 . Several dashboards show wider responses to the pandemic, such as clinical trials 43 , policy and economic interventions 44 and responses to social-distancing directives 45 . Few dashboards include data on contact tracing or community surveillance from apps or their effectiveness. Challenges with the quality and consistency of data collection remain a concern. Lack of official standards and inconsistencies in government reporting of statistics across countries make global comparisons difficult. Up-to-date and accurate offline statistics from governments are also not always accessible. Novel visualization approaches are emerging, such as the NextStrain open repository, which presents viral sequence data to create a global map of the spread of infection 41 . This is enabled by open sharing of data and is based on open-source code. 
Such rapid sharing of data has not been witnessed in previous global outbreaks 46 . Rapid case identification Early and rapid case identification is crucial during a pandemic 47 for the isolation of cases and appropriate contacts in order to reduce onward spread and understand key risks and modes of transmission. Digital technologies can supplement clinical and laboratory notification, through the use of symptom-based case identification and widespread access to community testing and self-testing, and with automation and acceleration of reporting to public-health databases. Case identification by online symptom reporting, as seen in Singapore 48 and the UK 49 , is traditionally used for surveillance, but it now offers advice on isolation and referrals to further healthcare services, such as video assessments 50 and testing. These services can be rapidly implemented but must be linked to ongoing public-health surveillance and to action, such as isolation of cases and quarantining of contacts. Although this approach is suitable for symptomatic people, widespread testing of people and populations, as well as contact tracing, has a crucial role in case identification, as an estimated 80% of COVID-19 cases are mild or asymptomatic 19 . Sensors, including thermal imaging cameras and infrared sensors, are being deployed to identify potential cases on the basis of febrile symptoms (for example, at airports). The large numbers of false-positive and false-negative results mean that this is unlikely to have a substantial effect beyond increasing awareness 51 , 52 . Wearable technologies are also being explored for monitoring COVID-19 in populations 53 . There has been increasing interest in decentralized, digitally connected rapid diagnostic tests to widen access to testing, increase capacity and ease the strain on healthcare systems and diagnostic laboratories 54 , 55 , 56 . 
Several point-of-care COVID-19 PCR tests are in development 57 , 58 ; however, their use is still limited to healthcare settings. Drive-through testing facilities and self-swab kits have widened access to testing. There are inherent delays between sampling, sending samples to centralized labs, waiting for results and follow-up. By contrast, point-of-care rapid diagnostic antibody tests could be implemented in home or community or social-care settings and would give results within minutes. Linking to smartphones with automatic readout through the use of image processing and machine-learning methods 59 , 60 could allow mass testing to be linked with geospatial and patient information rapidly reported to both clinical systems and public-health systems and could speed up results. For this to work effectively, standardization of data and integration of data into electronic patient records are required. Identifying past infections by antibody testing is also central to population-level surveillance and evaluating the efficacy of interventions such as social distancing. So far, point-of-care serology tests in particular have variable performance, and in light of the possibility that antibody responses may be short-lived, how such testing can assist in patient management remains unclear 61 , 62 , 63 . Some have argued that seropositive workers who must remain active in the economy could receive a digital ‘immunity passport’ to demonstrate protection from infection, although such a strategy is fraught with operational and clinical uncertainty 63 , 64 . Machine-learning algorithms are also being developed for case identification by automated differentiation of COVID-19 from community-acquired pneumonia through the use of hospital chest scans by computerized tomography 65 , 66 , 67 . Further evaluation of their utility is recommended 68 , 69 . 
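The smartphone-readout idea can be reduced, in caricature, to detecting a darker test band against the strip background. The sketch below is a toy threshold rule with invented intensity values and thresholds; deployed systems use calibrated image processing and machine-learning classifiers rather than a fixed cutoff:

```python
# Toy readout: classify a lateral-flow test line from a grayscale intensity
# profile sampled across the test-line region (0 = black, 255 = white).
def read_test_line(pixel_intensities, background=200, threshold=30):
    """A band sufficiently darker than the background signals a positive line."""
    darkest = min(pixel_intensities)
    return "positive" if background - darkest >= threshold else "negative"

print(read_test_line([198, 196, 150, 148, 197]))  # positive
print(read_test_line([201, 199, 198, 200, 202]))  # negative
```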
Interrupting community transmission After case identification and isolation, rapid tracing and quarantining of contacts are needed to prevent further transmission 70 . In areas of high transmission, the implementation and monitoring of these interventions are needed at a scale that is becoming increasingly unfeasible or at least challenging by traditional means 71 . Digital contact tracing Digital contact tracing automates tracing on a scale and speed not easily replicable without digital tools 71 . It reduces reliance on human recall, particularly in densely populated areas with mobile populations. In the COVID-19 pandemic, digital contact-tracing apps have been developed for use in several countries; these apps rely on approaches and technologies not previously tried on this scale and are controversial in terms of privacy. Evaluating their accuracy and effectiveness is essential. Early digital tracing initiatives raised concerns about privacy 72 . In South Korea, contacts of confirmed cases were traced through the use of linked location, surveillance and transaction data 73 . In China, the AliPay HealthCode app automatically detected contacts by concurrent location and automated the enforcement of strict quarantine measures by limiting the transactions permitted for users deemed to be high risk 74 , 75 . More-recent voluntary contact-tracing apps have been launched in collaboration with governments; these collect location data by global positioning system (GPS) or cellular networks 76 , proximity data by Bluetooth 72 , 77 or a combination of those 78 , 79 . Concerns have been raised about centralized systems (Fig. 2 ) and GPS tracking. Norway halted the use of and data collection from its Smittestopp app after the country’s data-protection watchdog objected to the app’s collection of location data as ‘disproportionate to the task’, and it recommended a Bluetooth-only approach 80 . 
Several international frameworks with varying levels of privacy preservation are emerging, including Decentralized Privacy-Preserving Proximity Tracing 81 , the Pan-European Privacy-Preserving Proximity Tracing initiative 82 and the joint Google–Apple framework 83 . Fig. 2: Contact tracing for COVID-19 with Bluetooth-enabled smartphone apps. Proximity-detecting contact-tracing apps use Bluetooth signals emitting from nearby devices to record contact events. Centralized apps share information about contacts and contact events with a central server. The centralized TraceTogether app 72 uploads information when a user reports testing positive for COVID-19. Some centralized Bluetooth-enabled contact-tracing apps upload the contact graph for all users 148 . Decentralized apps, such as SwissCovid 149 , upload only an anonymous identifier of the user who reports testing positive for COVID-19. This identifier is then broadcast to all users of the app, which compares the identifier with on-phone contact-event records. Full size image A key limitation of contact-tracing apps is that they require a large proportion of the population to use the app and comply with advice for them to be effective in interrupting community transmission (effective reproduction number (R), <1) 71 . Placing this in perspective, national uptake of the TraceTogether app in Singapore had reached only 30% as of June 2020 (ref. 72 ). Adoption is also limited by smartphone ownership, user trust, usability and handset compatibility. Key practical issues remain, such as understanding which contacts are deemed to be close enough for transmission and when exposure time is considered long enough to trigger an alert. System effectiveness in identifying transmission events is not well described, and it is therefore arguable that human interpretation is still important. 
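The decentralized matching step described above can be sketched in a few lines. This is a toy illustration, not the actual DP-3T or Google–Apple exposure-notification protocol; the identifier derivation, the single daily identifier and the key handling here are invented for the example:

```python
import hashlib
import secrets

def daily_identifier(secret_key: bytes, day: int) -> str:
    """Derive a rotating, anonymous daily identifier from a device's secret key."""
    return hashlib.sha256(secret_key + day.to_bytes(4, "big")).hexdigest()

# Two phones, each with its own secret key that never leaves the device.
alice_key = secrets.token_bytes(32)
bob_key = secrets.token_bytes(32)

# Bob's phone logs identifiers it heard over Bluetooth during the day.
bob_contact_log = {daily_identifier(alice_key, day=100)}

# When Alice reports a positive test, only her identifiers are published.
published = [daily_identifier(alice_key, day=100)]

# Matching happens on-device: Bob's phone scans the published list locally,
# so the central server never learns who met whom.
exposed = any(ident in bob_contact_log for ident in published)
print(exposed)  # True
```

The design choice this illustrates is that the server only ever sees identifiers of users who report a positive test; the contact graph itself stays on the handsets.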
Evaluating interventions through the use of mobility data Aggregated location data collected by smartphones via GPS, cellular network and Wi-Fi can monitor real-time population flows 84 , identify potential transmission hotspots and give insight into the effectiveness of public-health interventions such as travel restrictions on actual human behavior. Access to mobility data is a major challenge, and these approaches have raised ethical and privacy concerns 85 . Mobility data with privacy-preserving aggregation steps have recently been made available by several technology and telecom companies for the purposes of COVID-19 control; however, the datasets are limited and there is no long-term commitment in place for data sharing. Daily aggregated origin-destination data from Baidu 86 are being used to evaluate the effect of travel restrictions 87 and quarantine measures 88 on COVID-19 transmission in China. Analysis of the location data of Italian smartphone users estimated a reduction of 50% in the total trips between Italian provinces in the week after the announcement of lockdown on 12 March 2020 (ref. 89 ). Google has released weekly mobility reports with sub-national granularity, including breakdown by journey type and destination (such as workplaces and parks), and has made their dataset publicly downloadable 90 . Apple has similarly released a dataset with daily figures for mobility and assumed method of transport 91 . There is no standardization of these datasets between providers, however, and not all countries or regions are included in these datasets. Assessing local differences in mobility and contact patterns may be critical for predicting the heterogeneity of transmission rates between different communities and in different regions in which household size and age-stratified contact patterns may differ. 
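The kind of aggregate trip-reduction estimate cited for Italy can be computed directly from anonymized origin-destination counts; the provinces and counts below are invented to show the arithmetic, not actual data:

```python
# Hypothetical aggregated origin-destination trip counts (origin, destination).
trips_before = {("Milan", "Rome"): 12000, ("Rome", "Naples"): 8000}
trips_after  = {("Milan", "Rome"): 5400,  ("Rome", "Naples"): 4600}

total_before = sum(trips_before.values())
total_after = sum(trips_after.values())
reduction = 100 * (total_before - total_after) / total_before
print(f"{reduction:.0f}% fewer trips after lockdown")  # 50% fewer trips after lockdown
```

Because only route-level totals enter the calculation, estimates like this can be produced from privacy-preserving aggregates without access to individual trajectories.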
This contextual information can provide insight into the effect of interventions to slow transmission, including the impact of handwashing 92 , social distancing and school closures 93 . The monitoring of social-distancing measures could also be used to forecast health-system demands 94 and will be important in assessing the easing of restrictions when appropriate. Concerns have been raised over breaches of civil liberties and privacy when people are tracked to monitor adherence to quarantine and social distancing, including the use of wearable devices 95 and drones 96 . Public communication: informing populations Effective implementation of interventions during a pandemic relies on public education and cooperation, supported by an appropriate communications strategy that includes active community participation to ensure public trust. With 4.1 billion people accessing the internet 97 and 5.2 billion unique mobile subscribers 15 , targeted communication through digital platforms has the potential to rapidly reach billions and encourage community mobilization (Fig. 3 ). Key challenges persist, including the rise of potentially harmful misinformation 98 , 99 and digital inequalities 100 (discussed below). Fig. 3: The global reach of mobile phones to areas affected by COVID-19. Mobile subscriptions per 100 people (blue; International Telecoms Union 150 , 2018) and reported COVID-19 cases by country (red; WHO 151 , 8 June 2020). COVID-19 is a global pandemic, yet some countries may be better resourced than others to respond with digital health interventions. There may be intra-country inequalities in mobile subscription rates. Case detection and reporting practices differ among countries, with variable under-reporting of true cumulative case counts. 
Full size image Online data and social media have had an ongoing, important role in public communication 101 since the first reports of an unusual influenza-like illness resistant to conventional treatment methods emerged in China 102 . Public-health organizations and technology companies are stepping up efforts to mitigate the spread of misinformation 103 , 104 and to prioritize trusted news sites; for example, Google’s SOS alert intervention 105 prioritizes the WHO and other trusted sources at the top of search results. There are few reports about the impact of these interventions 106 , 107 and difficulties in defining misinformation 108 . A United Nations study found that 86% of member states had placed COVID-19 information on national websites by early April 2020 (ref. 109 ), and many are using text messaging to reach populations who do not have access to the internet. Chat-bots are also providing information to reduce the burden on non-emergency health-advice call centers 110 , and clinical practice is being transformed by the rapid adoption of remote health-service delivery, including telemedicine, especially in primary care 50 . Digital communication platforms are also supporting adherence to social-distancing measures. Video conferencing is allowing people to work and attend classes from home 111 , online services are supporting mental health 112 and digital platforms are enabling community-mobilization efforts by providing ways to assist those in need 113 . Nevertheless, the security and privacy of freely available communication platforms remains a concern, particularly for the flow of confidential healthcare information. Future directions Digital technologies join a long line of public-health innovations that have been at the heart of disease-prevention-and-containment strategies for centuries. 
Public health has been slower to take up digital innovations than have other sectors, with the first WHO guidelines on digital health interventions for health-system strengthening published in 2019 (refs. 114 , 115 ). The unprecedented humanitarian and economic needs presented by COVID-19 are driving the development and adoption of new digital technologies at scale and speed. We have highlighted the potential of digital technologies to support epidemiological intelligence with online datasets, identify cases and clusters of infections, rapidly trace contacts, monitor travel patterns during lockdown and enable public-health messaging at scale. Barriers to the widespread use of digital solutions remain. Implementation Digital technologies cannot operate in isolation and need to be integrated into existing public healthcare systems 116 . For example, South Korea and Singapore successfully introduced contact-tracing apps to support large teams of manual contact tracers as one of many measures, including strict isolation of cases and quarantine 73 . Digital data sources, like any data source, need to be integrated and interoperable, such as with electronic patient records. Analysis and use of these data will depend on the digital infrastructure and readiness of public-health systems, spanning secondary, primary and social-care systems. The logistics of delivery to ensure population impact are often given too little attention and can lead to over-focus on the individual technology and not its effective operation in a system. The coordination of interventions is also a challenge, with multiple symptom-reporting sites in a single country, which risks fragmentation. 
Looking ahead, there is a need for a systems-level approach for the vision of the ideal fit-for-purpose digital public-health system 117 that links symptom-tracking apps, rapid testing and case isolation, contact tracing and monitoring of aggregated population-mobility levels, access to care and long-term follow-up and monitoring, with public communication (Fig. 4 ). These types of integrated online care pathways are not new concepts, having been shown to be highly acceptable and feasible for other infectious diseases, such as chlamydia 118 . Fig. 4: The flow of information in a digitally enabled and integrated public-health system during an infectious-disease outbreak. Digital data are created by the public, both at the population level and at the individual level, for epidemiological intelligence and public-health interventions, and for the support of clinical case management. They are also informed by conventional surveillance via laboratory and clinical notification. This feeds into public-health decision-making and communication with the public through digital channels. Other relevant sources of information include population, demographic, economic, social, transport, weather and environmental data. Full size image Data sharing and data quality Big-data and artificial-intelligence approaches are only as good as the empirical datasets that are put into them, yet detailed public-health and private datasets are often inaccessible, due to privacy and security concerns, and often lack standardized formats or are incomplete. Researchers are calling for technology and telecom companies to share their data in a ‘proportionate, ethical and privacy-preserving manner’ 85 , 119 , 120 , often citing a moral imperative for these companies to contribute where there is justification for data use. Some companies are making subsets of aggregated data available 86 , 90 , 91 , 121 , 122 . 
These data are not consistent and are not provided within the same timeframe, and there is no standard format or long-term commitment. Researcher-led international collaborations have aimed to aggregate multiple international data sources of voluntarily reported information 41 , 123 . Equally, governments should provide much greater transparency in their datasets, including epidemiological data and risk factors for acquisition, with downloadable formats for researchers. Several governments have made available de-personalized individual-level datasets for research purposes 124 , 125 , although this raises potential privacy concerns. Open-source data, code and scientific methods are being rapidly and widely shared online, including increased use of preprints, which speed up data availability but lack peer review 126 . Evidence of effectiveness and regulation Evidence of the effectiveness of any new technology is needed for wider adoption, but as the current pandemic is ongoing, many digital technologies have not yet been peer-reviewed, been integrated into public-health systems, undergone rigorous testing 127 or been evaluated by digital health-evidence frameworks, such as the evidence standards framework for digital health technologies of the National Institute for Health and Care Excellence 128 . Contact-tracing apps have been launched in at least 40 countries 129 , but there is currently no evidence of the effectiveness of these apps 130 , such as the yield of identified cases and contacts, costs, compliance with advice, empirical estimates of a reduction in the R value or a comparison with traditional methods. Although it is challenging, due to the urgency of the pandemic, evaluation of the effectiveness of interventions is essential. Researchers, companies and governments should publish the effectiveness of their technologies in peer-reviewed journals and through appropriate clinical evaluation. 
There is an urgent need for coordinated international digital public-health strategies, but these have been slow to emerge. On 22 March 2020, the WHO released a draft of its global strategy on digital health for 2020–2024 (ref. 131 ). On 8 April, the European Union called for a pan-European approach on the use of apps and mobile data for COVID-19 82 , 132 . Legal, ethical and privacy concerns Highly granular or personal data for public-health surveillance raise legal concerns 133 , ethical concerns 134 , 135 and security and privacy concerns 136 . Not all digital interventions have allowed consensual adoption or have made the option of consent for specific purposes explicit 75 , and some have been used to enforce measures as well as to monitor them. In many cases, widespread adoption is related to effectiveness, which highlights the need for public trust and engagement. There is concern that emergency measures set precedent and may remain in place beyond the emergency, which will lead to the ongoing collection of information about private citizens with no emergency-related purpose 137 , 138 . All systems will need to be ‘proofed’ against invasions of privacy and will need to comply with appropriate legal, ethical and clinical governance 75 . Data can be shared under a legal contract for a well-defined purpose and time, with requirements for independent audit 139 to ensure data are not used for purposes outside of the pandemic. Dynamic consent processes could also allow users to share their data, and privacy-preserving technologies, such as differential privacy and homomorphic encryption, could ensure that access is possible only for specific purposes and is available in a tamper-proof manner 13 , 140 to allow auditing. Inequalities and the digital divide In 2018, the World Health Assembly Resolution on Digital Health recognized the value of digital technologies in advancing universal health coverage and the Sustainable Development Goals. 
Although trends are narrowing, today there remains a digital divide, and 51% of the world’s population does not subscribe to the mobile internet 15 . The lack of access to mobile communications is seen in low- and middle-income countries, although people with lower socio-economic status in high-income countries are also affected 141 . The Pew Research Center reported large disparities between people 18–29 years of age and those over 50 years of age in their mobile-communication access 142 . There are also reports of restricted mobile internet access, such as in areas of Myanmar, which have left some populations unaware of the pandemic 143 . This outbreak has also disproportionately affected some communities, such as Black and minority ethnic groups, more than others 144 . It is therefore essential to develop tools and messaging that are accessible 100 and can be tailored to specific risks, languages and cultural contexts. Workforce and organizational barriers The spread of the COVID-19 pandemic has exposed the need for government leadership to accelerate the evaluation and adoption of digital technologies. Successful implementation strategies will require carefully accelerated and coordinated policies, with collaboration among multiple areas of governments, regulators, companies, non-governmental organizations and patient groups. Public health has long been under-funded compared with the funding of other areas of health 145 . Long-term changes will necessitate investment in national and international digital centers of excellence, with the necessary balance of partners and pre-agreed access to digital datasets. A substantial investment in workforce education and skills is essential for growing digital public-health leadership 146 . Conclusion The COVID-19 pandemic is ongoing, and it is too early to fully quantify the added value of digital technologies to the pandemic response. 
While digital technologies offer tools for supporting a pandemic response, they are not a silver bullet. The emerging consensus is that they have an important role in a comprehensive response to outbreaks and pandemics, complementing conventional public-health measures, and thereby contribute to reducing the human and economic impact of COVID-19. Cost-effectiveness and sustainability will require systems-level approaches to building digital online care pathways that link rapid and widespread testing with digital symptom checkers, contact tracing, epidemiological intelligence and long-term clinical follow up. The COVID-19 pandemic has confirmed not only the need for data sharing but also the need for rigorous evaluation and ethical frameworks with community participation to evolve alongside the emerging field of mobile and digital healthcare. Building public trust through strong communication strategies across all digital channels and demonstrating a commitment to proportionate privacy are imperative 147 . The future of public health is likely to be increasingly digital, and recognizing the importance of digital technology in this field and in pandemic preparedness planning has become urgent. Key stakeholders in the digital field, such as technology companies, should be long-term partners in preparedness rather than being partners only when emergencies are ongoing. Viruses know no borders and, increasingly, neither do digital technologies and data. There is an urgent need for alignment of international strategies for the regulation, evaluation and use of digital technologies to strengthen pandemic management and future preparedness for COVID-19 and other infectious diseases.
Digital technologies have an important role to play in responding to future pandemics, a new study in the journal Nature Medicine reports. In the study, researchers from the i-sense project, led by Professor Rachel McKendry from UCL, reviewed how digital technologies have been mobilized in response to COVID-19. The associated concerns with privacy, the effectiveness of such technologies and how they can be used in future pandemics were also examined by the team, which includes Professor Vince Emery from the University of Surrey. Researchers found that, to get the most out of digital technologies, they should be developed collaboratively with governments and healthcare providers, ensuring they meet public health needs and ethical standards. It was recommended that key stakeholders in the digital field, such as technology companies, should be long-term partners in preparedness planning rather than being partners only when emergencies are ongoing. The benefits of digital technologies, such as access to faster and more widespread communication through social media platforms, TV briefings and text message updates, were noted by researchers. The efforts of some technology companies, such as Google, which has been prioritizing messaging from trusted sources, including the WHO, in its search responses, were also recognized. David Heymann, Professor of Infectious Disease Epidemiology at the London School of Hygiene & Tropical Medicine, said: "We need to ensure new digital technologies go through rigorous evaluation to identify those technologies that prove to be effective so that they can add to our armamentarium for outbreak control, adhere to privacy and ethics frameworks, and are built into online pathways developed in collaboration with end-users." However, researchers did find some problems with the use of digital technologies during pandemics. 
During the outbreak of COVID-19, many technologies have been adapted and developed on a scale never seen before, including new apps and data dashboards using anonymised and aggregated data to help inform public health interventions. This has led to concerns about civil liberties and privacy. Chief scientific advisor, Department for International Trade, Dr. Mike Short CBE, said: "Although times of emergency may call for different data access requirements, any data used for the pandemic response should not be misused beyond this purpose, and systems need to be proofed against invasion of privacy and comply with relevant governance." Researchers also noted that the use of data to inform outbreak response should take into account the digital divide across the globe: although 67 percent of the global population subscribe to a mobile device, 51 percent of the world's population are not mobile internet subscribers. As many of these interventions and surveillance methods rely on connectivity, researchers found many communities may be left behind or missed from statistics. Professor Vince Emery, emeritus professor of translational virology at the University of Surrey, said: "This review provides further impetus to the deployment of digital technologies to sense pandemics and with the roll-out of 5G new exciting possibilities will develop and contribute to controlling major public health emergencies. Viruses know no borders and, increasingly, neither do digital technologies and data so it is important they are utilized to their full potential."
10.1038/s41591-020-1011-4
Medicine
How micro-circuits in the brain regulate fear
Nigel Whittle et al, Central amygdala micro-circuits mediate fear extinction, Nature Communications (2021). DOI: 10.1038/s41467-021-24068-x Journal information: Nature Communications
http://dx.doi.org/10.1038/s41467-021-24068-x
https://medicalxpress.com/news/2021-07-micro-circuits-brain.html
Abstract Fear extinction is an adaptive process whereby defensive responses are attenuated following repeated experience of prior fear-related stimuli without harm. The formation of extinction memories involves interactions between various corticolimbic structures, resulting in reduced central amygdala (CEA) output. Recent studies show, however, the CEA is not merely an output relay of fear responses but contains multiple neuronal subpopulations that interact to calibrate levels of fear responding. Here, by integrating behavioural, in vivo electrophysiological, anatomical and optogenetic approaches in mice we demonstrate that fear extinction produces reversible, stimulus- and context-specific changes in neuronal responses to conditioned stimuli in functionally and genetically defined cell types in the lateral (CEl) and medial (CEm) CEA. Moreover, we show these alterations are absent when extinction is deficient and that selective silencing of protein kinase C delta-expressing (PKCδ) CEl neurons impairs fear extinction. Our findings identify CEA inhibitory microcircuits that act as critical elements within the brain networks mediating fear extinction. Introduction The survival of animals depends on their ability to mobilize appropriate defensive behaviours to imminent threats 1 . Yet, animals also need to be able to flexibly adapt to changes in threat contingencies by inhibiting fear responses when threat-related stimuli no longer associate with aversive outcomes, in a process called fear extinction 2 , 3 , 4 . In a typical experimental procedure, an association between conditioned stimuli (CS, a tone or light) and unconditioned stimuli (US, e.g. a foot shock) is first formed (fear conditioning) and then subsequently updated by repeated presentations of the CS in the absence of the US (fear extinction). Fear extinction is thought of as a new learning process wherein animals learn that the CS is no longer predictive of the US 4 . 
Thus, fear extinction does not reflect the mere erasure of the conditioned fear memory; indeed fear can be spontaneously recovered after fear extinction, or triggered by exposure to the US or a new context 5 , 6 , 7 . Current models posit that neural circuits and cell assemblies for fear and extinction memories compete with one another, in a context-dependent manner, to determine the degree of fear responding to the CS 4 , 8 , 9 . In recent years, there have been major advances in delineating the neural circuitry underlying fear conditioning and extinction 10 , 11 , yet key elements of this circuitry remain to be elucidated. Of particular note, the central nucleus of the amygdala (CEA) has long been ascribed an essential role in the expression of conditioned fear responses 1 . However, detailed dissection of CEA circuitry, using in vivo and ex vivo recordings from functionally and/or genetically defined cell types, has recently challenged the view that the CEA is merely an output relay of fear responses. Instead, this work indicates that the CEA contains multiple anatomically, molecularly and functionally defined neuronal subpopulations that interact to calibrate levels of fear responding 12 , 13 , 14 . CEA output neurons located in both the lateral (CEl) and the medial (CEm) subdivision of the CEA project to downstream targets that mediate different components of conditioned defensive behaviours, such as freezing or flight 15 , 16 , 17 . Fear conditioning potentiates excitatory input onto CEl projections to the ventrolateral periaqueductal grey (vlPAG) 18 and CEm output neurons exhibit increased CS responses upon fear conditioning 15 . In turn, the activity of CEm output neurons is thought to be controlled by excitatory glutamatergic afferents from auditory thalamus and the basal amygdala (BA) 19 .
Crucially, however, CEm output neurons are also subject to inhibitory control from GABAergic neurons located in the neighbouring intercalated cell clusters (ITCs) and a subset of neurons in the CEl expressing protein-kinase C delta (PKCδ) 15 , 20 , 21 . Thus, following fear conditioning, three functional neuronal subpopulations emerge in the CEl: (1) CS ‘non-responsive’ neurons, (2) CElon neurons that are excited by the CS following fear conditioning and overlap in part with a somatostatin (SST)-expressing population 18 , 22 , 23 and (3) CEloff neurons, which acquire an inhibitory response to the CS and partly overlap with PKCδ neurons 15 . Interestingly, CElon and CEloff neurons can inhibit each other 15 , and SST neurons can inhibit PKCδ neurons 22 , which could result in a switch-like disinhibition of output neurons in CEm or in CEl 12 . Together, these earlier findings show that there is a layer of processing and plasticity within the CEA that can serve to promote and limit fear responding 24 , 25 . This raises the intriguing possibility that the same CEA circuitry could be ideally positioned to mediate fear extinction and act as a substrate for the reductions in fear responding that occur with extinction. The major aim of the current study was to test this hypothesis using a combination of behavioural, in vivo electrophysiological, optogenetic and molecular approaches. Our findings demonstrate that microcircuits within the CEA are crucial for fear extinction.

Results

Neuronal correlates of fear extinction in subpopulations of CEA neurons

To first identify neuronal correlates of fear extinction in CEA circuits, we submitted freely-moving mice ( n = 27, C57BL/6J, hereafter B6) to a fear-conditioning and extinction procedure while chronically recording single-unit activity in CEA (Figs. S1 , S2 ; Supplementary Tables 1 – 4 ; see ‘Methods’) 15 .
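The single-unit analyses that follow summarise each unit's CS response as a z-scored, trial-averaged peri-stimulus time histogram (PSTH). A minimal dependency-free sketch of that normalisation; the bin scheme and baseline window here are illustrative assumptions, not the paper's Methods parameters:

```python
from statistics import mean, stdev

def zscore_psth(trial_counts, n_baseline_bins):
    """Z-score a peri-stimulus time histogram against its pre-CS baseline.

    trial_counts: list of per-trial lists of binned spike counts, where the
    first n_baseline_bins bins precede CS onset. Bin size and baseline window
    are illustrative, not the paper's exact parameters.
    """
    # Average across trials to get the raw PSTH.
    n_bins = len(trial_counts[0])
    psth = [mean(trial[b] for trial in trial_counts) for b in range(n_bins)]
    # Baseline mean and (sample) standard deviation from the pre-CS bins.
    baseline = psth[:n_baseline_bins]
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        raise ValueError("flat baseline; cannot z-score")
    return [(x - mu) / sigma for x in psth]
```

On this convention, a unit whose post-CS bins sit well above baseline yields a positive z-score (CElon-like excitation), and one inhibited by the CS yields a negative z-score (CEloff-like).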
Following fear conditioning, mice exhibited a selective increase in conditioned freezing to the CS that was reversed to pre-conditioning levels by the end of fear extinction learning (Fig. 1a, b ; Fig. S3 ). Fig. 1: Neuronal correlates of fear extinction in subpopulations of CEA neurons. a Behavioural protocol. FC: fear conditioning. CS: conditioned stimuli. b Behavioural data. B6 mice: n = 27; freezing, habituation, no CS: 19.8 ± 2.6%, CS: 26.1 ± 3.3%, beginning of extinction 1, no CS: 18.5 ± 2.3%, CS: 61.9 ± 4.6%, end of extinction 2, no CS: 25.3 ± 3.1%, CS: 34.1 ± 3.4%, blocks (averages) of 4 CSs. One-way repeated-measures ANOVA F (5,130) = 30.8, p < 0.001, followed by post hoc Bonferroni t -test vs. CS group during habituation, p < 0.001. Bar plots are expressed as means ± SEM. Circles are freezing values of individual mice. c Raster plots and corresponding spike waveforms of a representative CEm unit (top). Normalized and averaged population peri-stimulus time histograms (bottom). CEm neurons: n = 15 units from 5 mice; z -score habituation: −0.11 ± 0.45, beginning of extinction 1: 4.21 ± 1.75, end of extinction 2: 1.24 ± 0.48, blocks of 4 CSs. One-way repeated-measures ANOVA F (2,28) = 3.9, p = 0.033 followed by post hoc Bonferroni t -test vs. during habituation, p = 0.023. d Raster plots and corresponding spike waveforms of a representative CEloff unit (top). Normalized and averaged population peri-stimulus time histograms (bottom). CEloff neurons: n = 33 units from 18 mice; z -score, habituation: 0.28 ± 0.33, beginning of extinction 1: −1.53 ± 0.28, end of extinction 2: −0.46 ± 0.34, blocks of 4 CSs. One-way repeated-measures ANOVA F (2,64) = 8.4, p < 0.001 followed by post hoc Bonferroni t -test vs. during habituation, p < 0.001. e Raster plots and corresponding spike waveforms of a representative CElon unit (top). Normalized and averaged population peri-stimulus time histograms (bottom). 
CElon neurons: n = 55 units from 15 mice; z -score, habituation: 1.30 ± 0.30, beginning of extinction 1: 2.54 ± 0.43, end of extinction 2: 1.40 ± 0.30, blocks of 4 CSs. One-way repeated-measures ANOVA F (2,108) = 5.3, p = 0.006 followed by post hoc Bonferroni t -test vs. during habituation, p = 0.008. All individual neurons of each CEA population had significant z -score values upon CS presentation (first 4 CSs during extinction 1). Source data are provided as a Source data file. Examination of the CS-related activity of CEA single units during testing revealed three subpopulations exhibiting distinct patterns of responding that were reversed from fear conditioning to extinction. Fear conditioning induced differential conditioned responses in CEm and CEl neurons, as previously reported 15 , 17 , such that CEm neurons increased their phasic CS responses (Fig. 1c ), while CEl neurons exhibited either an inhibitory response (Fig. 1d ) or an increase (Fig. 1e ) in conditioned responses, consistent with the activity of CEloff and CElon neurons, respectively (Figs. S4, S5 ). A larger proportion of CElon neurons (64%, 35 out of 55 neurons) exhibited cue-related responses during habituation, as compared to CEloff (24%, 8 out of 33 neurons) and CEm neurons (27%, 4 out of 15 neurons). Strikingly, fear conditioning-related changes in the CS responses of all three neuronal subpopulations were reversed following fear extinction, i.e., elevated CS-related activity in CEm and CElon neurons was attenuated and the inhibited CS-related activity of CEloff neurons was diminished (Fig. 1c–e ). These data suggest that changes in CS-related phasic activity within the CEA inhibitory microcircuits signal the extinction of fear responses.

CEA subpopulation activity tracks extinction-related changes in the expression of fear

How specific is the reversal of CEA subpopulation activity during fear extinction?
Does it correlate with behavioural changes and emotional values of conditioned stimuli, or does it simply reflect a non-associative process such as stimulus habituation or desensitisation? To test for the selectivity of fear extinction-induced reversal of neuronal responses in CEA microcircuits, we trained mice ( n = 10) using a discriminative fear extinction paradigm (Fig. 2a ). In this task, mice were conditioned to two different CSs followed by the extinction of just one of these CSs. Immediately after the extinction, animals were exposed to the non-extinguished CS—which resulted in an instantaneous switch (low to high) in fear behaviour (Fig. 2b ). Fig. 2: CEA subpopulation activity tracks extinction-related changes in the expression of conditioned freezing. a Behavioural protocol. FC: fear conditioning. CS: conditioned stimuli. b Freezing behaviour. B6 mice: n = 10; freezing, habituation, CS1: 23.8 ± 5.9%, CS2: 19.5.1 ± 3.3%, beginning of diff. extinction 1, CS1: 46.4 ± 5.4%; end of diff. extinction 2: CS1: 24.5 ± 4.3%, CS2: 48.1 ± 7.6%, blocks of 4 CSs. One-way repeated-measures ANOVA F (4,36) = 8.6, p < 0.001, followed by post hoc Bonferroni t -test vs. CS2 block during habituation, p < 0.001. Bar plots are expressed as means ± SEM. Circles are freezing values of individual mice. c Raster plots and corresponding spike waveforms of a representative CEm unit (top). Normalized and averaged population peri-stimulus time histograms (bottom). CEm neurons: n = 6 units from 3 mice; z -score, habituation, CS1: −0.21 ± 0.44, CS2: 0.64 ± 0.38, beginning of diff. extinction 1, CS1: 2.66 ± 0.87; end of diff. extinction 2: CS1: 0.70 ± 0.42, CS2: 3.79 ± 1.08, blocks of 4 CSs. One-way repeated-measures ANOVA F (4,20) = 5.2, p = 0.005 followed by post hoc Bonferroni t -test vs. CS1 block during habituation, p < 0.05. d Raster plots and corresponding spike waveforms of a representative CEloff unit (top). 
Normalized and averaged population peri-stimulus time histograms (bottom). CEloff neurons: n = 7 units from 5 mice; z -score, habituation, CS1: 0.32 ± 0.26, CS2: −0.24 ± 0.29, beginning of diff. extinction 1, CS1: −1.33 ± 0.47; end of diff. extinction 2: CS1: −0.51 ± 0.72, CS2: −1.37 ± 0.35, blocks of 4 CSs. One-way repeated-measures ANOVA F (4,24) = 3.4, p = 0.023 followed by post hoc Bonferroni t -test vs. CS1 block during habituation, p < 0.05. e Raster plots and corresponding spike waveforms of a representative CElon unit (top). Normalized and averaged population peri-stimulus time histograms (bottom). CElon neurons: n = 12 units from 5 mice; z -score, habituation: CS1: −0.40 ± 0.21, CS2: 0.41 ± 0.23, beginning of diff. extinction 1: CS1: 1.60 ± 0.39; end of diff. extinction 2: CS1: 0.46 ± 0.27, CS2: 1.92 ± 0.53, blocks of 4 CSs. One-way repeated-measures ANOVA F (4,44) = 6.0, p < 0.001 followed by post hoc Bonferroni t -test vs. CS1 block during habituation, p < 0.05. All individual neurons of each CEA population had significant z -score values upon CS presentation (first 4 CSs during diff. extinction 1). Source data are provided as a Source data file. We reasoned that if the neural responses in CEA subpopulations correlate with fear extinction per se, rather than with non-associative or time-related processes, we should expect a change in neural activity paralleling the behavioural switch from the extinguished to the non-extinguished CS. In line with this prediction, we observed that the switch in behaviour corresponded to an immediate recovery of CS-induced responses in CEm, CEloff and CElon neurons to levels of activity evident after fear conditioning (Fig. 2c–e ). Thus, CS-related responses in these CEA subpopulations track extinction-induced changes in the expression of fear responses to the CS.
Context-driven fear renewal reverses extinction-related CEA subpopulation activity

Fear extinction does not simply reflect forgetting of the CS-US associations, but a new learning process subject to contextual modulation 4 , 26 . This sensitivity to the context provided us with an opportunity to test whether changes in CS-related activity in CEA subpopulations are sensitive to contextual changes. One week following fear extinction learning, memory for the extinguished CS was observed in a subset of mice (Fig. 3a, b ): these mice expressed low levels of CS-induced freezing when tested in the context in which extinction learning occurred. However, when these same mice were tested in the conditioning context, there was an expected ‘renewal’ of CS-induced conditioned freezing (Fig. 3b , Supplementary Table 5 ). Fig. 3: Context-driven fear renewal reverses extinction-related CEA subpopulation activity. a Behavioural protocol. CS: conditioned stimuli. b Behavioural data. B6 mice: n = 4; extinction memory retrieval, no CS: 21.4 ± 16.2%, CS: 32.5 ± 6.4%, fear renewal, no CS: 34.8 ± 1.2%, CS: 73.0 ± 7.3%, blocks of 4 CSs. One-way repeated-measures ANOVA F (3,9) = 8.5, p = 0.005 followed by post hoc Bonferroni t -test vs. CS block during extinction memory, p = 0.015. Bar plots are expressed as means ± SEM. Circles are freezing values of individual mice. c Raster plots and corresponding spike waveforms of a representative CEm unit (top). Normalized and averaged population peri-stimulus time histograms (bottom). CEm neurons: n = 4 units from 1 mouse; z -score, extinction memory retrieval: −0.79 ± 1.59, fear renewal: 3.08 ± 1.06, blocks of 4 CSs. Paired Student t -test, two-sided, p = 0.039. d Raster plots and corresponding spike waveforms of a representative CEloff unit (top). Normalized and averaged population peri-stimulus time histograms (bottom).
CEloff neurons: n = 7 units from 2 mice; z -score, extinction memory retrieval: −0.04 ± 0.61, fear renewal: −0.94 ± 0.40, blocks of 4 CSs. Paired Student t -test, two-sided, p = 0.041. e Raster plots and corresponding spike waveforms of a representative CElon unit (top). Normalized and averaged population peri-stimulus time histograms (bottom). CElon neurons: n = 10 units from 2 mice; z -score, extinction memory: 1.28 ± 0.41, fear renewal: 2.13 ± 0.37, blocks of 4 CSs. Paired Student t -test, two-sided, p = 0.034. All individual neurons of each CEA population had significant z -score values upon CS presentation (4 CSs during fear renewal). Source data are provided as a Source data file. We asked whether this context-induced reversion of fear responding was paralleled by an alteration in CEA single-unit activity. Indeed, while all three CEA neuronal populations showed reduced CS responses during the retrieval of extinction memory in the extinction context (Fig. 3c–e ; Fig. S6 ), CS-related responses were completely re-established to pre-extinction levels when testing occurred in the conditioning context (Fig. 3c–e ). Thus, the context-gated expression of extinguished fear memory closely parallels changes in the activity of the CEA subpopulations.

Impaired extinction associates with persistent fear-related activity in CEA subpopulations

Impaired fear extinction is a hallmark of anxiety disorders 27 , yet endogenous neural correlates of impaired fear extinction in CEA have not been thoroughly investigated in vivo. Based on our findings thus far, we reasoned that impaired extinction would correspond with the persistence of a ‘fear-like’ pattern of activity in CEA subpopulations. To test this prediction, we performed CEA single-unit recordings in a mouse strain (129S1/SvImJ, ‘S1’) that exhibits impaired extinction and associated abnormalities in CEA immediate-early gene (IEG) activity 28 .
We first replicated prior data 28 , 29 , 30 showing that S1 mice, tested using the same procedures as in the earlier experiments in the current study, failed to exhibit a reduction in CS-related freezing after extinction training ( n = 7 mice, Fig. 4a, b ; Fig. S3 ). Fig. 4: Impaired extinction correlates with persistent fear-related activity in CEA subpopulations. a Behavioural protocol. FC: fear conditioning. CS: conditioned stimuli. b Behavioural data. S1 mice: n = 7; habituation, no CS: 19.9 ± 5.1%, CS: 41.2 ± 6.3%, beginning of extinction 1, no CS: 36.4 ± 10.1%, CS: 80.3 ± 3.8%, end of extinction 2, no CS: 49.8 ± 9.5%, CS: 78.9 ± 3.2%, blocks of 4 CSs. One-way repeated-measures ANOVA F (5,30) = 12.0, p < 0.001, followed by post hoc Bonferroni t -test vs. CS block during habituation, p < 0.01. Bar plots are expressed as means ± SEM. Circles are freezing values of individual mice. c Raster plots and corresponding spike waveforms of a representative CEm unit (top). Normalized and averaged population peri-stimulus time histograms (bottom). CEm neurons: n = 6 units from 2 mice; z -score, habituation: 0.45 ± 0.42, beginning of extinction 1: 3.12 ± 0.73, end of extinction 2: 2.21 ± 0.48, blocks of 4 CSs. One-way repeated-measures ANOVA F (2,10) = 5.9, p = 0.020 followed by post hoc Bonferroni t -test vs. during habituation, p < 0.05. d Raster plots and corresponding spike waveforms of a representative CEloff unit (top). Normalized and averaged population peri-stimulus time histograms (bottom). CEloff neurons: n = 8 units from 6 mice; z -score, habituation: −0.49 ± 0.87, beginning of extinction 1: −2.46 ± 1.13, end of extinction 2: −2.73 ± 1.87, blocks of 4 CSs. One-way repeated-measures ANOVA F (2,14) = 4.2, p = 0.037 followed by post hoc Bonferroni t -test vs. during habituation, p < 0.05. e Raster plots and corresponding spike waveforms of a representative CElon unit (top). Normalized and averaged population peri-stimulus time histograms (bottom).
CElon neurons: n = 31 units from 6 mice; z -score, habituation: 1.60 ± 0.38, beginning of extinction 1: 3.09 ± 0.48, end of extinction 2: 1.82 ± 0.46, blocks of 4 CSs. One-way repeated-measures ANOVA F (2,60) = 3.8, p = 0.028 followed by post hoc Bonferroni t -test vs. during habituation, p < 0.05. All individual neurons of each CEA population had significant z -score values upon CS presentation (first 4 CSs during extinction 1). Source data are provided as a Source data file. Examination of the CEA single-unit activity in the S1 mice revealed activity patterns in the same three CEA neuronal subpopulations as described above (CElon, CEloff and CEm) (Fig. 1 ). Strikingly, however, and entirely in keeping with the absence of extinction at the behavioural level, the patterns of CEA activity evident after fear conditioning were largely unchanged after extinction (Fig. 4c–e ). Specifically, CEm neurons showed a persistent increase in their phasic CS responses after fear conditioning and extinction, and the inhibitory response of CEloff neurons that emerged after conditioning was also maintained after extinction. Interestingly, however, the conditioning-related increase in the CS-related activity of CElon neurons was partially restored to pre-conditioning levels after extinction. This is notable because it shows that CS-related inhibition of CEloff neurons does not require input from CElon neurons and that, by extension, other upstream populations (e.g., ITC neurons) can drive this inhibition. These data suggest that plastic changes in the activity of CEloff neurons may be a pivotal step in the reduction in fear responses produced by fear extinction, and that the failure of this plasticity may underlie the extinction deficits evident in S1 mice.
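The CElon/CEloff labels used throughout are functional: units are grouped by the sign (and significance) of their z-scored CS response after fear conditioning. A toy illustration of that grouping; the ±2.0 cut-off is an arbitrary placeholder standing in for the paper's significance criterion on the z-score:

```python
def label_cel_unit(cs_response_z, cut=2.0):
    """Assign a functional label from a unit's z-scored CS response after
    fear conditioning. Units excited by the CS behave like CElon, inhibited
    units like CEloff; the cut-off is illustrative only, not the paper's
    statistical criterion.
    """
    if cs_response_z >= cut:
        return "CElon-like"
    if cs_response_z <= -cut:
        return "CEloff-like"
    return "non-responsive"
```

Applied to the legend-level means above, an early-extinction response of +2.54 would be grouped with CElon and −2.46 with CEloff, while values near zero stay non-responsive.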
To further test this hypothesis, we took advantage of the fact that PKCδ CEl neurons overlap with CEloff neurons 21 to compare, via immunostaining, the expression of the immediate-early gene (IEG) Zif268 in immunolabeled PKCδ CEl neurons of S1 and B6 mice after either a fear retrieval or extinction learning test (Fig. S7a ). We found that the number of neurons positive for both Zif268 and PKCδ in CEl was increased in extinguished B6 mice, relative to non-extinguished fear-tested counterparts. By contrast, S1 mice did not show an extinction-related increase in Zif268/PKCδ neurons, consistent with a failure to reverse CS-related inhibition of the CEloff subpopulation (Fig. S7a ). This was not due to a lower number of PKCδ neurons in CEl of S1 mice, as the overall number of these neurons was similar in the two strains (Fig. S7a ). Detailed analysis of the dendritic morphology of CEA neurons in test-naive mice did, however, indicate evidence of more overall dendritic material in CEA neurons of S1, relative to B6 mice, suggesting that plasticity deficits in CEloff neurons of S1 mice may relate to underlying structural abnormalities (Fig. S7b ).

Activity of PKCδ/CEloff neurons during CS exposure is required for extinction memory formation

Collectively, our findings thus far suggest that extinction-associated neuronal plasticity in CEA circuits may be necessary for the successful acquisition and expression of extinction memories. In particular, our data posit a critical circuit mechanism in which CEloff neurons gate reductions in fear seen with extinction by exerting inhibition of CEm neurons. Alternatively, changes in the activity of these CEA subpopulations during fear extinction could simply reflect the relaying of upstream plasticity mechanisms [e.g., from basolateral amygdala (BLA) or medial prefrontal cortex] to downstream targets (e.g., vlPAG).
Furthermore, though a prior study found that inactivating CEloff neurons throughout fear conditioning and retrieval increased freezing responses 21 , it remains unclear whether CEloff neurons causally contribute to the decrease in freezing that occurs with extinction. To address these questions, we took advantage of the fact that PKCδ CEl neurons overlap with CEloff neurons 21 by using PKCδ::Cre mice ( n = 5) to selectively express, in a Cre-dependent manner, the inhibitory opsin, Arch (AAV5-DIO-Arch), in PKCδ CEA neurons. We then performed optogenetic phototagging experiments to confirm selective control over the activity of these neurons by shining yellow light into the CEA and showing that this reduced the activity of a subset of single units identifiable as PKCδ neurons (Fig. 5a, b ; see ‘Methods’). Furthermore, we examined the activity of these photo-identified neurons after fear conditioning and confirmed that the majority (7/12) exhibited an inhibitory response to the CS, consistent with their designation as CEloff neurons (Fig. 5b , Fig. S8 ). Fig. 5: PKCδ/CEloff neuronal activity during CS exposure is required for extinction memory formation. a Left, PKCδ unit identified with optogenetics. Right, waveform similarity of spikes with or without optogenetic stimulation. Bottom right, latency and magnitude of inhibition of PKCδ/CEloff neurons. b Normalized activity ( z -score, bottom) of PKCδ/CEloff cells ( n = 7) and example raster plot (top). CS: conditioned stimuli. c An adeno-associated virus (AAV2/7) conditionally expressing ChR2, eNpHR and a Venus reporter under the control of an elongation factor-1α (EF-1α) promoter was injected into the CEl of PKCδ-Cre+ or PKCδ-Cre− mice (middle). Anti-GFP immunolabelling of the Venus reporter gene in CEl (right). DIO: double-inverted open reading frame; PKCδ: protein-kinase c delta; 2A: ribosomal self-processing peptide; BLA: basolateral amygdala; CEA: central amygdala. d Behavioural protocol.
e Freezing response in PKCδ-Cre+ mice expressing ChR2 and eNpHR in PKCδ neurons. B6 mice: n = 11; freezing, ChR2, fear memory, CS I: 65.5 ± 5.6%, extinction learning, CS I: 57.1 ± 4.5%, CS II: 39.9 ± 5.2%, CS III: 37.9 ± 7.3%, CS IV: 33.7 ± 7.0%, extinction memory, CS I: 37.7 ± 5.0%. eNpHR, fear memory, CS I: 59.9 ± 6.3%, extinction learning, CS I: 60.1 ± 8.4%, CS II: 42.4 ± 7.3%, CS III: 44.9 ± 7.4%, CS IV: 40.5 ± 8.2%, extinction memory, CS I: 60.1 ± 5.2% (blocks of 4 CSs). Main effect of CS presentations during extinction learning: two-way repeated-measures ANOVA F (5,49) = 8.7, p < 0.001. Interaction of light stimulation and CS presentations: two-way repeated-measures ANOVA F (5,49) = 2.34, p = 0.056 followed by post hoc Bonferroni t -test vs. CS1 block during fear memory, p < 0.01. All values are expressed as means ± SEM. f Control experiment. Freezing response in PKCδ-Cre− mice not expressing ChR2 and eNpHR in PKCδ neurons. B6 mice: n = 5; freezing, blue light, fear memory, CS I: 63.1 ± 4.9%, extinction learning, CS I: 70.4 ± 9.7%, CS II: 44.6 ± 15.0%, CS III: 26.8 ± 8.2%, CS IV: 31.8 ± 13.5%, extinction memory, CS I: 33.2 ± 3.8%. Yellow light, fear memory, CS I: 64.1 ± 14.2%, extinction learning, CS I: 66.9 ± 12.2%, CS II: 43.9 ± 6.1%, CS III: 32.2 ± 4.1%, CS IV: 26.6 ± 5.7%, extinction memory, CS I: 34.1 ± 12.1% (blocks of 4 CSs). Main effect of CS presentations during extinction: two-way repeated-measures ANOVA F (5,20) = 15.5, p < 0.001. Interaction of light stimulation and CS presentations: two-way repeated-measures ANOVA F (5,20) = 0.12, p = 0.987 followed by post hoc Bonferroni t -test vs. CS1 block during fear memory, p > 0.05 for all comparisons. All values are expressed as means ± SEM. Source data are provided as a Source data file. Next, we tested whether activating or inhibiting the activity of CEl PKCδ neurons during extinction learning affected freezing responses.
To do so, we bilaterally transfected CEl PKCδ neurons in PKCδ::Cre mice ( n = 11) with an AAV conditionally expressing both the excitatory opsin, channelrhodopsin-2 (ChR2), and the inhibitory opsin, halorhodopsin (eNpHR) (Fig. 5c ). Mice were then equivalently fear-conditioned to two CSs (CS1 and CS2), as evidenced by similar freezing responses to both CSs during a fear retrieval test (Fig. 5e, f ). On a subsequent extinction training session, during each of the last 12 CS1 presentations, blue light was shone to excite PKCδ neurons for 300 ms from CS onset—thereby matching the duration of CS-related activity we had observed in the CEloff neurons (see Fig. 1d , Fig. S9 ). This was followed by 16 presentations of the CS2, during which yellow light was shone over each of the last CS2 to inhibit these same PKCδ neurons. We found that freezing responses did not differ between the CS1/ChR2 and CS2/eNpHR groups during extinction learning, indicating that manipulating the activity of this subpopulation during a specific temporal window corresponding to CS presentation does not produce acute changes in freezing responses (Fig. 5e ). However, when we examined performance during a subsequent extinction memory test, conducted in the absence of light, we found that freezing levels were higher in the CS2/eNpHR than the CS1/ChR2 group (Fig. 5e ). In fact, the freezing levels in the CS2/eNpHR group were similar to the levels seen prior to extinction learning, consistent with a failure to form a lasting extinction memory to CS2 when PKCδ/CEloff neurons were inhibited during extinction training. Importantly, when we repeated the same procedures in PKCδ::Cre-negative mice, freezing did not differ between groups at any stage of testing, excluding technical artefacts (Fig. 5f , Fig. S10 ). Together, these data show that activity of PKCδ/CEloff neurons during CS exposure is required for the formation of fear extinction memories.
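The behavioural results above are all reported as freezing percentages averaged over blocks of 4 CS presentations. A small helper showing that bookkeeping; how each CS is scored for freezing in the first place (e.g. from video) is outside this sketch:

```python
def freezing_by_block(freezing_pct_per_cs, block_size=4):
    """Average per-CS freezing percentages into fixed-size blocks, as in the
    figure legends (blocks of 4 CSs).

    freezing_pct_per_cs: freezing (%) scored for each CS presentation, in
    order. Any trailing partial block is dropped.
    """
    blocks = []
    for i in range(0, len(freezing_pct_per_cs) - block_size + 1, block_size):
        block = freezing_pct_per_cs[i:i + block_size]
        blocks.append(sum(block) / block_size)
    return blocks
```

For example, eight CSs whose freezing falls from around 60% to around 30% yield two block means tracing the within-session extinction curve.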
Discussion

While the CEA is known to play an essential role in the formation and expression of conditioned fear memories, the precise nature of this role is still uncertain 10 , 11 . Recent studies have shown that fear conditioning potentiates inputs from the lateral amygdala (LA) and paraventricular nucleus of the thalamus (PVT) onto SST-expressing neurons in the CEl subdivision of the CEA 14 , 18 , while inactivation of the entire CEl causes fear learning deficits 15 . Furthermore, optogenetically-guided electrophysiological recordings (‘phototagging’) have demonstrated that SST-expressing CEl neurons correspond to a functional class of ‘CElon’ neurons 18 , which induce freezing to a CS 13 via direct projections to vlPAG 23 or, alternatively, by gating CEm output through disinhibition mediated by PKCδ (‘CEloff’) neurons located in CEl 15 , 21 . In the current study, we found that CElon neurons exhibited significant cue-related responses during habituation, while these were much less apparent for CEloff and CEm neurons. Hence, CElon neurons might encode an attentional or salience-related signal that could reflect preferential innervation by upstream sensory and attention processing regions. Following fear conditioning, a larger increase in CS-related activity in CElon neurons (e.g., through synaptic plasticity occurring during fear learning at afferent synapses to CElon neurons) may then be transmitted locally to CEloff neurons, and ultimately disinhibit CEm output neurons to elicit freezing responses. Importantly, we found that the CS responses of CElon neurons were reduced by fear extinction, although some level of CS-related activity, similar to that seen during habituation, remained. This extinction-related reduction was also seen in CElon neurons in mice from the S1 mouse strain, despite these animals showing persistent fear.
This implies that persistent fear-related activity in CEloff/CEm neurons is mediated by mechanisms separate from CElon neurons, such as changes in input from neighbouring ITCs. Altogether, these findings provide further evidence that the CEA contains a functionally diverse set of neuronal subpopulations coding for responses to conditioned fear stimuli. Our findings also demonstrate that stimulus-related responses in these neuronal populations are dynamically modified following extinction to calibrate appropriate levels of freezing. Using a combination of in vivo single-unit recordings and optogenetics, we show that extinction-related reductions in freezing correspond to a relative increase in the CS-related activity of fear-inhibiting CEloff neurons and a decrease in the activity of fear-inducing CElon and CEm neurons. Importantly, we found that these changes in CS-related activity are rapidly reversed when fear responding is renewed by exposure to the fear conditioning context and fail to develop to a non-extinguished CS or in S1 mice that show persistent fear responding due to impaired extinction. Thus, the activity of these subpopulations of CEA neurons closely tracks shifts in the emotional significance of the CS that occur with extinction, and is not simply due to non-associative processes, such as habituation or desensitization, that can occur with repeated CS exposure or the passage of time 4 , 31 . Finally, we identify a key contribution of CEloff neurons in fear extinction by demonstrating that selective photosilencing of PKCδ CEl neurons during extinction learning prevents extinction memory formation. PKCδ neurons in the CEl have been demonstrated to play a critical role in the formation of aversive memories by modulating neuronal activity and plasticity in other brain regions 32 , 33 . Similarly, our study suggests that PKCδ neurons may regulate extinction learning by controlling extinction-related synaptic plasticity locally or downstream of the CEl.
Thus, PKCδ neurons may have a more general role in emotional learning by integrating different sensory modalities, valence and attentional signals, thereby flexibly selecting and scaling emotional responses by modulating the activity and plasticity of downstream circuits in motor, autonomic or neuroendocrine centres 12 . Alternatively, distinct subpopulations of PKCδ neurons might control learning in a valence-specific manner. The exact circuit and plasticity mechanisms by which CEA neuronal activity is altered by extinction remain to be elucidated. A reduction in the activity of (SST-expressing) CElon occurring with extinction could stem from the reversal (depotentiation) of the conditioning-induced strengthening of synaptic inputs from LA and PVT onto CEl SST-expressing neurons 14 , 18 . Alternatively, extinction may engage additional circuit components that suppress the activity of CElon neurons. These components could be extrinsic to the CEl, such as neurons residing in the amygdala striatal transition area 34 or ITC clusters 35 , or intrinsic to the CEl itself, for example in the form of local CEl inhibitory circuits. In this context, corticotropin-releasing hormone (CRH)-expressing neurons in the CEl have recently been shown to inhibit local SST-expressing neurons, suppress freezing and regulate extinction 22 , 24 , 25 . Of further note, CEl CRH-expressing neurons express an array of other neuropeptides and neuropeptide receptors 36 . Given various neuropeptide systems operate in the CEA to control physiological and behavioural readouts of fear 37 , 38 , defining novel neuropeptide-expressing CEl subpopulations could help identify novel circuit elements underlying extinction-related decreases in CElon neuron activity. Previous work has shown that the CElon and CEloff populations can modulate one another’s activity through reciprocal inhibition 15 , 21 , 22 . 
Hence, a reduction in afferent input from CElon neurons following extinction could produce a release of inhibition over the CEloff subpopulation. Interestingly, our finding that optogenetic silencing of PKCδ (CEloff) neurons impairs extinction memory formation, without producing a frank increase in freezing during silencing, indicates that CEloff neurons do not simply gate fear expression but are a locus of plasticity for extinction. Thus, our study indicates that PKCδ/CEloff neuronal activity during CS exposure is required for extinction memory formation. We infer that the diminution of CS-induced inhibitory responses of CEloff neurons over the course of extinction learning and into retrieval (as shown by our electrophysiological data), and associated inhibition of downstream targets, might be necessary for the induction and/or consolidation of a proper extinction memory—for instance by inhibition of freezing-promoting downstream pathways. Optogenetic inhibition of CEloff neurons during CS exposure over the entire extinction acquisition session prevents this shift in CS responses and could therefore impair downstream plasticity events necessary for the formation of a stable long-term extinction memory. Determining the mechanisms underlying changes in the CS response of CEloff neurons during extinction learning will be another important question for future studies. Notwithstanding, our findings are consistent with a heuristic model in which extinction leads to a reduction in the CS-related activity of CElon neurons and a subsequent disinhibition of the CEloff subpopulation, which, in turn, could suppress CEm output to vlPAG and thereby reduce freezing. There are a number of caveats to this model. First, it remains possible that extinction alters the regulation of CEloff neurons by upstream inputs other than (or in addition to) CElon neurons. 
These inputs could include some of the same aforementioned structures known to innervate CElon cells, such as the PVT, amygdala striatal transition area 34 , BLA and ITC clusters. Second, there is compelling evidence that CEm output can be regulated independently of CEl input. CElon neurons can bypass the CEloff→CEm pathway and project directly to vlPAG 18 . Furthermore, extinction is associated with the strengthening of direct, ITC-mediated, feed-forward inhibition of CEm output neurons 39 , in a manner driven by principal cells located in BLA 39 and infralimbic cortex 40 . The observation that permanent ablation of the ITC clusters in their entirety impairs extinction retrieval further highlights a role for these cells 41 though, given evidence of significant heterogeneity between individual ITC clusters 28 , 30 , the precise nature of this role remains unresolved. Nonetheless, it seems likely that ITCs are a key substrate for extinction that may operate in parallel or in concert with CEloff neurons to affect freezing. Collectively, these prior observations, taken together with the current findings, suggest that while the CEA is an essential node within the broader neural circuitry mediating fear behaviours, the process of extinction likely engages multiple circuit elements that regulate the activity of CEA neurons to modulate freezing. The combination of independent and interacting circuits would endow a system with the dynamic range and flexibility to adjust behavioural responses to fear-related stimuli according to accumulated experience and prevailing environmental conditions. A system for extinction with flexibility and inbuilt redundancy would be of significant adaptive value considering that generating an appropriate level of fear behaviour is crucial to survival for many species.
In humans, dysfunction of this system, including deficient plasticity in the CEA subpopulations described here, could contribute to the impaired fear extinction reported in patients with anxiety and trauma-related disorders 42 , 43 . Methods Animals Male C57BL/6J mice (B6, Harlan Ltd), 129S1/SvImJ mice (S1, Charles River or Jackson Laboratory) and PKCδ-Cre-CFP mice 21 were housed by strain (1–2 animals per cage) for 7 days before all experiments, under a 12-h light/dark cycle, and were provided with food and water ad libitum. The ambient temperature in the animal facility was ca. 20 °C and the humidity ca. 30%. In the current study, male mice were used to aid comparability with prior analysis of CEA neurons 15 , 21 and in part because heavier male mice were better suited to carrying the electrode implants during locomotion. It will be important and potentially highly informative to modify procedures to enable the study of fear-related CEA neuronal activity in female mice in future work. All animal procedures were executed in accordance with institutional guidelines and were approved by the Veterinary Department of the Canton of Basel-Stadt, the Austrian Animal Experimentation Ethics Board and Austrian Ethical Committees on Animal Care and Use (Bundesministerium für Wissenschaft und Verkehr, Kommission für Tierversuchsangelegenheiten), the National Institute on Alcohol Abuse and Alcoholism Animal Care and the National Institutes of Health guidelines outlined in ‘Using Animals in Intramural Research’. Behaviour and optical stimulation Fear conditioning and extinction took place in two different contexts (context A and B). The conditioning and extinction boxes and the floor were cleaned with 70% ethanol or 1% acetic acid before and after each session, respectively. To score freezing behaviour, an automatic infrared beam detection system placed on the bottom of the experimental chambers (Coulbourn Instruments) was used.
The animals were considered to be freezing if no movement was detected for 2 s. On day 1, mice were submitted to a habituation session in context B, in which they received 4 presentations of the CS (total CS duration of 30 s, consisting of 50-ms pips repeated at 0.9 Hz, 2-ms rise and fall; pip frequency: 7.5 kHz or white-noise counterbalanced across animals, 80 dB sound pressure level). Fear conditioning was performed on the same day by pairing the CS with a US (1 s foot shock, 0.6 mA, 5 CS/US pairings; inter-trial interval: 20–180 s). The onset of the US coincided with the offset of the CS. On days 2 and 3, conditioned mice were submitted to extinction training in context B, during which they received 12 presentations of the CS. Retrieval of extinction, spontaneous recovery of conditioned fear (50% freezing cut-off) and context-dependent fear renewal were tested 7 days later in context B and A, respectively, with 4 presentations of the CS 9 . Statistical comparisons were performed with one-way repeated-measures ANOVA followed by Bonferroni post hoc test ( p < 0.05 was considered significant). For the quantification of zif268 in PKCδ neurons in the CEl amygdala, mice were submitted to an auditory fear conditioning paradigm in which the CS (total CS duration of 30 s, 10 kHz, 80 dB sound pressure level) was paired to the US (2 s foot shock; 0.6 mA; three CS/US pairings; inter-trial interval: 20–180 s) (TSE operant system). The onset of the US coincided with the offset of the CS. Fear conditioning was always performed in a context (context A) different from that used in the extinction session (context B). Context A was cleaned with water and context B with 70% alcohol followed by water. On the following day, fear memory retrieval and extinction training was performed in context B by presenting 16 CSs with an inter-trial interval of 5 s 44 . The ‘fear expression’ groups received only 2 presentations of the CS following fear conditioning. 
Freezing behaviour was quantified as an index of fear 45 in each behavioural session by a trained observer blind to the experimental groups; freezing was defined as no visible movement except that required for respiration and was converted to a percentage [(duration of freezing within the CS/total time of the CS) × 100]. For discriminative extinction, mice were habituated on day 1 to 4 presentations of two different CS in context B (total CS duration of 30 s, consisting of 50-ms pips repeated at 0.9 Hz, 2 ms rise and fall; pip frequency: 7.5 kHz or white noise, 80 dB sound pressure level). Both CSs were subsequently paired with a US (1-s foot shock, 0.6 mA, 5 CS/US pairings for each CS; inter-trial interval: 20–180 s). The onset of the US coincided with the offset of the CS. On days 3 and 4, only one of the two CSs was extinguished by 16 and 12 presentations in context B, respectively. At the end of the second extinction session, mice were exposed to 4 presentations of the non-extinguished CS in context B 9 . Statistical comparisons were performed with one-way repeated-measures ANOVA followed by Bonferroni post hoc test ( p < 0.05 was considered significant). Optogenetic experiments were performed using a fear conditioning and fear extinction procedure in virally injected PKCδ-Cre positive or negative mice. On day 1, two different CS, CS1 and CS2 (total CS duration of 30 s, consisting of 50-ms pips repeated at 0.9 Hz, 2 ms rise and fall; pip frequency: 7.5 kHz or white noise, 80 dB sound pressure level, counterbalanced across animals) were paired 5 times with the US (1-s foot shock, 0.6 mA, inter-trial interval: 20–180 s). On day 2, fear memory was tested by presenting 4 CS1 and 4 CS2. On day 3, fear extinction was achieved by sequentially presenting 16 CS1 and 16 CS2 (counterbalanced for order across animals).
From the 5th to the 16th CS for CS1 and CS2, each CS pip was coupled to light stimulation (−50 ms to +300 ms from pip onset, 20–40 mW) bilaterally delivered through optic fibres (200 µm core diameter, 0.37 NA, Thorlabs GmbH) to the CEl amygdalae. Optic fibres were connected to a custom-built laser bench using an AOTF (AA Opto-Electronic) to control laser intensity (lasers: MBL473, 473-nm wavelength and MGL593.5, 593.5-nm wavelength, CNILasers). To ensure that animals could move freely, the connecting fibres were suspended over the behavioural context. On day 4, extinction memory was tested by 4 presentations of CS1 and CS2 (counterbalanced for order across animals). Statistical comparisons were performed with two-way repeated-measures ANOVA followed by Bonferroni post hoc test ( p < 0.05 was considered significant). Single-unit recordings and virus injections Mice were anaesthetized with isoflurane (induction 5%, maintenance 2.5%) in O 2 . Body temperature was maintained with a heating pad (CMA/150, CMA/Microdialysis). Mice were secured in a stereotaxic frame and unilaterally implanted in the amygdala with a multi-wire electrode aimed at the following coordinates: 1.3 mm posterior to bregma; ±2.6 mm lateral to midline; 3.25–3.75-mm deep from the cortical surface. The electrodes consisted of 8–16 individually insulated nichrome wires (13 µm inner diameter, impedance 50–300 kΩ; California Fine Wire) contained in a 26-gauge stainless steel guide cannula. The wires were attached to a 10 pin to 18 pin connector (Omnetics). The implant was secured using cyanoacrylate adhesive gel. After surgery, mice were allowed to recover for 7 days. Analgesia was applied before and during the 3 days after surgery (Metacam). Electrodes were connected to a headstage (Plexon) containing 8–16 unity-gain operational amplifiers. The headstage was connected to a 16-channel computer-controlled preamplifier (gain X-100, band-pass filter from 150 Hz to 9 kHz, Plexon).
Neuronal activity was digitized at 40 kHz and band-pass filtered from 250 Hz to 8 kHz, and was isolated by time–amplitude window discrimination and template matching using a Multichannel Acquisition Processor system (Plexon). At the conclusion of the experiment, recording sites were marked with electrolytic lesions before perfusion, and electrode locations were reconstructed with standard histological techniques 15 . For optical stimulation of PKCδ CEl neurons, PKCδ-Cre+ animals were bilaterally injected into CEl amygdalae with an rAAV serotype 2/7 (Vector Core, University of Pennsylvania), containing a construct conditionally coding for ChR2-2A-eNpHR-2A-Venus 46 at −1.3 mm posterior and +/− 2.6 mm lateral to bregma at a depth of 3.25–3.75 mm. The use of a 2A-Peptide Self-Processing cassette in the AAV2/7 DIO-EF-1α-ChR2-2A-eNpHR-2A-Venus enables equimolar/isostoichiometric expression of ChR2, eNpHR and Venus in PKCδ neurons to bi-directionally control their activity 42 , 47 . For identification of the injection site, the virus solution was mixed at 1:1000 with blue fluorescing polymer microspheres (Duke Scientific Corp.). Deeply anesthetized animals were fixed in a stereotactic frame (Kopf Instruments) and the skin above the skull was cut. Glass pipettes (tip diameter 10–20 μm), connected to a Picospritzer III (Parker Hannifin Corporation), were lowered by a Micropositioner (Kopf Instruments) to the depth of 3.75 mm. About 300 nl were pressure injected bilaterally into CEl amygdalae. In the same surgeries 26-gauge stainless steel guide cannulas (Plastics One) were implanted bilaterally along the same track above CEl amygdalae at a depth of −3.25 mm. Guide cannulas were secured using cyanoacrylate adhesive gel (Henkel) and dental cement (Heraeus Dental). To prevent blockage of the cannulas, dummy cannulas (Plastics One) were inserted and fixed. Behavioural experiments were performed after 4 weeks of recovery and expression time and 3 days of handling. 
After the experiment, optic fibres were removed and animals were perfused for histological analysis of the injection site as described below. Single-unit spike sorting and analysis Single-unit spike sorting was performed using Off-Line Spike Sorter (Plexon) as described 15 . Principal component scores were calculated for unsorted waveforms and plotted on three-dimensional principal component spaces, and clusters containing similar valid waveforms were manually defined. A group of waveforms was considered to be generated from a single neuron if it defined a discrete cluster in principal component space that was distinct from clusters for other units and if it displayed a clear refractory period (>1 ms) in the auto-correlogram histograms. In addition, two parameters were used to quantify the overall separation between identified clusters in a particular channel. These parameters include the J3 statistic, which corresponds to the ratio of between-cluster to within-cluster scatter, and the Davies–Bouldin validity index (DB), which reflects the ratio of the sum of within-cluster scatter to between-cluster separation. High values for the J3 and low values for the DB are indicative of good cluster separation. Control values for this statistic were obtained by artificially defining two clusters from the centred cloud of points in the principal component space from channels in which no units could be detected (Supplementary Fig. 1 ). Template waveforms were then calculated for well-separated clusters and stored for further analysis. Clusters of identified neurons were analysed offline for each recording session using principal component analysis and a template-matching algorithm. Only stable clusters of single units recorded over the time course of the entire behavioural training were considered. 
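The two separation measures described above can be illustrated with a minimal two-cluster sketch in Python (an illustrative re-expression, not the Off-Line Spike Sorter implementation; the function name and the exact scatter definitions are simplifying assumptions). A J3-like statistic takes the ratio of between-cluster to within-cluster scatter, while a Davies–Bouldin-style index takes the ratio of within-cluster spread to centroid separation:

```python
import numpy as np

def cluster_separation(a, b):
    """Quantify separation of two spike-waveform clusters in PC space.

    a, b : (n_spikes, n_components) arrays of principal-component scores.
    Returns (j3, db): a J3-like ratio of between- to within-cluster
    scatter (higher = better separation) and a Davies-Bouldin-style
    index (lower = better separation). Simplified two-cluster sketch.
    """
    mu_a, mu_b = a.mean(axis=0), b.mean(axis=0)
    grand = np.vstack([a, b]).mean(axis=0)

    # Within-cluster scatter: mean squared distance of points to their centroid.
    s_a = np.mean(np.sum((a - mu_a) ** 2, axis=1))
    s_b = np.mean(np.sum((b - mu_b) ** 2, axis=1))
    within = s_a + s_b

    # Between-cluster scatter: squared distances of the centroids to the grand mean.
    between = np.sum((mu_a - grand) ** 2) + np.sum((mu_b - grand) ** 2)

    j3 = between / within
    # Davies-Bouldin for two clusters: within-cluster spread over centroid distance.
    db = (np.sqrt(s_a) + np.sqrt(s_b)) / np.linalg.norm(mu_a - mu_b)
    return j3, db
```

On this definition, two well-separated clusters yield a high J3 and a low DB, matching the acceptance criteria stated in the text.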
Long-term stability of single-unit isolation was evaluated using Wavetracker (Plexon) in which principal component space-cylinders were calculated from data recorded during behavioural sessions. Straight cylinders suggest that the same set of single units was recorded during the entire training session (Supplementary Fig. 1 ). We further quantified the similarity of waveform shape by calculating linear correlation ( r ) values between average waveforms obtained over training days (Supplementary Fig. 1 ). As a control, we computed the r values from average waveforms of different neurons. To avoid analysis of the same neuron recorded on different channels, we computed cross-correlation histograms. If a target neuron presented a peak of activity at a time that the reference neuron fires, only one of the two neurons was considered for further analysis. CS-induced neural activity was calculated by comparing the firing rate after stimulus onset with the firing rate recorded during the 500 ms before stimulus onset (bin size, 50 ms; averaged over blocks of 4 CS presentations consisting of 108 individual sound pips in total) using a z -score transformation. Z -score values were calculated by subtracting the average baseline firing rate established over the 500 ms preceding stimulus onset from individual raw values and by dividing the difference by the baseline standard deviation. Classification of units was performed by considering a significant z -score value within 250 ms after CS onset during the fear test according to the freezing levels. Normalized population PSTHs were obtained by averaging normalized PSTHs from individual neurons. Statistical comparisons were performed with one-way repeated-measures ANOVA followed by Bonferroni post hoc test or with the Student paired t -test for the recall and renewal datasets ( p < 0.05 was considered significant). Calculations were made in MATLAB and R.
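The z-score transformation described above can be sketched as follows. This is an illustrative Python re-expression (the original calculations were made in MATLAB and R); the function name and the fixed 10-bin baseline are assumptions matching the stated 500-ms baseline at 50-ms bins:

```python
import numpy as np

def cs_zscore(binned_rates, n_baseline_bins=10):
    """Z-score post-onset firing against the pre-stimulus baseline.

    binned_rates : 1-D array of firing rates in 50-ms bins, where the
        first `n_baseline_bins` bins (10 bins = 500 ms) precede CS onset.
    Returns the z-scored rates of the bins after stimulus onset.
    """
    binned_rates = np.asarray(binned_rates, dtype=float)
    baseline = binned_rates[:n_baseline_bins]
    # Subtract the average baseline firing rate and divide the
    # difference by the baseline standard deviation, as in the text.
    return (binned_rates[n_baseline_bins:] - baseline.mean()) / baseline.std()
```

A unit would then be classed as CS-responsive if the z-score reaches significance within the first 250 ms (i.e. the first five 50-ms bins) after CS onset; the exact significance threshold is omitted here.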
Statistical analysis was done in the commercially available software GraphPad Prism and SigmaPlot. Optical identification of single units For optogenetic identification of PKCδ neurons, we used pulses of yellow light (to activate Arch). We used 300-ms pulses, 120 times, with a 2 s inter-pulse interval, at 10 mW light power at the fibre tip. Units were considered as light responsive if they showed significant, time-locked (<10 ms) changes in neuronal activity upon illumination. To determine the onset of inhibition, we used change-point analysis (Change Point Analyzer 2.0, Taylor Enterprises Inc.). As described previously 22 , 46 , this identifies the time point exhibiting a significant change in neuronal activity relative to the preceding time points. We calculated linear correlation ( r ) values for spontaneous and light-evoked spikes to quantitatively determine the similarity of their waveform shapes. Immunohistochemistry and imaging After completion of experiments, virally injected PKCδ-Cre+ mice were deeply anaesthetized with avertin (0.3 g/kg). Mice were then transcardially perfused with phosphate-buffered saline (PBS) followed by 4% paraformaldehyde (PFA). Coronal, 80-µm-thick brain slices were then cut with a vibratome (VT1000 S, Leica) and stored in PBS containing 0.05% sodium azide. To visualize virus expression, standard immunolabelling procedures were performed on free-floating brain sections: overnight incubation at 4 °C with rabbit anti-GFP antibody (1:1000, catalogue no. A11122, Invitrogen), 2 h incubation with anti-rabbit Alexa 488 (1:1000, catalogue no. A11008, Invitrogen). After a final wash, slices were mounted on cover slips and imaged. Mice were included in the analysis if they showed virus expression bilaterally within CEl amygdalae and if fibre tip placement was not more than ~500 µm away from CEl amygdala.
For the quantification of zif268 in PKCδ neurons in the CEl amygdala, mice were killed 2 h after the start of the extinction training session as previously described 30 . Mice were deeply anesthetized using sodium pentobarbitone (200 mg/kg) and transcardially perfused with 20 ml of 0.9% saline followed by 20 ml of 4% paraformaldehyde in phosphate-buffered saline (PBS), pH 7.4. Samples were post-fixed for 2 h in the same fixative at 4 °C and stored in PBS. Coronal sections (40 µm) were cut on a vibratome (Leica Microsystems) and collected in tris buffered saline (TBS). Free-floating sections were incubated in blocking solution (10% BSA and 0.1% Triton X-100 in TBS) and then with primary polyclonal rabbit anti-Zif268 antibody (1:2000; Cat. No.: sc-189; Santa Cruz Biotechnology) and monoclonal mouse anti-PKCδ (1:1000, Cat. No.: 610398, BD Transduction Laboratories) for 48 h at 4 °C. The sections were then washed with TBS and incubated for 2 h at room temperature with Cy2-conjugated donkey anti-rabbit (1:500; Cat. No.: 711-225-152; Jackson ImmunoResearch Laboratories) and Alexa Fluor 647-conjugated donkey anti-mouse (1:500; Cat. No.: 717-605-150; Jackson ImmunoResearch Laboratories). Sections were then attached to microscope slides and coverslipped with FluroGold Antifade reagent. All immunolabelled sections were imaged using an Olympus BX51 microscope equipped with an Olympus XM10 video camera. Images taken under consistent exposure times using a ×20 oil-immersed optical objective lens (UPlanSApo, Olympus Corporation) were digitised and viewed using CellSens Dimension 1.5 software (Olympus Corporation, Tokyo, Japan). The quantification of Zif268 expression in PKCδ-positive or PKCδ-negative neurons in the CEl was achieved by manual scoring. Statistical comparisons were performed with a two-way ANOVA followed by Fisher LSD post hoc test ( p < 0.05 was considered significant).
Dendritic morphology of CEl neurons The dendritic morphology of CEl neurons was determined using Golgi stain as described previously 29 , 48 . Mice were overdosed with xylazine/ketamine and then transcardially perfused with 0.9% saline. Brains were removed and immersed in Golgi-Cox solution (1:1 solution of 5% potassium dichromate and 5% mercuric chloride diluted 4:10 with 5% potassium chromate) for 18 days. Brains were dehydrated, infiltrated with a graded series of celloidins, and embedded in 8% celloidin. Coronal sections were cut at a thickness of 160 μm on a sliding microtome (American Optical 860) and alkalinized, developed, fixed, dehydrated, cleared, mounted, and coverslipped. Neurons selected for reconstruction did not have truncated branches and were unobscured by neighbouring neurons and glia, with dendrites that were easily discriminable by focusing through the depth of the tissue. In 4–6 sections evenly spaced through the rostral-caudal extent of the CEl amygdala, an average of 4–6 neurons per mouse (average of 2.5 from each hemisphere) were randomly selected (using a random number generator) and reconstructed. Neurons were drawn in three dimensions by an experimenter blind to strain, using a ×100 objective on an Olympus BX41 system microscope using a computer-based neuron tracing system (Neurolucida, MBF Biosciences). The length and number of dendrites, as well as the length and number of terminal branches, were measured for all dendritic arbours. Values were compared between strains using t -tests. In addition, to assess the overall amount and location of dendritic material, a three-dimensional version of a Sholl analysis 49 was performed by measuring the number of dendritic intersections within 10 μm concentric spheres radiating from the soma. Statistics and reproducibility Imaging was repeated independently with similar results in Fig. 5c ( n = 11 mice) and in Supplementary Fig.
7a (B6 expression, n = 8 mice; S1 expression, n = 6 mice; B6 extinction, n = 8 mice; S1 extinction, n = 6 mice). Distinct spike waveforms recorded from different units and sorted using 3D principal component analysis could be observed in all recordings with more than one unit per electrode as shown in Supplementary Fig. 1b . Reporting summary Further information on research design is available in the Nature Research Reporting Summary linked to this article. Data availability The data that support the findings of this study are available from the corresponding authors upon request. Source data are provided with this paper. Code availability The computer code that supports the findings of this study is available from the corresponding authors upon request.
The brain mechanisms underlying the suppression of fear responses have attracted a lot of attention as they are relevant for therapy of human anxiety disorders. Despite our broad understanding of the different brain regions activated during the experience of fear, how fear responses can be suppressed remains largely elusive. Researchers at the University of Bern and the Friedrich Miescher Institute in Basel have now discovered that the activation of identified central amygdala neurons can suppress fear responses. Fear is an important reaction that warns and protects us from danger. But when fear responses are out of control, this can lead to persistent fears and anxiety disorders. In Europe, about 15 percent of the population is affected by anxiety disorders. Existing therapies remain largely unspecific or are not generally effective, because the detailed neurobiological understanding of these disorders is lacking. What was known so far is that distinct nerve cells interact together to regulate fear responses by promoting or suppressing them. Different circuits of nerve cells are involved in this process. A kind of "tug-of-war" takes place, with one brain circuit "winning" and overriding the other, depending on the context. If this system is disturbed, for example if fear reactions are no longer suppressed, this can lead to anxiety disorders. Recent studies have shown that certain groups of neurons in the amygdala are crucial for the regulation of fear responses. The amygdala is a small almond-shaped brain structure in the center of the brain that receives information about fearful stimuli and transmits it to other brain regions to generate fear responses. This causes the body to release stress hormones, change heart rate or trigger fight, flight or freezing responses. Subdivision of the mouse amygdala. The cell types studied are located in the central amygdala (red). 
Credit: Rob Hurt / Wikicommons (CC BY-SA 4.0) Now, a group led by Professors Stéphane Ciocchi of the University of Bern and Andreas Lüthi of the Friedrich Miescher Institute in Basel has discovered that the amygdala plays a much more active role in these processes than previously thought: Not only is the central amygdala a "hub" to generate fear responses, but it contains neuronal microcircuits that regulate the suppression of fear responses. In animal models, it has been shown that inhibition of these microcircuits leads to long-lasting fear behaviour. However, when they are activated, behaviour returns to normal despite previous fear responses. This shows that neurons in the central amygdala are highly adaptive and essential for suppressing fear. These results were published in the journal Nature Communications. "Disturbed" suppression leads to long-lasting fear The researchers led by Stéphane Ciocchi and Andreas Lüthi studied the activity of neurons of the central amygdala in mice during the suppression of fear responses. They were able to identify different cell types that influence the animals' behaviour. For their study, the researchers used several methods, including a technique called optogenetics with which they could precisely shut down—with pulses of light—the activity of an identified neuronal population within the central amygdala that produces a specific enzyme. This impaired the suppression of fear responses, whereupon animals became excessively fearful. "We were surprised how strongly our targeted intervention in specific cell types of the central amygdala affected fear responses," says Ciocchi, Assistant Professor at the Institute of Physiology, University of Bern. "The optogenetic silencing of these specific neurons completely abolished the suppression of fear and provoked a state of pathological fear." 
Important for developing more effective therapies In humans, dysfunction of this system, including deficient plasticity in the nerve cells of the central amygdala described here, could contribute to the impaired suppression of fear memories reported in patients with anxiety and trauma-related disorders. A better understanding of these processes will help develop more specific therapies for these disorders. "However, further studies are necessary to investigate whether discoveries obtained in simple animal models can be extrapolated to human anxiety disorders," Ciocchi adds.
10.1038/s41467-021-24068-x
Medicine
Immunocompromised pediatric patients showed T-cell activity and humoral immunity against SARS-CoV-2
Hannah Kinoshita et al, Robust Antibody and T Cell Responses to SARS-CoV-2 in Patients with Antibody Deficiency, Journal of Clinical Immunology (2021). DOI: 10.1007/s10875-021-01046-y
http://dx.doi.org/10.1007/s10875-021-01046-y
https://medicalxpress.com/news/2021-05-immunocompromised-pediatric-patients-t-cell-humoral.html
Abstract Immunocompromised patients, including those with inborn errors of immunity (IEI), may be at increased risk for severe or prolonged infections with SARS-CoV-2 (Zhu et al. N Engl J Med. 382:727–33, 2020 ; Guan et al. 2020 ; Minotti et al. J Infect. 81:e61–6, 2020 ). While antibody and T cell responses to SARS-CoV-2 structural proteins are well described in healthy convalescent donors, adaptive humoral and cellular immunity has not yet been characterized in patients with antibody deficiency (Grifoni et al. Cell. 181:1489–1501 e1415, 2020 ; Burbelo et al. 2020 ; Long et al. Nat Med. 26:845–8, 2020 ; Braun et al. 2020 ). Herein, we describe the clinical course, antibody, and T cell responses to SARS-CoV-2 structural proteins in a cohort of adult and pediatric patients with antibody deficiencies ( n = 5) and controls (related and unrelated) infected with SARS-CoV-2. Five patients within the same family (3 with antibody deficiency, 2 immunocompetent controls) showed antibody responses to nucleocapsid and spike proteins, as well as SARS-CoV-2 specific T cell immunity at days 65–84 from onset of symptoms. No significant difference was identified between immunocompromised patients and controls. Two additional unrelated, adult patients with common variable immune deficiency were assessed. One did not show antibody response, but both demonstrated SARS-CoV-2-specific T cell immunity when evaluated 33 and 76 days, respectively, following SARS-CoV-2 diagnosis. This report is the first to show robust T cell activity and humoral immunity against SARS-CoV-2 structural proteins in some patients with antibody deficiency. Given the reliance on spike protein in most candidate vaccines (Folegatti et al. Lancet. 396:467–78, 2020 ; Jackson et al. N Engl J Med. 383:1920–31, 2020 ), the responses are encouraging. 
Additional studies will be needed to further define the timing of onset of immunity, longevity of the immune response, and variability of response in immunocompromised patients. Introduction Since the start of the COVID-19 pandemic there has been expanding evidence that immunocompromised patients may be at increased risk for severe or prolonged infections with SARS-CoV-2 [ 1 , 2 , 3 ]. Clinical descriptions of COVID-19 in patients with T cell and antibody-specific inborn errors of immunity (IEI) are expanding, including reports of worsened disease course in patients with common variable immunodeficiency (CVID) as compared with pure agammaglobulinemia [ 4 , 5 , 6 , 7 ]. In the largest cohort described to date of patients with IEI and COVID-19, 20% of the cohort required intensive care, with an overall mortality rate of 10%; 6 of 9 deceased patients suffered from an antibody defect [ 6 ]. Their findings represent increased morbidity and mortality, especially at younger ages, as compared with the general population. While antibody and T cell responses to SARS-CoV-2 structural proteins are well described in healthy convalescent donors, adaptive humoral and cellular immunity have not yet been characterized in patients with antibody deficiencies [ 8 , 9 , 10 , 11 ]. Here, we describe five patients affected with antibody deficiencies who developed mild symptoms of COVID-19 and provide comprehensive analysis of their adaptive immune responses. Methods All patients provided written informed consent for clinical data and blood sample collection on protocols approved by the National Institutes of Health Institutional Review Board, in concordance with ethical standards as put forth by the Declaration of Helsinki. Serology Assays SARS-CoV-2 antibody testing was performed via luciferase immunoprecipitation assay on all subjects as previously described [ 9 ].
Briefly, plasma samples were incubated with spike and nucleocapsid proteins fused to Gaussia and Renilla luciferase, respectively. Protein A/G beads were added, and samples were washed prior to addition of coelenterazine substrate (Promega). A Berthold 165 LB 960 Centro Microplate Luminometer was used to measure luciferase activity in light units. Antibody levels were reported as the geometric mean level with 95% confidence interval. As described previously, cutoff limits for determining positive antibodies in the SARS-CoV-2-infected samples were based on the mean plus 3–4 standard deviations of the serum values derived from uninfected blood donor controls for nucleocapsid (125,000 LUs) and spike (45,000 LUs) [ 9 ]. Percentages for categorical variables, median, mean, standard deviation and range, and geometric mean were used to describe the data. Unpaired t tests were used for statistical analysis. T Cell Assays Testing of T cell responses was performed via stimulation of peripheral blood mononuclear cells with peptide libraries encompassing SARS-CoV-2 structural proteins as previously described [ 12 ]. Cells were then cultured for 10 days in 96-well plates with IL-4 (400 IU/mL) and IL-7 (10 ng/mL). On day 10, expanded viral specific T cells (VSTs) were harvested. 2 × 10 5 VSTs were plated in a 96-well plate and re-stimulated with SARS-CoV-2 structural proteins pooled pepmixes or actin (negative control) with CD28/CD49d (BD Biosciences) and anti-CD107a- Pe-Cy7 antibody. After 1 h of stimulation, brefeldin A (Golgiplug; BD Biosciences, San Jose, CA, USA) and monensin (GolgiStop; BD Biosciences, San Jose, CA, USA) were added. Cells were then incubated for an additional 4 h. Cell viability was assessed using Live-Dead Aqua. Cells were surface stained with fluorophore-conjugated antibodies against CD3-BV785, CD4-BV605, CD8- BV421, TCRαβ-PerCP Cy5.5, TCRγδ- APC-Fire750, CCR7-FITC, CD45-RO-PE Dazzle, HLA-DR-Alexaflour700, and CD56-BV650 (Miltenyi Biotec; BioLegend). 
Cells were fixed, permeabilized with Cytofix/Cytoperm solution (BD Biosciences), and subsequently stained with IFN-γ-APC and TNF-α-PE (Miltenyi Biotec). All samples were acquired on a CytoFLEX cytometer (Beckman Coulter, Brea, CA, USA). The gating strategy for analysis is presented in Supplemental Fig. 2 . Results Clinical History Immunodeficiency History The proband of the kindred (P1) is an 11-year-old boy with CVID, atopy, celiac disease, and recurrent infections beginning in the first year of life. He is currently treated with immunoglobulin replacement therapy with improvement in infections (Supplemental Table 1 ). His twin brother (P2) had a history of periodic fever, aphthous stomatitis, pharyngitis, adenitis (PFAPA), recurrent sinusitis, eosinophilic esophagitis, and environmental allergies, as well as hypogammaglobulinemia (Supplemental Fig. 1 ) [ 13 ]. Their younger sister (P3) has a history of recurrent sinopulmonary infections, otitis media requiring myringotomy tubes, pharyngitis with subsequent tonsillectomy, as well as atopy with a normal immunologic evaluation [ 14 ]. Their mother (P4) also has specific antibody deficiency with recurrent sinopulmonary infection and atopy. She is currently managed with early antibiotic therapy for infections and frequent booster vaccinations. P2, P3, and P4 have never received immunoglobulin therapy. Whole-exome sequencing performed on the proband did not identify a causative variant (for additional clinical information, see Supplemental data S1 ). Patient 6 (P6) is an unrelated 48-year-old male who was diagnosed with CVID at the age of 34 years in the setting of recurrent sinusitis, thrombocytopenia, leukopenia, and splenomegaly (Supplemental Table 2 ). He is currently treated with immunoglobulin replacement therapy and amantadine prophylaxis, but with ongoing infectious and non-infectious complications. Whole-exome sequencing did not reveal a causal genetic variant.
Patient 7 (P7) is an unrelated 21-year-old female with CVID diagnosed at the age of 14 years in the setting of alopecia areata and idiopathic thrombocytopenic purpura. Her course was complicated by anti-phospholipid syndrome and respiratory infections. She is currently treated with subcutaneous immunoglobulin and hydroxychloroquine. Whole-exome sequencing revealed 2 previously reported compound heterozygous variants in TNFRSF13B (c.310 T > C, p.Cys104Arg and c.260 T > A, p.Ile87Asn) which are believed to be contributory to her CVID [ 15 , 16 , 17 ]. SARS-CoV-2 History With regard to the kindred, the healthy father (P5) had onset of fever progressing to fatigue, anosmia, and cough in August of 2020. Symptoms persisted for 14 days, with a normal chest X-ray during his disease course. SARS-CoV-2 PCR was positive on the second day of illness. A day later, two of the children (P2 with hypogammaglobulinemia and P3) developed 2 days of fever without respiratory symptoms. P3 had persistent anosmia lasting weeks. Twenty days after the first family member became ill, the mother with SAD (P4) developed fever, fatigue, severe headache, and anosmia, with persistent symptoms over several weeks. SARS-CoV-2 testing was not performed on the other family members at the time of illness. Patient 1 with CVID remained asymptomatic when his family was ill. All family members recovered without need for hospitalization or treatment. In November 2020, P6 with CVID (unrelated) was incidentally found to be SARS-CoV-2 positive on admission for post-operative bleeding after surgery for benign prostatic hypertrophy. His only symptoms of SARS-CoV-2 infection were dry mouth and cough 3 days after diagnosis. His course was otherwise uncomplicated. P7 with CVID (unrelated) developed nasal and sinus congestion, mild anosmia, and fatigue in November of 2020. SARS-CoV-2 PCR was positive on day 4 of illness, with return to baseline 10 days after testing. 
Five additional pediatric and adult immunocompetent controls with mild ( n = 4) to severe ( n = 1) symptoms of SARS-CoV-2 infection were included in the analysis for comparison. Serologic Responses SARS-CoV-2 antibody testing was performed via luciferase immunoprecipitation assay on the kindred 84 days after the first family member developed symptoms (P5). At that time, P2, P3 and P4 were 79 days, 80 days, and 65 days, respectively, from the onset of their own symptoms. Patient 6 was evaluated 33 days following detection of SARS-CoV-2 infection by PCR testing. Patient 7 was evaluated 80 days after first developing symptoms. All five subjects in our kindred (P1–P5) had detectable antibodies targeting both spike (median LU = 2.43 × 10 6 ) and nucleocapsid (median LU = 1.33 × 10 6 , Fig. 1 ) proteins of SARS-CoV-2. In contrast, P6 was seronegative for spike (LU = 2.16 × 10 4 ) and nucleocapsid (LU = 4.45 × 10 4 ), while P7 was seropositive for spike (LU = 6.61 × 10 5 ) and nucleocapsid (LU = 8.29 × 10 5 ) (Fig. 1 ). Antibody levels of additional pediatric and adult immunocompetent controls evaluated after SARS-CoV-2 infection are included for comparison (Fig. 1 ). Fig. 1 Antibody responses as measured by LIPS assay for patients with antibody deficiencies (P1, P2, P4, P6, P7) in black compared to immunocompetent controls (P3, P5, C1-C5) in red for nucleocapsid ( a ) and spike ( b ). Negative cutoff values (denoted by the dotted line) are based on uninfected negative controls as previously described [ 9 ]. T Cell Responses Intracellular cytokine staining demonstrated specific CD4 + T cell responses in all affected patients ( n = 5) targeting spike (mean IFN-γ/TNF-α + 0.75%; standard deviation [SD] 0.62), membrane (mean IFN-γ/TNF-α + 1.94%; SD 1.9), and nucleocapsid (mean IFN-γ/TNF-α + 1.58%; SD 1.37) (Fig. 2a , Supplemental Fig. 3 ).
All immunocompetent control patients ( n = 7) demonstrated specific CD4 + T cell responses to spike (mean IFN-γ/TNF-α + 0.33%; SD 0.20), membrane (mean IFN-γ/TNF-α + 1.12%; SD 0.80), and nucleocapsid (mean IFN-γ/TNF-α + 0.75%; SD 0.89) (Fig. 2b , Supplemental Fig. 3 ). Specificity was determined as a response > 2 × the mean of the negative control, actin (mean IFN-γ/TNF-α + 0.025%; SD 0.05). There was no statistically significant difference between the affected patients and control groups with respect to CD4 + T cell responses to actin ( p = 0.67) or any of the SARS-CoV-2 proteins: membrane ( p = 0.32), envelope ( p = 0.86), nucleocapsid ( p = 0.23), or spike ( p = 0.13) (Fig. 2c ). Single IFN-γ + and TNF-α + CD4 + populations are reported in Supplemental Table 3 . Affected and control patients did not show appreciable CD8 + T cell responses (Supplemental Fig. 4 ). CD107a expression was minimal and did not differ between patients with antibody deficiency compared to immunocompetent controls (data not shown). Fig. 2 Flow cytometry of CD4 + cells positive for IFN-γ and TNF-α for actin (negative control), and for SARS-CoV-2 membrane, envelope, nucleocapsid, and spike for ( a ) affected patient (P1) and ( b ) immunocompetent control patient (P5). Mean percent positive IFN-γ and TNF-α responses by intracellular flow cytometry for CD4 + cells for membrane, envelope, nucleocapsid, and spike are presented graphically ( c ) for antibody-deficient (in black) and immunocompetent (in red) patients. Specificity was determined as a response > 2 × the mean of the negative control (actin) as denoted by the dotted line. No significant difference between affected and control CD4 T cell response for actin ( p = 0.67), membrane ( p = 0.32), envelope ( p = 0.86), nucleocapsid ( p = 0.23), and spike ( p = 0.13) was found by unpaired t test. Flow plots for other patients and controls are shown in Supplemental Fig. 3. Memory T Cell Phenotype Memory T cell phenotype of SARS-CoV-2-specific cells was evaluated after 10 days of VST microexpansion. In P1–P5, SARS-CoV-2-specific CD3 + cells were primarily effector (mean 78.59%; SD 10.7) and central memory (mean 20.83%; SD 10.5) T cells. Patient 6 (unrelated) also had detectable SARS-CoV-2-specific CD3 + cells comprising both effector (14.74%) and central memory T cell (55.21%) populations despite an undetectable humoral response. Patient 7 had detectable SARS-CoV-2-specific CD3 + cells comprising effector (51.92%), central memory (26.71%), naïve (16.92%), and terminal effector (4.45%) memory T cell populations. The specific CD4 + T cell memory response in the affected patients was predominantly effector memory for membrane (mean 70.89%; SD 39.72), nucleocapsid (mean 68.63%; SD 39.40), and spike (mean 76.86%; SD 20.75) (Fig. 3 ). The specific CD4 + T cell memory response in the control patients was predominantly effector for membrane (mean 90.87%; SD 5.62), envelope (mean 73.32%; SD 1.80), nucleocapsid (mean 91.1%; SD 4.03), and spike (mean 85.32%; SD 11.56) (Fig. 3 ). There was no significant difference in CD4 + T cell memory response for spike between affected and control patients with respect to naïve ( p = 1.0), central memory ( p = 0.63), effector memory ( p = 0.62), and terminal effector ( p = 0.57) T cells. Overall, the T cell responses in all the CVID patients were not significantly different from healthy adult and pediatric convalescent subjects (additional data not shown). Fig. 3 Flow cytometry memory phenotype of CD4 + cells positive for IFN-γ and TNF-α in antibody-deficient (in black) and immunocompetent control (in red) patients.
Percent distribution and mean (horizontal lines) are shown for each T cell phenotype (naïve, central memory, effector memory and terminal effector) based on stimulation with each peptide library evaluated (membrane ( a ), envelope ( b ), nucleocapsid ( c ), spike ( d )). Discussion To date, there are very few data on adaptive immune responses to SARS-CoV-2 in patients with IEI. Though it may be expected that antibody responses could be impaired in patients with various forms of antibody deficiency, it has been demonstrated that some patients with CVID do have detectable primary antibody responses to viral antigens (e.g., influenza) as well as memory B cell responses [ 18 , 19 ]. Furthermore, patients with many forms of antibody deficiency can demonstrate cellular responses to antigens, which impacts clinical decision-making regarding inactivated vaccine administration to patients on immunoglobulin therapy [ 18 , 19 , 20 , 21 ]. Here, we demonstrate that 3 members of a family with varying degrees of antibody deficiency and 2 unrelated patients with CVID all had a robust adaptive immune response to SARS-CoV-2 following asymptomatic or mild disease. While supplemental immunoglobulin products have been shown to potentially contain some anti-SARS-CoV-2 antibodies, the high antibody titers in the proband (P1) and unrelated P7 suggest that this was in fact a primary immune response. Furthermore, the type and magnitude of B and T cell response were similar between this small group of antibody-deficient patients and healthy controls. Of note, the LIPS assay used for this study has been compared to the commercially available Roche assay for nucleocapsid [ 22 ], with result concordance in 383 of 400 tested samples. Similar to the administration of influenza vaccination in patients with IEI, these findings provide some preliminary support for vaccination in the management of patients with antibody deficiencies.
In contrast, an unrelated adult (P6) with CVID receiving immunoglobulin supplementation who was positive for SARS-CoV-2 by PCR testing did not have a demonstrable antibody response at 33 days after diagnosis but did have a detectable SARS-CoV-2-specific T cell response. Lack of a humoral response following relatively asymptomatic infection (similar to this patient’s course) has been described, which may have contributed to these findings [ 23 ]. However, given the spectrum of severity of patients with CVID and related antibody disorders, it also stands to reason that not all patients will have robust B cell and/or T cell responses to SARS-CoV-2 following infection or vaccination. To our knowledge, this is the first report showing robust T cell activity and humoral responses against SARS-CoV-2 structural proteins in patients with antibody deficiency. Given the reliance on spike protein in most candidate vaccines [ 24 , 25 ], the responses demonstrated are encouraging, though additional studies will be needed to further define the quality of the antibody response and the longevity of immune responses against SARS-CoV-2 in immunocompromised patients compared with healthy donors. Data Availability This data or associated data is not in a data repository. Code Availability Not applicable. Abbreviations CVID: Common variable immunodeficiency IEI: Inborn errors of immunity IgA: Immunoglobulin A IgG: Immunoglobulin G IgM: Immunoglobulin M LU: Light units PFAPA: Periodic fever, aphthous stomatitis, pharyngitis, adenitis SAD: Specific antibody deficiency VST: Viral specific T cell
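The serologic and T cell readouts above reduce to two simple decision rules: a sample is seropositive when its light units exceed a cutoff set at the mean plus 3–4 standard deviations of uninfected blood donor controls (125,000 LU for nucleocapsid, 45,000 LU for spike), and a T cell response is called specific when it exceeds 2× the mean of the actin negative control. A minimal sketch of those rules follows; the cutoff constants and example values come from the paper, while the function names are illustrative assumptions, not the authors' code:

```python
import statistics

# Cutoffs reported in the study (light units), derived from uninfected controls.
SPIKE_CUTOFF_LU = 45_000
NUCLEOCAPSID_CUTOFF_LU = 125_000

def lips_cutoff(uninfected_control_lus, n_sd=3):
    """Derive a seropositivity cutoff as mean + n_sd standard deviations of
    uninfected blood donor control light units (the paper uses 3-4 SD)."""
    return (statistics.mean(uninfected_control_lus)
            + n_sd * statistics.stdev(uninfected_control_lus))

def is_seropositive(sample_lu, cutoff_lu):
    # Samples above the control-derived cutoff are called seropositive.
    return sample_lu > cutoff_lu

def is_specific_t_cell_response(percent_cytokine_pos, actin_mean_percent):
    # A response is "specific" when it exceeds 2x the mean of the
    # actin (negative control) stimulation.
    return percent_cytokine_pos > 2 * actin_mean_percent

# Reported examples: P6 spike LU = 2.16e4 falls below the 45,000 LU cutoff
# (seronegative); P7 spike LU = 6.61e5 exceeds it (seropositive).
```

Applied to the reported values, these rules reproduce the calls in the Results: P6 is spike-seronegative despite a detectable T cell response, and the affected patients' mean spike CD4+ response (0.75%) clears the 2× actin threshold (2 × 0.025% = 0.05%).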
In a cohort of adult and pediatric patients with antibody deficiencies, patients who often fail to make protective immune responses to infections and vaccinations, researchers observed robust T-cell activity and humoral immunity against SARS-CoV-2 structural proteins. The new study, led by researchers at Children's National Hospital, is the first to demonstrate a robust T-cell response against SARS-CoV-2 in immunocompromised patients. "If T-cell responses to SARS-CoV-2 are indeed protective, then it could suggest that adoptive T-cell immunotherapy might benefit more profoundly immunocompromised patients," said Michael Keller, M.D., director of the Translational Research Laboratory in the Program for Cell Enhancement and Technologies for Immunotherapy (CETI) at Children's National. "Through our developing phase I T-cell immunotherapy protocol, we intend to investigate if coronavirus-specific T-cells may be protective following bone marrow transplantation, as well as in other immunodeficient populations." The study, published in the Journal of Clinical Immunology, showed that patients with antibody deficiency disorders, including inborn errors of immunity (IEI) and common variable immunodeficiency (CVID), can mount an immune response to SARS-CoV-2. The findings suggest that vaccination may still be helpful for this population. "This data suggests that many patients with antibody deficiency should be capable of responding to COVID-19 vaccines, and current studies at the National Institutes of Health and elsewhere are addressing whether those responses are likely to be protective and lasting," said Dr. Keller. The T-cell responses in all the COVID-19 patients were similar in magnitude to those of healthy adult and pediatric convalescent participants. Kinoshita et al. call for additional studies to further define the quality of the antibody response and the longevity of immune responses against SARS-CoV-2 in immunocompromised patients compared with healthy donors.
Currently, there is also very little data on adaptive immune responses to SARS-CoV-2 in these vulnerable populations. The study sheds light on the antibody and T-cell responses to SARS-CoV-2 structural proteins based on a sample size of six patients, including a family group of three children and their mother. All have antibody deficiencies and developed mild COVID-19 symptoms, except for one child who remained asymptomatic. Control participants were the father of the same family, who tested positive for COVID-19, and another, unrelated adult (not next of kin) who experienced mild COVID-19 symptoms. The researchers took blood samples to test the T-cell response in cell cultures and provided comprehensive statistical analysis of the adaptive immune responses. "This was a small group of patients, but given the high proportion of responses, it does suggest that many of our antibody deficient patients are likely to mount immune responses to SARS-CoV-2," said Dr. Keller. "Additional studies are needed to know whether other patients with primary immunodeficiency develop immunity following COVID-19 infection and will likely be answered by a large international collaboration organized by our collaborators at the Garvan Institute in Sydney."
10.1007/s10875-021-01046-y
Medicine
Researchers find key genetic driver for rare type of triple-negative breast cancer
E E Martin et al, MMTV-cre;Ccn6 knockout mice develop tumors recapitulating human metaplastic breast carcinomas, Oncogene (2016). DOI: 10.1038/onc.2016.381 Journal information: Oncogene
http://dx.doi.org/10.1038/onc.2016.381
https://medicalxpress.com/news/2017-01-key-genetic-driver-rare-triple-negative.html
Abstract Metaplastic breast carcinoma is an aggressive form of invasive breast cancer with histological evidence of epithelial to mesenchymal transition (EMT). However, the defining molecular events are unknown. Here we show that CCN6 (WISP3), a secreted matricellular protein of the CCN (CYR61/CTGF/NOV) family, is significantly downregulated in clinical samples of human spindle cell metaplastic breast carcinoma. We generated a mouse model of mammary epithelial-specific Ccn6 deletion by developing a floxed Ccn6 mouse which was bred with an MMTV-Cre mouse. Ccn6 fl/fl ;MMTV-Cre mice displayed severe defects in ductal branching and abnormal age-related involution compared to littermate controls. Ccn6 fl/fl ;MMTV-Cre mice developed invasive high grade mammary carcinomas with bona fide EMT, histologically similar to human metaplastic breast carcinomas. Global gene expression profiling of Ccn6 fl/fl mammary carcinomas and comparison of orthologous genes with a human metaplastic carcinoma signature revealed a significant overlap of 87 genes ( P =5 × 10 −11 ). Among the shared deregulated genes between mouse and human are important regulators of epithelial morphogenesis including Cdh1, Ck19, Cldn3 and 4, Ddr1, and Wnt10a . These results document a causal role for Ccn6 deletion in the pathogenesis of metaplastic carcinomas with histological and molecular similarities with human disease. We provide a platform to study new targets in the diagnosis and treatment of human metaplastic carcinomas, and a new disease relevant model in which to test new treatment strategies. Introduction Metaplastic breast carcinomas constitute approximately 1% of invasive carcinomas and are characterized by histological evidence of epithelial to mesenchymal transition (EMT) towards spindle, squamous, and less frequently heterologous elements including chondroid and osseous differentiation. 
1 Metaplastic carcinomas are a subtype of triple negative breast cancer (TNBC) that is poorly responsive to chemotherapy, with high propensity for distant metastasis. 2 , 3 Recent comprehensive genomic analyses revealed that metaplastic carcinomas belong to the mesenchymal-like subtype of TNBC, with a transcriptional profile distinct from that of other invasive breast carcinomas. 4 , 5 , 6 However, the molecular determinants of metaplastic carcinomas needed to inform effective treatments are still incompletely understood. Metaplasia is the reversible change in which one adult cell type is replaced by another adult cell type. Epithelial cells may undergo metaplasia through EMT, which during tumorigenesis results in the acquisition of molecular and phenotypic changes towards a spindle cell morphology and dysfunctional cell-cell adhesion, leading to invasion and metastasis. 7 Of all human cancers, metaplastic carcinomas provide the clearest evidence of morphological and molecular EMT with deregulation of genes involved in cell adhesion and low expression of claudins. 4 , 5 , 6 CCN6 (WISP3) is a secreted matricellular protein of the CCN family, which includes six members that play regulatory rather than structural roles in embryonic development, cell attachment and growth. 8 CCN proteins act in a cell and tissue-specific manner primarily through direct binding to cell surface receptors including integrins and by modulating the effect of extracellular growth factors on epithelial cells. 9 , 10 , 11 , 12 , 13 , 14 Located at 6q21-22, CCN6 was first identified as a novel gene downregulated in the highly aggressive inflammatory breast cancers. 15 We have shown that CCN6 is secreted by ductal epithelial cells in the breast, and that CCN6 downregulation in nontumorigenic breast cells is a robust inducer of EMT and is sufficient to confer growth factor independent survival and resistance to anoikis, a property of metastatic cancer cells.
9 , 11 , 12 , 13 , 14 , 15 Despite this knowledge, a direct role of CCN6 in breast tumorigenesis and the underlying mechanisms have remained elusive in part due to the lack of a physiologically relevant genetically engineered mouse model. Results CCN6 protein is reduced in human metaplastic carcinomas Based on our previous studies showing that CCN6 regulates the transition between epithelial and mesenchymal states in breast cells, we set out to investigate CCN6 expression in breast tissues. Immunohistochemical evaluation revealed that CCN6 level is significantly reduced in spindle metaplastic carcinomas compared to normal breast and invasive ductal carcinomas ( Figure 1a ). All normal breast lobules examined, and 67% of invasive ductal carcinomas maintained high levels of CCN6. This distribution was shifted in metaplastic carcinomas, among which 67.9% had low CCN6 expression ( Figure 1b ). Figure 1 CCN6 protein is reduced in human spindle metaplastic carcinomas of the breast compared to invasive ductal carcinoma and normal breast lobules. ( a ) images of a normal breast lobule, an invasive ductal carcinoma with no special features, and a spindle metaplastic carcinoma stained with hematoxylin and eosin (H&E) and CCN6 immunostaining (magnification × 400). ( b ) Invasive metaplastic and ductal carcinomas with high or low CCN6 protein. Chi-square test P =0.0018. (Scale bars, 20 μm.) Full size image Mammary epithelial specific Ccn6 deletion causes abnormal mammary gland development To directly investigate whether loss of Ccn6 in vivo triggers the development of carcinomas that recapitulate human metaplastic carcinomas, we generated a new model of epithelial cell specific Ccn6 deletion in the mammary gland using Cre/loxp-mediated recombination ( Figures 2a and b ). We specifically inactivated the Ccn6 gene in the mammary epithelium, by intercrossing the floxed Ccn6 mice with the MMTV-Cre mice. 
The offspring were genotyped using primers specific for various Ccn6 alleles (that is, floxed, wild type and deleted) and with primers specific for Cre ( Figure 2c ). We confirmed the presence of Ccn6 deletion in the mammary glands by quantitative reverse transcriptase-PCR and Western blots using whole mammary gland protein extracts, and by immunohistochemistry in mammary gland tissue samples ( Figures 2d–f ). Figure 2 Conditional deletion of Ccn6 in the mammary epithelium using the Cre-Lox system. ( a ) Mouse Ccn6 containing exons 2 and 3. Pictured: wild-type allele; targeted allele after homologous recombination in embryonic stem (ES) cells; the floxed allele in which exons 2 and 3 are flanked by loxP sites and the neomycin cassette has been excised; the allele in which exons 2 and 3 have been deleted in Ccn6 fl/fl ;MMTV-Cre mice. 5′ homology arm (~5.6 kb), 3′ homology arm (~3.7 kb) and cKO region (~4.5 kb) are marked. ( b ) Embryonic stem cell clones were screened for recombination of the targeting vector at the 5′ (BamHI) or 3′ (NheI) ends. ( c ) Example of genotyping results demonstrating amplification of the Ccn6 wt and Ccn6 fl/fl alleles. ( d ) Relative expression of Ccn6 at the mRNA level as assessed by qt-PCR in 8-week-old mice and 4-month-old mice ( n ⩾ 3 mice per genotype per timepoint; P <0.01 for 8 weeks old, P <0.05 for 4 months old). ( e ) Immunoblot for Ccn6 in representative samples from different timepoints in Ccn6 fl/fl ;MMTV-Cre and littermate control (Ccn6 wt/wt ;MMTV-Cre) mice. ( f ) Immunohistochemistry showing the expression of Ccn6 protein in 4-month-old mice mammary glands (at least three mice were stained from each genotype) (scale bars, 20 μm). Full size image Our initial characterization studies showed that pre-pubertal (5-week-old) and pubertal (8-week-old) Ccn6 fl/fl ;MMTV-Cre mice exhibit reduced numbers of terminal end buds (TEBs) and of bifurcated TEBs compared with controls ( Figures 3a and b , Supplementary Figures 1A–D ). 
At 16 weeks adult virgin Ccn6 fl/fl ;MMTV-Cre mice displayed a significant reduction in mammary gland complexity in Carmine alum-stained whole mounts and histopathology compared to littermate controls ( Figures 3c and d , Supplementary Figures 2A–C ). Ccn6 deletion resulted in a hypoplastic ductal epithelium with significantly reduced ductal thickness. The ductal epithelial cells were flat compared to the tall columnar cells present in the control ducts ( Figures 3e and f ). The ductal epithelial hypoplasia of Ccn6 fl/fl ;MMTV-Cre mice was highlighted by immunostaining with CK-18 and CK-5 marking luminal and basal cells, respectively. Consistent with this phenotype, Ccn6 fl/fl ;MMTV-Cre mammary glands had reduced mitosis identified by pH3 immunostaining compared to controls ( Figure 3e ). Despite these abnormalities, Ccn6 fl/fl ;MMTV-Cre dams were able to nurse their litters, and glands underwent post-lactational involution similar to controls ( Supplementary Figures 3A–C ). Figure 3 Mammary epithelial cell-specific Ccn6 knockout leads to defects in branching and ductal epithelial hypoplasia. ( a ) Representative pictures of the distal end of invading ducts in pubertal Ccn6 wt/wt ;MMTV-Cre and Ccn6 fl/fl ;MMTV-Cre mice, × 2 magnification. Inset of a normal bifurcated duct giving rise to two TEBs in Ccn6 wt/wt ;MMTV-Cre , compared to a duct that failed to normally bifurcate in an Ccn6 fl/fl ;MMTV-Cre mouse (× 20 magnification). ( b ) Quantification of the average number of TEBs and bifurcated TEBs of Ccn6 fl/fl ;MMTV-Cre and littermate controls. ( c ) Representative mammary whole mounts from 4-month-old Ccn6 fl/fl ;MMTV-Cre and Ccn6 wt/wt ;MMTV-Cre mice. ( d ) Quantification of the number of ducts and acini at 5, 10 and 15 mm from the mammary lymph node. ( e ) Representative images of Ccn6 fl/fl ;MMTV-Cre and Ccn6 wt/wt ;MMTV-Cre glands stained with H&E and with antibodies against the indicated proteins. ( f ) Ductal epithelial thickness was quantified using ImageJ. 
( g ) Gene ontology (GO) analysis of Ccn6 fl/fl ; MMTV-Cre and Ccn6 wt/wt ; MMTV-Cre 8-week-old mice mammary glands. For all experiments quantification was performed blindly to genotype by two independent investigators with results reported as average±s.e.m., n = at least 3 mice per genotype, in triplicate. Two-tailed Student’s t test * P <0.05 (scale bars, 20 μm). Full size image To assess the global impact of CCN6 ablation on mammary epithelial cell development in vivo in an unbiased fashion, isolated RNA from 8-week-old virgin Ccn6 fl/fl ;MMTV-Cre mammary glands and littermate controls was subjected to transcriptional profiling. Ccn6 deletion significantly altered the expression of ~180 unique transcripts with marked effects noted in gene ontologies associated with extracellular signaling, matrix-associated functions and mammary gland lobule development including Bmp8a, Acta1, Cxcl13 and Dmbt1 ( Figure 3g , Supplementary Figures 4A–C ). Together, the results indicate that Ccn6 is required for branching morphogenesis and proper development of the mammary ductal epithelium in virgin mice. Deletion of Ccn6 in the mammary epithelium results in abnormal age-related involution In humans, delayed age-related mammary gland involution has been associated with increased risk for breast cancer. 18 , 19 We found evidence of abnormal age-related involution in virgin Ccn6 fl/fl ;MMTV-Cre mice compared to controls. As expected, while at 8 months of age, one of four (25%) control mice had foci of brown adipose tissue, all Ccn6 fl/fl ;MMTV-Cre mice examined had residual brown adipose tissue in the mammary glands ( Figures 4a and b ). Likewise, while at 8 months of age there were no residual TEBs in virgin control mice, TEBs persisted in four of six (66.7%) virgin Ccn6 fl/fl ;MMTV-Cre mice ( Figures 4c and d ). Thus, Ccn6 expression is required for proper age-related involution of the virgin murine mammary gland. 
Figure 4 Ccn6 KO mice exhibit residual persistent brown adipose tissue (BAT) and terminal end bud (TEB)-like structures at 8 months of age. ( a ) Histological H&E sections from 8-month-old virgin Ccn6 wt/wt ;MMTV-Cre and Ccn6 fl/fl ;MMTV-Cre mice. Note the presence of BAT in Ccn6 fl/fl ;MMTV-Cre mice. ( b ) Quantification of the prevalence of BAT in the mammary whole mounts of virgin mice. ( c ) Mammary whole mounts of 8-month-old Ccn6 fl/fl ;MMTV-Cre and Ccn6 wt/wt ;MMTV-Cre mice show the persistence of TEB-like structures in Ccn6 fl/fl ;MMTV-Cre mice. ( d ) Bars show the prevalence of TEB-like structures, n =at least 3 mice per genotype. Two-tailed Fisher's exact tests were used for all statistical analyses. * P <0.05. Ccn6 is a tumor suppressor in the murine mammary gland Thirteen of 18 (72.2%) Ccn6 fl/fl ;MMTV-Cre mice formed mammary carcinomas (age range 10-21 months, mean 15.5 months), compared to 3 of 24 (12.5%) Ccn6 wt/wt ;MMTV-Cre mice (Fisher’s exact test P <0.005). The mammary carcinomas of Ccn6 fl/fl ;MMTV-Cre mice exhibited mitotically active elongated cells of high histological grade, remarkably similar to the spindle subtype of human metaplastic carcinomas ( Figures 5Aa–f ). Tumors of Ccn6 fl/fl ;MMTV-Cre mice were highly invasive into the surrounding mammary tissues and skeletal muscle ( Figures 5Aa–f ). The non-neoplastic mammary gland tissues of Ccn6 fl/fl ;MMTV-Cre mice displayed a variety of histological abnormalities including persistent TEBs, secretory hyperplasia and atypical ductal hyperplasia similar to the human counterpart ( Figure 5B , Table 1 ). Figure 5 Ccn6 deletion induces mammary tumors morphologically similar to human spindle metaplastic carcinomas and enhances distant metastasis. ( A ) Histological sections of Ccn6 fl/fl ;MMTV-Cre mammary tumors (a–f) showing a striking spindle cell morphology and high tumor grade. In (f) the carcinoma infiltrates skeletal muscle. × 400 magnification.
(B) Histological sections of non-neoplastic mammary glands of Ccn6 fl/fl ;MMTV-Cre mice adjacent to invasive carcinomas showing atypical ductal hyperplasia, secretory hyperplasia and residual TEBs. (C) Representative images of fresh lungs of Ccn6 fl/fl ;PyMT and Ccn6 wt/wt ;PyMT mice. Arrows indicate grossly identified metastases. Below are histological sections of the lungs showing metastatic nodules, × 100 magnification. (D) Number of macroscopic lung metastases and primary tumor volume in Ccn6 fl/fl ;PyMT and Ccn6 wt/wt ;PyMT mice. Also shown is CCN6 mRNA expression by quantitative reverse transcriptase-PCR in primary mammary tumors. ** P <0.005, two-tailed Student’s t test (scale bars, 50 μm). Table 1 Histopathological features of 13 mammary carcinomas from Ccn6 fl/fl virgin female mice. Distant metastases developed in 6 of 13 (46.2%) tumor-bearing Ccn6 fl/fl ;MMTV-Cre mice, 5 to the lungs and 1 to the soft tissues of the neck, compared to none of the Ccn6 wt/wt ;MMTV-Cre control mice. To investigate the effect of Ccn6 deletion on metastasis in greater detail, we crossed Ccn6 fl/fl ;MMTV-Cre mice with polyoma middle-T antigen (PyMT)-MMTV mice. PyMT-MMTV mice uniformly develop multifocal mammary tumors with a high incidence of pulmonary metastasis. 20 Despite no differences in primary mammary tumor volume, female virgin Ccn6 fl/fl ;MMTV-Cre/PyMT mice showed a significant increase in the number of macroscopically evident lung metastases compared to Ccn6 wt/wt ;MMTV-Cre/PyMT mice ( Figures 5C and D ). Mammary carcinomas of Ccn6 fl/fl ;MMTV-Cre mice share an 87-gene signature with human metaplastic breast carcinomas We next sought to examine whether Ccn6 fl/fl ;MMTV-Cre mammary carcinomas recapitulate human metaplastic carcinoma phenotypes as defined by gene expression patterns.
Transcriptional profiling of Ccn6 fl/fl ;MMTV-Cre mammary carcinomas using Affymetrix arrays revealed 5354 significantly deregulated gene transcripts compared to age-matched wild-type mammary gland controls. We included genes with a fold change of 2 or greater and an adjusted P -value of 0.05 or less, using a false discovery rate of 0.05 or less. We compared the gene profile of Ccn6 fl/fl ;MMTV-Cre tumors with a published human metaplastic breast carcinoma signature derived by comparing gene expression profiles of metaplastic carcinomas with profiles of all other breast cancers. 5 By directly comparing the relative expression ratios of orthologous genes between human and mouse, we found that Ccn6 fl/fl ;MMTV-Cre tumors exhibit similarities with the gene expression activity of human metaplastic carcinomas. Of the 602 orthologous genes in the human metaplastic carcinoma signature, 87 (14.4%) were shared genes concordantly up- or downregulated with human metaplastic carcinomas (Fisher’s exact test P =5 × 10 −11 , Figures 6a and b ). Figure 6 Mammary carcinomas in Ccn6 fl/fl ;MMTV-Cre mice share an 87-gene signature with human metaplastic carcinomas. (a) Venn diagram showing the overlapping 87 orthologous genes between Ccn6 fl/fl ;MMTV-Cre tumors and a previously reported profile of human high-grade metaplastic carcinomas 5 ( P = 5 × 10 −11 for the overlap). Of the 2930 orthologous genes with at least two-fold up- or downregulation in Ccn6 fl/fl ;MMTV-Cre tumors compared to Ccn6 wt/wt ;MMTV-Cre mammary glands, 87 showed significant overlap with the human metaplastic carcinoma signature. 5 (b) Heatmap of the 87 overlapping orthologous genes. (c) Representative images of Ccn6 fl/fl ;MMTV-Cre mammary carcinomas immunostained with anti-Ccn6 and clinically used markers of human metaplastic carcinoma. Arrowheads point to a normal gland entrapped by the spindle metaplastic carcinoma. × 400 magnification (scale bars, 50 μm).
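The cross-species comparison described above reduces to matching orthologous gene symbols between the two platforms and keeping genes deregulated at least two-fold in the same direction in both. A minimal sketch of that intersection step; the gene symbols and fold changes below are invented placeholders, not values from either dataset:

```python
def concordant_genes(mouse_log2fc, human_log2fc, min_abs_log2fc=1.0):
    """Return genes deregulated >= 2-fold (|log2FC| >= 1) in both species
    and in the same direction, matching symbols case-insensitively
    (mouse Hmga2 vs human HMGA2)."""
    human_upper = {g.upper(): fc for g, fc in human_log2fc.items()}
    hits = []
    for gene, fc_mouse in mouse_log2fc.items():
        fc_human = human_upper.get(gene.upper())
        if fc_human is None:
            continue  # no matching symbol on the other platform
        if (abs(fc_mouse) >= min_abs_log2fc
                and abs(fc_human) >= min_abs_log2fc
                and fc_mouse * fc_human > 0):  # same direction
            hits.append(gene)
    return sorted(hits)

# invented log2 fold changes, for illustration only
mouse = {"Hmga2": 2.5, "Cdh1": -1.8, "Foxa1": -2.1, "Krt19": -1.2, "Actb": 0.1}
human = {"HMGA2": 1.6, "CDH1": -2.0, "FOXA1": 0.3, "KRT19": -1.5, "GAPDH": 0.0}
print(concordant_genes(mouse, human))  # ['Cdh1', 'Hmga2', 'Krt19']
```

The significance of such an overlap was then assessed with Fisher's exact test on the resulting contingency table, as stated in the Methods.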
In Gene Ontology analyses, the 87 shared orthologous genes fell into several functional groups. The top five enriched GO terms were morphogenesis of an epithelium, embryo development, lateral plasma membrane, cellular amino acid metabolic process and very long chain fatty acid catabolic process ( Table 2 , Supplementary Figure 5 ). Ccn6 fl/fl ;MMTV-Cre tumors exhibited downregulated expression of Cldn3, Cldn5, Cdh1 and Krt19 , previously reported in human metaplastic carcinomas. 4 , 5 , 6 Significantly, Ddr1 , Fzd3, Elf3 , Stat5b and Foxa1 are among the shared downregulated genes, while Wnt10a, Hmga2, Hbegf and Edil3 are among the shared upregulated genes ( Figure 6b ). While these genes are important for normal epithelial homeostasis and have been implicated in carcinogenesis, they have not been previously considered in the pathogenesis of metaplastic carcinoma. Table 2 Summary of selected genes significantly deregulated in Ccn6 fl/fl mammary tumors and in human metaplastic carcinomas. In daily pathology practice, the diagnosis of metaplastic carcinoma is supported by detection of a combination of epithelial and mesenchymal proteins. 6 , 21 , 22 , 23 , 24 Ccn6 fl/fl ;MMTV-Cre tumors had a protein expression profile similar to human metaplastic carcinoma, characterized by negative estrogen receptor and human epidermal growth factor receptor 2, ERBB2 (HER-2/neu), expression, reduced expression of the epithelial markers E-cadherin and cytokeratin-18, and increased expression of the mesenchymal marker vimentin and the human metaplastic carcinoma markers CD10 and p63 21 , 25 , 26 ( Figure 6c ). Taken together, these data document that Ccn6 deletion in the mammary gland epithelium is sufficient to induce carcinomas with histological, immunophenotypic and transcriptomic features that recapitulate human spindle metaplastic carcinomas.
Discussion Metaplastic carcinomas, and in particular those with spindle histopathology, are less responsive to chemotherapy than other TNBC. 22 , 27 Elucidation of the molecular underpinnings of this unusual but aggressive type of breast cancer is necessary to develop effective therapeutic strategies. Our previous identification of Ccn6 frameshift mutations in human metaplastic carcinoma tissues, 28 coupled with the observation that CCN6 expression is reduced or absent in clinical samples of metaplastic carcinoma, raised the suspicion that Ccn6 may be causally involved in their pathogenesis rather than solely serving as a marker for these tumors. Our results demonstrate that genetic ablation of Ccn6 in the mammary epithelium of mice induces the development of invasive tumors that recapitulate human metaplastic spindle carcinomas in their histopathology, transcriptional activity and biological behavior. Ample mechanistic and functional data support the importance of the process of EMT in human tumorigenesis; however, direct identification of cancer cells undergoing EMT in clinical cancer tissue samples has been challenging. 29 Metaplastic carcinomas of the breast and other organs are the only human malignancies with overt EMT, consisting of spindle cells and mesenchymal-like components, loss of epithelial and gain of mesenchymal marker proteins, and a distinctive transcriptional profile rich in genes involved in EMT. 4 , 5 , 6 CCN proteins are secreted, extracellular matrix-associated proteins that regulate stromal-epithelial cross-talk to influence angiogenesis, cell proliferation, invasion, adhesion and other important cell functions in a context-dependent manner. 9 , 10 , 11 , 12 , 13 CCN6 mutations were reported in patients with progressive pseudorheumatoid dysplasia, and deregulated expression of CCN6 was found in colon and breast cancer. 15 , 30 , 31 However, global Ccn6 deletion in transgenic mice revealed no discernible skeletal phenotype.
32 In breast cancer, we have demonstrated that CCN6 regulates the transition between epithelial and mesenchymal states in vivo and in vitro . 9 , 11 , 33 , 34 CCN6 knockdown in nontumorigenic breast cells induces a spindle and invasive phenotype, with inhibited expression of E-cadherin and cytokeratin and upregulation of vimentin and EMT transcription factors. 9 , 11 In contrast, CCN6 overexpression in breast cancer cells reduces invasion and metastasis in xenograft models, and leads to a mesenchymal to epithelial transition. 12 , 33 , 34 However, direct proof that Ccn6 deficiency results in mammary tumor development was lacking, and the relevance of such a phenotype to human metaplastic carcinomas was unknown. The mammary epithelial cell-specific Ccn6 knockout model developed here provides direct demonstration of an essential role for CCN6 in the pathogenesis of mammary carcinomas with overt EMT. During mammary gland development, pre-pubertal and nulliparous adult Ccn6 fl/fl ;MMTV-Cre mice exhibit fewer mammary ducts and a hypoplastic breast epithelium with defective developmental patterns and reduced mitosis compared to littermate controls. The restricted development of the mammary glands of Ccn6 fl/fl ;MMTV-Cre mice highlights a role for Ccn6 in mammary gland morphogenesis and may have implications for tumorigenesis. A similar phenotype has been reported after deletion of classical tumor suppressors, including Brca1 and Brca2 . 35 , 36 , 37 Because Ccn6 -related tumorigenesis is characterized by an initial growth disadvantage, it is not unexpected that tumor formation occurs after a long latency, albeit with high frequency, as observed in Brca1 conditional knockout mice. 37 The developmental delay in pre-pubertal Ccn6 -deficient mice and the propensity to develop carcinomas in adult life may be linked to the deregulation of gene targets.
Transcriptional profiling of Ccn6 -deleted versus Ccn6 wild-type pre-pubertal mammary glands uncovered a complex range of deregulated targets, especially those associated with mammary epithelial development. We noted a significant downregulation of Dmbt1 , a glycoprotein of the scavenger receptor cysteine-rich family with roles in immunity and cancer. 38 Dmbt1 was identified as a modifier of susceptibility to mammary tumors in the Trp53+/− BALB/c breast cancer model, 39 and Dmbt1 polymorphisms are associated with increased risk of breast cancer. 40 In contrast with invasive ductal or lobular carcinomas of the breast, metaplastic carcinomas often lack an associated ductal carcinoma in situ component, and their precursors have not been identified in tissue samples. 24 The conditional Ccn6 knockout mice generated here shed light on the histological abnormalities that precede the development of spindle cell metaplastic carcinomas. Virgin Ccn6 fl/fl ;MMTV-Cre mammary glands failed to involute properly with age, as evidenced by abnormal persistence of residual TEBs and brown adipose tissue at 8 months of age, in contrast with the mammary glands of age-matched littermate control mice. Of note, defective age-related involution with persistent brown adipose tissue has been reported in mammary glands from adult Brca1 mutant mice prior to the development of poorly differentiated carcinomas. 41 These data are remarkable in light of recent studies showing that defective age-related involution in human mammary glands is significantly associated with increased risk for breast cancer development. 18 , 19 Postmenopausal women with delayed age-related mammary gland involution have a three-fold increased risk of breast cancer compared with women with complete involution. 18 , 19 Collectively, our model provides insights into the role of Ccn6 loss during preneoplastic progression and may have clinical implications for preventative strategies.
Our transcriptional profiling studies show that Ccn6 fl/fl ;MMTV-Cre tumors share an 87-gene signature with human metaplastic carcinomas. 5 Among the 87 shared genes (10 upregulated and 77 downregulated) are several that have been previously studied as markers of human metaplastic carcinoma, including downregulated expression of Cdh1, Cldn3, Cldn4 and Krt19 . 5 , 21 , 24 , 42 Also significantly overlapping with human metaplastic carcinomas are novel genes that have not been previously studied in this context. Among the three top upregulated genes are Hmga2, Igf2bp2 and Hbegf . Hmga2 (high mobility group AT-hook 2) was previously reported to promote EMT and metastasis by activating transforming growth factor β receptor II signaling. 43 Igf2bp2 (p62/IMP2), a member of the family of insulin-like growth factor 2 mRNA binding proteins, promotes breast cancer cell migration and reduces adhesion in TNBC cells. 44 Hbegf (heparin-binding epidermal growth factor-like growth factor) induces breast cancer intravasation and metastasis. 45 Among the most significantly downregulated genes shared between Ccn6 fl/fl ;MMTV-Cre tumors and human metaplastic carcinomas are Foxa1, Spint1, Spint2 and Ddr1 . Foxa1 is an important regulator of ERα and the androgen receptor, whose silencing increases invasion and induces an aggressive, basal-like breast cancer phenotype. 46 Spint1 and Spint2 (hepatocyte growth factor activation inhibitors HAI-1 and HAI-2) are potent matriptase inhibitors that reduce invasion and metastasis in TNBC. 47 The Discoidin Domain Receptor 1 ( Ddr1 ), a collagen-binding receptor tyrosine kinase, is associated with worse survival in women with TNBC. 48 Elucidating the functional significance and underlying mechanisms of these novel genes, which were previously unconsidered in the context of metaplastic carcinoma, may lead to new diagnostic, prognostic and therapeutic targets.
In conclusion, data presented herein establish that mammary epithelial cell loss of Ccn6 triggers the development of spindle metaplastic carcinomas recapitulating human disease. We believe that this mouse model will be useful in understanding how Ccn6 serves its breast cancer suppression function. Materials and methods Construction of the targeting vector and generation of floxed Ccn6 mice All animal experiments followed procedures approved by the UCUCA of the University of Michigan, protocol UCUCA #PRO00005009. Mice with mammary epithelial-cell specific Ccn6 deletion were generated using the Cre-Lox recombination system. The mouse chromosome 10 sequence of mWisp3 gene was retrieved from the Ensembl database and used as reference. Using genomic DNA from FVB mice as a template, the 5′ homology arm (~5.6 kb), 3′ homology arm (~3.7 kb) and the conditional knockout region (~4.5 kb, containing exons 2 and 3) of the Ccn6 allele were generated by PCR using high fidelity Taq DNA polymerase. These fragments were cloned in the LoxFtNwCD or pCR4.0 vector; aside from the homology arms, the final vector also contained LoxP sequences flanking the conditional KO region (~4.5 kb), Frt sequences flanking the Neo expression cassette (for positive selection of the electroporated embryonic stem (ES) cells), and a DTA expression cassette (for negative selection of the ES cells). The final vector was confirmed by both restriction digestion and end sequencing analysis. For southern blot analysis and identification of the ES positive for homologous recombination with single neo integration, 5′ and 3′ external probes were generated by PCR. Thirty micrograms of NotI-linearized targeting vector DNA was electroporated into FVB ES cells and selected with 200 μg/ml G418 (Gibco, Waltham, MA, USA, #10131-035). The primary ES cell screening was performed with 3′ PCR and distal LoxP PCR. 
Five potential targeted clones (A9, B7, B8, H1 and H6) were identified from one plate, which were expanded for further analysis. Upon completion of the ES clone expansion, additional Southern confirmation analysis was performed. Based on this analysis, two out of the five expanded clones (B7 and B8) were confirmed for homologous recombination with single neo integration. Flp electroporation was performed on these clones, and subsequent Neo deleted clones were identified and confirmed by PCR upon expansion. Confirmed Neo-deleted ES cell clones were injected into mouse C57BL/6NTac blastocysts and breeding of the male chimeras to wild-type FVB females resulted in germline transmission. Heterozygous Ccn6 fl/wt mice were crossed with MMTV-Cre line A mice on an FVB background (kind gift of Dr Stephen Weiss). Generation of the targeting vector, cloning, electroporation into embryonic stem cells and breeding of chimeras to F1 generation mice was carried out by Taconic Biosciences, Inc. (Salisbury, CT, USA). Ccn6 fl/fl ;MMTV-Cre mice were intercrossed with MMTV-PyMT mice (FVB/n) to generate breeding cohorts. After weaning, female mice were examined twice a week for the development of mammary tumors by palpation. Tumors were then measured once per week until tumors reached 1 cm 3 , after which tumors were measured twice per week until mice were euthanized. Mice were euthanized when tumors reached 2 cm 3 , to control for tumor size when considering the effect of Ccn6 loss on metastasis. All mammary tumors were excised, weighed and collected for histology and RNA isolation as described elsewhere in the methods. Lungs were inflated with 10% neutral buffered formalin, macrometastases were counted immediately by eye, and then lobes were separated and placed in a cassette for histology. Genotyping Mice and embryos were genotyped by PCR analysis of genomic DNA from mouse tail samples. 
Isolation of genomic DNA was done using the DNeasy Blood and Tissue Kit (Qiagen, Valencia, CA, USA, #69506) according to the manufacturer’s instructions. Primers used to genotype CCN6 alleles were P1, 5′-TTC AAA ATT GTG GGA ATA GCT CCA GTA TT-3′; P2 5′-CCA TTG ATA CTG GTT GAG AAC ACA GTG AG-3′. Amplification with these primers results in a 196-bp fragment from the CCN6 WT allele and a 344-bp fragment from the CCN6 fl allele after neo deletion. Primers used to genotype CRE alleles were P1, 5′-GGT TCT GAT CTG AGC TCT GAG TG-3′; P2 5′-CAT CAC TCG TTG CAT CGA CCG G-3′. PCR was performed for 35 cycles of 94 °C (30 s), 56 °C (90 s) and 72 °C (60 s) for amplification of Ccn6 and Cre alleles. Animal studies All procedures were conducted in accordance with the NIH Guide for the Care and Use of Laboratory Animals and were approved by the Institutional Animal Care and Use Committee at the University of Michigan (UCUCA #PRO00005009). Mice were housed in standard conditions, at 23 °C, with a 14/10 h light/dark cycle, and food and water supplied ad libitum . We observed no difference among any of the control mice genotypes when considering histological and morphological phenotype, immunoblots or qPCR data; therefore, the wild-type control groups presented here consist of a mixture of all of the genotypes. Mice were euthanized and the left # 4 inguinal mammary gland was excised, spread onto a microscope slide and immediately fixed in Carnoy’s fixative for 5 h. Subsequently, the tissue was washed in graded alcohol solutions (70, 50, 25%; 15 min each) and finally in ddH 2 O for 5 min. Staining was carried out overnight in carmine alum stain. The tissue was then dehydrated in graded alcohol solutions (70, 80, 95 and 100%; 15 min each), cleared in xylene overnight, and mounted using Permount. 
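Given the product sizes reported in the genotyping protocol above (196 bp from the wild-type allele, 344 bp from the floxed allele after neo deletion), calling a genotype from observed gel bands is a simple lookup. The helper below is a hypothetical illustration of that logic, not part of the published protocol; the `tol` parameter is an invented allowance for gel-size estimation error:

```python
def call_ccn6_genotype(band_sizes_bp, tol=10):
    """Infer Ccn6 genotype from observed PCR band sizes (in bp).

    196 bp -> wild-type allele; 344 bp -> floxed (fl) allele after neo
    deletion; both bands present -> heterozygote.
    """
    has_wt = any(abs(b - 196) <= tol for b in band_sizes_bp)
    has_fl = any(abs(b - 344) <= tol for b in band_sizes_bp)
    if has_wt and has_fl:
        return "Ccn6 fl/wt"
    if has_fl:
        return "Ccn6 fl/fl"
    if has_wt:
        return "Ccn6 wt/wt"
    return "no call"  # neither expected product observed

print(call_ccn6_genotype([198, 340]))  # Ccn6 fl/wt
```

A real pipeline would also genotype the Cre allele from its separate primer pair before assigning experimental groups.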
Whole mounts were observed under a Leica MZFL III Stereo/Dissecting microscope (Leica Microsystems GmbH, Wetzlar, Germany), and digital images were recorded using an Olympus DP-70 digital camera. A portion of the mammary gland was placed in a cassette and fixed in 10% neutral buffered formalin for 48 h at room temperature. After fixation, tissues were transferred to 70% ethanol, embedded in paraffin, sectioned and stained with hematoxylin and eosin for routine histological examination, or left unstained for later immunohistochemistry. Microarray analysis and statistics RNA was isolated from either 8-week-old knockout and control mouse mammary glands ( n =4 for each group), or mammary tumors from Ccn6 fl/fl ;MMTV-Cre mice and age matched Ccn6 wt/wt ;MMTV-Cre control mammary glands ( n =4 for each group). Isolated RNA was processed on a Mouse Gene ST 2.1 strip assay at the University of Michigan Microarray Core using the Affy Plus Kit (Affymetrix). For statistical analyses, robust multi-array average was used to fit log2 expression values to probesets using the oligo package of bioconductor. 49 Linear models were fit using the limma package of bioconductor to identify differentially expressed probesets. 50 Arrays were weighted based on a gene-by-gene update algorithm designed to downweight genes deemed less reproducible. 51 P -values were adjusted using false discovery rate. 52 Comparison to human metaplastic carcinoma was done using data from a custom Agilent array of 12 patient samples developed at the University of North Carolina, GEO# GSE10885. 5 The mouse Affymetrix array data were compared to the human Agilent array data to identify the matching gene symbols between the arrays; further, a list of the gene symbols that were significantly deregulated in the same direction in both arrays was compiled. Statistical analysis of the resulting contingency table was done using a Fisher’s exact test. 
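The false discovery rate adjustment referenced in the microarray methods above is the Benjamini-Hochberg step-up procedure. The actual analysis used the cited R/Bioconductor packages (oligo, limma); the stdlib-only sketch below only illustrates the adjustment itself:

```python
def benjamini_hochberg(pvalues):
    """Benjamini-Hochberg FDR adjustment (step-up procedure).

    adj_p for the i-th smallest raw P-value is
    min over ranks j >= i of ( p_(j) * m / j ), capped at 1,
    where m is the number of tests.
    """
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    adjusted = [0.0] * m
    running_min = 1.0
    # walk from the largest P-value down, carrying the cumulative minimum
    for rank in range(m, 0, -1):
        i = order[rank - 1]
        running_min = min(running_min, pvalues[i] * m / rank)
        adjusted[i] = running_min
    return adjusted

print(benjamini_hochberg([0.01, 0.04, 0.03, 0.002]))
```

Genes were then retained when the adjusted P-value fell at or below the stated 0.05 cutoff.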
Immunoblots Whole mammary tissue sample (50 mg) was weighed from the most distal portion of the #4 inguinal mammary gland and lysed in RIPA lysis buffer (Thermo-Fisher Scientific, Waltham, MA, USA) with protease and phosphatase (Thermo-Fisher Scientific) inhibitors at 100 × dilution in Precellys 2 ml tissue homogenizing mixed beads kit (Cayman Chemical, Ann Arbor, MI, USA) using a Precellys homogenizer at 6500 rpm (2 cycles of 15 s motion−15 s rest). Immunoblots were carried out with 40 μg of total protein. Membranes were blocked in Tris-buffered saline, 0.1% Tween 20 (TBS-T) (Bio-Rad, Philadelphia, PA, USA, with 0.05% Tween20) with 4% milk (Bio-Rad, #170-6404) and incubated with primary antibodies in TBS-T at 4 °C overnight. Membranes were stained with Ponceau-S solution (Sigma-Aldrich, St Louis, MO, USA) to confirm equal loading. Anti-CCN6 (Santa Cruz Biotechnology, Santa Cruz, CA, USA, #SC-25443) was the primary antibody. Immunohistochemical studies Immunohistochemistry of human breast cancer tissue samples was performed in 110 cases of invasive breast carcinoma comprising 82 invasive ductal and 28 metaplastic carcinomas obtained with University of Michigan IRB approval (HUM00050330). Hematoxylin and eosin-stained samples were histologically evaluated and 5 μm thick sections were immunostained using anti-CCN6 (Santa Cruz Biotechnology) following published protocols. 9 CCN6 expression was evaluated as either low or high based on intensity of staining and percentage of cells staining. 9 To immunostain mouse mammary glands formalin-fixed, paraffin-embedded sections were cut at 5 μm and rehydrated with water. Heat induced epitope retrieval was performed with FLEX TRS High pH Retrieval Buffer (9.01) for CCN6, for 20 min (Dako, Carpinteria, CA). 
Immunohistochemical staining was performed using Peroxidase for 5 min to quench endogenous peroxidases, followed by incubation for 60 min with one of the following antibodies: CCN6 (Santa Cruz Biotechnology), CK-18 (Abcam, Cambridge, MA, USA), E-cadherin (BD Biosciences, San Jose, CA, USA), Vimentin (BD Biosciences), CD10 (BD Biosciences), p63 (Santa Cruz Biotechnology), ERα (Santa Cruz Biotechnology) and HER-2/neu (Santa Cruz Biotechnology). The EnVision+ Rabbit horseradish peroxidase System was used for detection (DakoCytomation, Santa Clara, CA, USA). 3,3'-diaminobenzidine chromogen was then applied for 10 min. Slides were counterstained with hematoxylin for 5 s and then dehydrated and coverslipped. For quantification, three representative areas from three Ccn6 fl/fl ;MMTV-Cre and three Ccn6 wt/wt ;MMTV-Cre mice were counted for epithelial cells that stained positive for the protein of interest, calculated as a percentage of the total number of epithelial cells; all counts were done blind to genotype by two separate investigators. Statistics Data are expressed as mean±s.d. All experiments were repeated at least three times with similar results. Two-tailed Student’s t tests were performed to determine statistical significance; P -values are recorded in the figure legends. A P -value less than 0.05 was considered statistically significant.
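The quantification scheme above reduces each animal to a percent-positive score, summarized as mean±s.d. and compared with a two-sample Student's t test. A stdlib-only sketch of the score and the equal-variance t statistic; the counts below are invented for illustration, and a P-value would come from the t distribution with n1+n2-2 degrees of freedom:

```python
from statistics import mean, stdev

def percent_positive(positive_counts, total_counts):
    """Percent of positively stained epithelial cells, pooled across fields."""
    return 100.0 * sum(positive_counts) / sum(total_counts)

def students_t(group_a, group_b):
    """Two-sample Student's t statistic (pooled, equal-variance form)."""
    na, nb = len(group_a), len(group_b)
    pooled_var = ((na - 1) * stdev(group_a) ** 2 +
                  (nb - 1) * stdev(group_b) ** 2) / (na + nb - 2)
    se = (pooled_var * (1 / na + 1 / nb)) ** 0.5
    return (mean(group_a) - mean(group_b)) / se

# invented per-mouse percent-positive scores, three mice per genotype
ko_scores = [78.0, 81.5, 75.0]
wt_scores = [42.0, 38.5, 45.0]
t_stat = students_t(ko_scores, wt_scores)
print(f"{mean(ko_scores):.1f}±{stdev(ko_scores):.1f} vs "
      f"{mean(wt_scores):.1f}±{stdev(wt_scores):.1f}, t = {t_stat:.2f}")
```

In practice scipy.stats.ttest_ind gives the t statistic and two-tailed P-value in one call.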
For more than a decade, Celina Kleer, M.D., has been studying how a poorly understood protein called CCN6 affects breast cancer. To learn more about its role in breast cancer development, Kleer's lab designed a special mouse model - which led to something unexpected. They deleted CCN6 from the mammary gland in the mice. This type of model allows researchers to study effects specific to the loss of the protein. As Kleer and her team checked in at different ages, they found delayed development and mammary glands that did not develop properly. "After a year, the mice started to form mammary gland tumors. These tumors looked identical to human metaplastic breast cancer, with the same characteristics. That was very exciting," says Kleer, Harold A. Oberman Collegiate Professor of Pathology and director of the Breast Pathology Program at the University of Michigan Comprehensive Cancer Center. Metaplastic breast cancer is a very rare and aggressive subtype of triple-negative breast cancer - a type that is itself considered aggressive. Up to 20 percent of all breast cancers are triple-negative; only 1 percent are metaplastic. "Metaplastic breast cancers are challenging to diagnose and treat. In part, the difficulties stem from the lack of mouse models to study this disease," Kleer says. So not only did Kleer gain a better understanding of CCN6, but her lab's findings open the door to a better understanding of this very challenging subtype of breast cancer. The study is published in Oncogene. "Our hypothesis, based on years of experiments in our lab, was that knocking out this gene would induce breast cancer. But we didn't know if knocking out CCN6 would be enough to unleash tumors, and if so, when, or what kind," Kleer says. "Now we have a new mouse model, and a new way of studying metaplastic carcinomas, for which there's no other model."
One of the hallmarks of metaplastic breast cancer is that the cells are more mesenchymal, a cell state that enables them to move and invade. Likewise, researchers saw this in their mouse model: knocking out CCN6 induced the process known as the epithelial to mesenchymal transition. "This process is hard to see in tumors under a microscope. It's exciting that we see this in the mouse model as well as in patient samples and cell lines," Kleer says. The researchers looked at the tumors developed by mice in their new model and identified several potential genes to target with therapeutics. Some of the options, such as p38, already have antibodies or inhibitors against them. The team's next steps will be to test these potential therapeutics in the lab, in combination with existing chemotherapies. They will also use the mouse model to gain a better understanding of metaplastic breast cancer and discover new genes that play a role in its development. "Understanding the disease may lead us to better ways to attack it," Kleer says. "For patients with metaplastic breast cancer, it doesn't matter that it's rare. They want - and they deserve - better treatments."
10.1038/onc.2016.381
Biology
World's smallest bears' facial expressions throw doubt on human superiority
Facial Complexity in Sun Bears: Exact Facial Mimicry and Social Sensitivity, Scientific Reports (2019). DOI: 10.1038/s41598-019-39932-6 , www.nature.com/articles/s41598-019-39932-6 Journal information: Scientific Reports
http://dx.doi.org/10.1038/s41598-019-39932-6
https://phys.org/news/2019-03-world-smallest-facial-human-superiority.html
Abstract Facial mimicry is a central feature of human social interactions. Although it has been evidenced in other mammals, no study has yet shown that this phenomenon can reach the level of precision seen in humans and gorillas. Here, we studied the facial complexity of group-housed sun bears, a typically solitary species, with special focus on testing for exact facial mimicry. Our results provided evidence that the bears have the ability to mimic the expressions of their conspecifics and that they do so by matching the exact facial variants they interact with. In addition, the data showed the bears produced the open-mouth faces predominantly when they received the recipient’s attention, suggesting a degree of social sensitivity. Our finding questions the relationship between communicative complexity and social complexity, and suggests the possibility that the capacity for complex facial communication is phylogenetically more widespread than previously thought. Introduction Behavioural mimicry pervades human social interactions 1 . Facial, postural and vocal signals are regularly automatically shared by others, sometimes resulting in or from emotional contagion 2 . There has been a vast literature on facial mimicry 3 , leading to the view that humans are capable of matching the facial expressions of others with great precision 4 (hereafter, “exact facial mimicry”), with benefits during social interactions such as strengthened social bonds and sharing detailed emotional information 2 , 5 . Although it has been shown that facial mimicry is also present in non-human mammals 6 , 7 , 8 , only one study of gorillas ( Gorilla gorilla gorilla ) 9 has so far demonstrated the precision seen in humans, i.e., by mimicking one variant of a facial display over another. In this study, we examined facial expressions in spontaneous social play of group-housed sun bears ( Helarctos malayanus ).
Many basic facts of sun bear biology are unknown, due to the difficulties of studying this elusive species in the wild 10 , 11 . Nonetheless, it is known that sun bears feed on an omnivorous diet in tropical rainforests 12 , and a study on adult sun bears in Ulu Segama Forest Reserve showed that they seldom participate in social interactions with one another outside of mating contexts despite home ranges overlapping by up to 20% 10 , indicating a largely solitary lifestyle. Notably, mothers often raise two offspring simultaneously, which are highly altricial for the first 3 months and interact with the mother extensively during this period 11 . Facial expressions in sun bears have not been studied, but open-mouth expressions are shown mostly by juveniles and during play in the closely related American black bears ( Ursus americanus ) 13 . Sun bears use two distinct variants of open-mouth faces during play (personal observations), similar to American black bears 13 and other carnivorans 14 . This observation is intriguing because it raises the possibility that sun bears exhibit complex forms of facial communication comparable to those that have been shown mostly in species with strong social tendencies 6 , 7 , 8 , 9 . In turn, this implies that complex forms of communication cannot be explained only as evolved adaptations to a demanding social environment 15 . Hallmarks of complex facial communication include, for instance, muscular variation in expressions 15 , facial mimicry 1 , 3 , 4 , and social sensitivity to the attentional states of others during expression production 16 , 17 , 18 . As facial mimicry occurs in phylogenetically distant mammalian species during play (primates 6 , 7 , 9 ; dogs 8 ) and sun bears produce distinct facial variants during social play, which is an essential precondition for exact mimicry, our hypothesis is that facial mimicry and exact facial mimicry are present in sun bears during social play.
Additionally, social sensitivity was also measured as a component of such complexity. Social sensitivity via facial communication was previously suggested for dogs ( Canis familiaris ) 16 and apes 17 , 18 , mammalian taxa that are closely associated with humans on a social and phylogenetic level, respectively. Social sensitivity is essential to effective facial communication, particularly during play, wherein the absence of play signalling often escalates play into aggression 19 . Given that bears are known to engage in play 13 , we expect sun bears to show social sensitivity in their facial expression production. Materials and Methods Sun bears and data collection Twenty-two group-housed rehabilitant sun bears (aged 2–12 years; mean age = 6.0 ± 2.9 SD) of the Bornean Sun Bear Conservation Centre (Malaysia) were studied. All bears were unrelated. The bears were video-recorded using 3-minute focal recordings and ad libitum recordings from January 2015 to September 2016 and from August to December 2017. Recordings of the bears were collected in three outdoor forest enclosures, ranging from 0.13–0.32 hectares, meaning enclosures were large enough that bears did not have to socially interact by necessity. Group compositions were changed throughout the data collection of this study, but the group sizes within enclosures did not exceed six bears. For further details about these bears (see Table S1 ), study site and recording equipment, see Supplementary Methods. Behavioural coding Social play involved one bear directing a play action towards another bear, and the other bear responding with a play action. Social play began with the first play action and ended when play actions stopped for 10 seconds or more. Three hundred and seventy-two social play bouts were identified.
Within these play bouts, scenes of rough play were observed 135 times and scenes of gentle play were observed 333 times (see supplementary electronic materials for examples and definitions of rough and gentle play). A single play bout could include both rough and gentle play at different points. Nine-hundred and thirty-one open-mouth facial expressions were coded during these bouts when the mouth opened widely and the jaw dropped. All observed expressions were coded. Two variants were identified: WUI (With Upper Incisors) occurred when the upper lip and nose were raised, which resulted in the wrinkling of the muzzle bridge and the revealing of the upper incisors (i.e., the row of small teeth between the two canine teeth) (n = 450); NUI (No Upper Incisors) occurred when the bear did not raise the upper lip and nose to such an extent, displaying therefore no wrinkling of the muzzle bridge and no upper incisors (n = 481 expressions). During social play, it was coded whether playmates were facing each other. Bears had to be within a 45-degree head rotation to be considered face-to-face. Inter-coder reliability tests showed Kappa values of 0.73 for facial variants as well as 0.80 for facing (both based on 102 expressions – 10.9% of the sample) and 0.75 for play intensity (based on 51 play bouts – 13.7% of the sample). For additional details on the behavioural coding, see Supplementary Methods Tables S2 – S4 . Data analyses Facial mimicry is a phenomenon whereby a facial expression of a subject is triggered specifically by a similar facial expression it has just observed in another individual (See supplementary electronic materials). Here, the subject was always the individual who perceived an expression, the individual who first produced an expression was always the ‘playmate’. To examine whether subjects showed facial mimicry, we used the method developed by Davila-Ross and colleagues 6 , 20 . 
We first coded whether a subject produced an open-mouth face within one second of perceiving an open-mouth face in their playmate while face-to-face; such types of scenes were named ‘scene 1’. We then searched for comparable scenes, where the same dyad was engaging in the same play intensity while face-to-face, but wherein the playmate was not showing an open-mouth face; such scenes were named ‘scene 2’. We then coded whether the subject showed an open-mouth face within 1 second of the onset of scene 2. Combining the two types of scenes allows assessment of whether subjects’ facial behaviour is influenced by playmates’ facial behaviour while controlling for other relevant variables, i.e., dyad composition and play intensity 20 . The starting point for locating a scene 2 was 5 seconds following a scene 1. If the playmate was producing an open-mouth face at this point, the search was continued linearly until a scene wherein the playmate was not producing an open-mouth face was found. Together, scenes 1 and 2 gave rise to 4 possible case types: the subject shows an open-mouth face only in scene 1, only in scene 2, in both scenes, or in neither scene. If sun bears show facial mimicry, they should be significantly more likely to produce open-mouth faces in the first case type compared to the second case type. The comparison of these case types tests directly whether the open-mouth faces of the subjects are actual responses to open-mouth faces of the playmates and, thus, this method represents a highly controlled, quantitative way to gauge responses to a specific stimulus in natural social settings 6 . Afterwards, we examined whether the subjects responded to their playmates’ open-mouth faces with exactly matching expressions.
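The paired-scene design described above reduces to a 2 × 2 table of case types, of which only the two discordant cells enter the test statistic. A minimal stdlib Python sketch of McNemar's chi-square; the counts below are invented for illustration and are not the study's data:

```python
def mcnemar_chi2(only_scene1, only_scene2):
    """McNemar's chi-square for paired binary observations.

    only_scene1: cases where the subject showed an open-mouth face only in
                 scene 1 (the playmate had just shown one).
    only_scene2: cases where the subject showed one only in scene 2
                 (the playmate had not shown one).
    Concordant cases (subject always or never responds) drop out of the
    statistic, which uses only the discordant counts b and c.
    """
    b, c = only_scene1, only_scene2
    return (b - c) ** 2 / (b + c)

# Illustrative counts for the four case types.
cases = {"scene1_only": 40, "scene2_only": 4, "both": 6, "neither": 120}
chi2 = mcnemar_chi2(cases["scene1_only"], cases["scene2_only"])
print(round(chi2, 2))  # (40 - 4)**2 / 44
```

Some implementations apply a continuity correction, (|b - c| - 1)**2 / (b + c); the uncorrected form is shown here for clarity.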
Specifically, we coded a response as exactly matching when a given subject displayed a facial variant that matched the variant their playmate had produced within the preceding second (e.g., NUI following NUI). We coded ‘Non-exact’ behaviour otherwise (e.g., NUI following WUI). See Fig. 1 for an example. The number of times each subject matched the perceived variant was compared to the number of times each subject produced a non-exact variant. If a significantly greater number of expressions were exactly matching rather than non-exact, this would represent evidence of exact facial mimicry in sun bears. Only expressions produced when face-to-face were included in all mimicry analyses because an expression not seen could, by definition, not be mimicked. Figure 1 Exact matching of open-mouth variants. Series of photographs demonstrating exact matching of ( A ) NUI open-mouth expression and ( B ) WUI open-mouth expression. Full size image To examine social sensitivity, we compared the total number of expressions per subject that were produced when face-to-face versus not face-to-face. Finally, to explore the role of open-mouth expressions and facial matching during social play, rates of expression production and rapid responses (both exactly matching and non-matching) were calculated by dividing the number of expressions produced or the total number of instances of rapid facial responses by the amount of time spent engaging in rough or gentle social play. These rates were then compared within and between play intensities. The relationship between rates of expression production, mimicry, and play duration was also examined. For further analyses (on play intensity), see Supplementary Methods. For multiple comparisons, Bonferroni corrections were used. Results Of the 22 bears studied, 21 produced open-mouth expressions, and 13 showed them within 1 second following the open-mouth face of a playmate while face-to-face.
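The exact-matching comparison in the methods above can be sketched as a per-subject tally over rapid response pairs; the example pairs below are invented for illustration, not observed data:

```python
def match_counts(responses):
    """Tally exact vs. non-exact rapid responses for one subject.

    `responses` is a list of (playmate_variant, subject_variant) pairs,
    each recorded when the subject responded within 1 s while face-to-face.
    Returns (exact, non_exact) counts.
    """
    exact = sum(p == s for p, s in responses)
    return exact, len(responses) - exact

# Illustrative response pairs using the two coded variants.
subject_pairs = [("NUI", "NUI"), ("NUI", "NUI"), ("WUI", "WUI"),
                 ("WUI", "NUI"), ("NUI", "NUI"), ("WUI", "WUI")]
exact, non_exact = match_counts(subject_pairs)
print(exact, non_exact)  # 5 1
```

Per-subject counts of this kind would then feed a paired comparison across subjects, such as the Wilcoxon signed-ranks tests reported in the study.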
A mean of 19.5% (±21.7% SD) of open-mouth expressions produced by the playmates when facing each other was followed by an open-mouth expression within 1 second. Percentages per subject can be seen in Supplementary Results Table S5 . Testing for facial mimicry To test for facial mimicry, we examined whether subjects were more likely to produce their open-mouth expressions as a response to the open-mouth expression in a playmate than as a response to an expression with a closed mouth in a playmate. Subjects showed significantly more expressions in the former (one-tailed McNemar test, χ 2 = 294.22, p < 0.001) (Fig. 2 ). See Supplementary Results Table S6 for a breakdown of case types per subject. Figure 2 Number of each case type observed in total. The white face on the left represents the playmate, and the orange face on the right represents the subjects’ facial behaviour within the following second. Circular mouths correspond to a facial expression whereas flat mouths do not. Full size image Testing for exact facial mimicry Following playmates’ NUI expressions, subjects produced significantly higher numbers of NUI expressions than WUI expressions within 1 second (one-tailed Wilcoxon signed-ranks, Z = −2.61, T = 2, N = 10, p = 0.005); 72.2% (±7.1% SD) of these facial expressions produced by subjects were NUI expressions. Similarly, the subjects showed significantly more WUI expressions than NUI expressions following the playmates’ WUI expressions produced within 1 second prior (Z = −2.30, T = 10, N = 12, p = 0.011) (Fig. 3 ); 82.2% (±9.0% SD) of these facial expressions produced by subjects were WUI expressions. See Tables S7 and S8 of the Supplementary Results for a breakdown per subject. Figure 3 Subject NUI (N = 10 bears) and WUI (N = 12) expressions following NUI and WUI expressions of the social partners within 1 second. The box plots depict medians, upper and lower quartiles, and minimum and maximum range values. *p < 0.05. 
Full size image Facial behavior in relation to social sensitivity, play intensity, and play duration Social sensitivity The bears showed significantly higher occurrences of open-mouth expressions when face-to-face than when not face-to-face (two-tailed Wilcoxon Signed-Ranks, Z = −3.17, T = 1, N = 21, p < 0.001). The bears were facing each other only 14.6% (±5.5% SD) of the play duration. Play intensity Within gentle play, exact matching of open-mouth faces within 1 second occurred significantly more frequently than matching of open-mouth faces that was not exact (two-tailed Wilcoxon Signed-Ranks, Z = −0.268, T = 1, N = 11, p = 0.042). Within rough play, no such differences were found (Z = −0.943, T = 6, N = 7, p = 0.345). Furthermore, exact matching of NUI variants did not differ significantly from exact matching of WUI variants during gentle play (Z = −0.135, N = 6, p = 0.892), nor during rough play (Z = −0.406, N = 6, p = 0.684). No significant differences between gentle versus rough play were found for overall rates of exact open-mouth matching (Z = −0.710, T = 30, N = 12, p = 0.477), rates of exact NUI matching (Z = −1.183, T = 7, N = 7, p = 0.237) and rates of exact WUI matching (Z = −0.700, T = 13, N = 8, p = 0.484). For additional results on play intensity, see Supplementary Results. Play duration No significant correlation was observed between the rate of expressions produced (total number of expressions divided by the total amount of time spent playing during observations) and the play duration per subject (Spearman’s rank correlation: r s (22) = −0.07, p = 0.750). No statistically significant relationship was found between the play bout duration and the rate of facial mimicry; this was the case when all matched expressions (both exact and non-exact) were examined (Spearman’s rank correlation, r s (14) = −0.01, p = 0.960) and when only exactly matched open-mouth expressions were examined (Spearman’s rank correlation: r s (11) = −0.21, p = 0.500).
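The rate and rank-correlation computations used in these analyses are straightforward to reproduce. A stdlib-only sketch, assuming untied values for the closed-form Spearman formula; all numbers are illustrative, not the study's data:

```python
def rate_per_minute(n_events, seconds_in_play):
    """Events (expressions or rapid responses) per minute of play time."""
    return n_events / (seconds_in_play / 60)

def spearman_rho(x, y):
    """Spearman rank correlation via 1 - 6*sum(d^2) / (n*(n^2 - 1)).
    This closed form assumes no tied values."""
    def ranks(values):
        # Rank 1 = smallest value; assign ranks back to original positions.
        order = sorted(range(len(values)), key=lambda i: values[i])
        r = [0] * len(values)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# 12 expressions in 6 minutes of gentle play -> 2.0 per minute.
print(rate_per_minute(12, 360))
# Perfectly monotonic illustrative data -> rho = 1.0.
print(spearman_rho([0.5, 1.2, 2.0, 3.1], [4.0, 9.0, 15.0, 30.0]))
```

With tied values, a tie-corrected ranking (average ranks plus the Pearson correlation of the ranks) would be needed instead.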
Discussion This study examined facial communication in sun bears. The results showed bears produced the majority of their open-mouth expressions when their playmates faced them. To our knowledge, this is the first demonstration that the production of facial expressions is sensitive to a social partner’s attentional state in a bear species. Thus, even a non-domesticated non-primate mammal is likely to have social sensitivity as part of its communication, comparable to dogs 16 and apes 17 , 18 , who modify their facial morphologies when seen by social partners. Such sensitivity may be important for efficiently communicating facial displays. We did, however, fail to find any relationship between facial expression production and play duration, in contrast to previous studies 21 , 22 , indicating that unveiling the function of facial expression production in sun bears requires further investigation. A special focus of this study was facial mimicry and exact facial mimicry. Firstly, the results suggested that the sun bears mimicked facial expressions of their playmates. Facial expressions of sun bears are, thus, likely to promote communicative exchanges via mimicking, similar to dogs 8 and primates 6 , 7 , 9 . Secondly, the results showed that the bears matched the same facial variant of their social partners, suggesting facial mimicry in sun bears to be ‘exact’. This precision in facial replication, shown so far only in humans 4 and gorillas 9 , was found despite primates arguably having more specialized brain regions for facial processing than other mammals 23 . As neither facial variant was more likely to occur in either rough or gentle play, such exact matching cannot be attributed to particular variants only occurring in particular contexts. Moreover, play vocalizations in the studied sun bears are rare and they were not heard to be produced during these facial exchanges, so this is unlikely to have impacted the results.
Although there was no relationship between facial behaviour and play duration, in contrast to previous studies 21 , 22 , facial mimicry was exact predominantly during gentle play. Perhaps exact facial mimicry helps to signal a readiness to transition into rougher play in sun bears, which is consistent with the proposition that facial communication helps regulate high play intensity 19 , 22 , and is a pattern previously associated with canid play signalling 19 . However, this possibility requires further research into whether gentle play is more likely to transition into rough play when exact facial mimicry occurs. Alternatively, exact facial mimicry might be more directly linked to gentle play and thereby function, for instance, to strengthen social bonds 24 . Again, this requires further research, involving quantifying social bonds between group members and examining whether exact mimicry is more common among closely bonded dyads. Altogether, this study provided evidence that sun bears produce facial expressions to communicate in an efficient, effective and exact way. Such complexity in facial communication was previously not known for a non-domesticated, non-primate species and, furthermore, cannot be explained by evolved adaptations to a complex social environment, as these bears are primarily solitary in the wild. Consequently, we suggest the ability to facially communicate in complex ways could be a pervasive trait present across various mammal taxa 25 , allowing mammals to navigate socio-ecologies that can vary in space and time 26 . To explore this possibility, we encourage researchers to test for the presence of this trait in a wide range of mammalian taxa. Ethics Ethics approval was obtained from the University of Portsmouth Animal Research Ethics Committee. BSBCC is a joint project with Land Empowerment Animals People, Sabah Wildlife Department and Sabah Forestry Department.
Research at BSBCC is conducted in accordance with national legal standards of animal care. Data Availability Data are available as supplementary material.
The world's smallest bears can exactly mimic another bear's facial expressions, casting doubt on humans' and other primates' supremacy at this subtle form of communication. It is the first time such exact facial mimicry has been seen outside of humans and gorillas. The research, by Dr. Marina Davila-Ross and Ph.D. candidate Derry Taylor, both at the University of Portsmouth, is published in Scientific Reports. The researchers studied sun bears—a solitary species in the wild, but also surprisingly playful—for more than two years. They found bears can use facial expressions to communicate with others in a similar way to humans and apes, strongly suggesting other mammals might also be masters of this complex social skill and, in addition, have a degree of social sensitivity. Dr. Davila-Ross said: "Mimicking the facial expressions of others in exact ways is one of the pillars of human communication. Other primates and dogs are known to mimic each other, but only great apes and humans, and now sun bears, were previously known to show such complexity in their facial mimicry. "Because sun bears appear to have facial communication of such complexity and because they have no special evolutionary link to humans, like monkeys and apes do, nor are they domesticated animals like dogs, we are confident that this more advanced form of mimicry is present in various other species. This, however, needs to be further investigated. "What's most surprising is the sun bear is not a social animal. In the wild, it's a relatively solitary animal, so this suggests the ability to communicate via complex facial expressions could be a pervasive trait in mammals, allowing them to navigate their societies." Facial mimicry is when an animal responds to another's facial expression with the same or similar expression. Mr Taylor coded the facial expressions of 22 sun bears in spontaneous social play sessions.
The bears, aged 2-12, were housed at the Bornean Sun Bear Conservation Centre in Malaysia, where enclosures were large enough to allow bears to choose whether or not to interact. Despite the bears' preference in the wild for a solitary life, the bears in this study took part in hundreds of play bouts, with more than twice as many gentle play sessions compared to rough play. During these encounters, the research team coded two distinct expressions—one involving a display of the upper incisor teeth, and one without. The bears were most likely to show precise facial mimicry during gentle play. Mr Taylor said such subtle mimicking could be to help two bears signal that they are ready to play more roughly, or to strengthen social bonds. He said: "It is widely believed that we only find complex forms of communication in species with complex social systems. As sun bears are a largely solitary species, our study of their facial communication questions this belief, because it shows a complex form of facial communication that until now was known only in more social species. "Sun bears are an elusive species in the wild and so very little is known about them. We know they live in tropical rainforests, eat almost everything, and that outside of the mating season adults have little to do with one another. "That's what makes these results so fascinating—they are a non-social species that, when face to face, can communicate subtly and precisely." Sun bears, also known as honey bears, stand 120-150 cm tall and weigh up to 80kg. They are endangered and live in the tropical forests of south-east Asia. Social sophistication aside, sun bear numbers are dwindling due to deforestation, poaching and being killed by farmers for eating crops. Increasingly, new mother bears are killed so their cubs can be taken and raised as a pet or kept in captivity as 'bile bears' where their bile is harvested for use in some Chinese medicines.
The field research was funded by the Royal Society and the Leakey Foundation. Previous research at the University of Portsmouth showed dogs alter their facial expressions if they know someone is looking at them.
10.1038/s41598-019-39932-6
Other
Pair of star-crossed oviraptors yield new clues about dinosaur mating habits
"A possible instance of sexual dimorphism in the tails of two oviraptorosaur dinosaurs." Scientific Reports 5, Article number: 9472 DOI: 10.1038/srep09472 Journal information: Scientific Reports
http://dx.doi.org/10.1038/srep09472
https://phys.org/news/2015-04-pair-star-crossed-oviraptors-yield-clues.html
Abstract The hypothesis that oviraptorosaurs used tail-feather displays in courtship behavior previously predicted that oviraptorosaurs would be found to display sexually dimorphic caudal osteology. MPC-D 100/1002 and MPC-D 100/1127 are two specimens of the oviraptorosaur Khaan mckennai . Although similar in absolute size and in virtually all other anatomical details, the anterior haemal spines of MPC-D 100/1002 exceed those of MPC-D 100/1127 in ventral depth and develop a hitherto unreported “spearhead” shape. This dissimilarity cannot be readily explained as pathologic and is too extreme to be reasonably attributed to the amount of individual variation expected among conspecifics. Instead, this discrepancy in haemal spine morphology may be attributable to sexual dimorphism. The haemal spine form of MPC-D 100/1002 offers greater surface area for caudal muscle insertions. On this basis, MPC-D 100/1002 is regarded as most probably male and MPC-D 100/1127 is regarded as most probably female. Introduction As in all major vertebrate groups, dinosaurs must have included many species with gross anatomical traits that were sexually dimorphic. However, the identification of sexual dimorphism in dinosaurs is hindered by the limitations of an ancient fossil record, which restricts comparative sample size, degrades the quality of available specimens and usually precludes the observation of non-osteological features. Among non-avian theropod dinosaurs, previous attempts to recognize sexual dimorphism have been controversial and inconclusive. Here we describe a possible instance of sexual dimorphism based on chevron morphology in the oviraptorosaur Khaan mckennai .
Two specimens of Khaan mckennai (Paleontological Center of the Mongolian Academy of Sciences MPC-D 100/1002 and MPC-D 100/1127) were excavated from the same Upper Cretaceous locality in the Djadokhta Formation (Ukhaa Tolgod, Gurvan Tes Somon, Omnogov Aimak, Gobi Desert, Mongolia) and were found in close proximity to each other (approximately 20 cm away) and in the same bedding plane 1 . Geological work at Ukhaa Tolgod indicates that the preserved animals were buried alive by catastrophic dune collapses, precipitated by heavy rains 2 , 3 . MPC-D 100/1002 and MPC-D 100/1127 were likely killed by a single collapse event and appear to have been near each other prior to death. The two specimens were collectively given the informal, but perhaps fortuitous, nicknames of “Romeo and Juliet” or, occasionally, “Sid and Nancy”. MPC-D 100/1002 is slightly larger than MPC-D 100/1127 (femur lengths of 195 mm and 190 mm, respectively). MPC-D 100/1127 – the holotype of Khaan mckennai 1 – is a complete skeleton. MPC-D 100/1002 is a nearly complete skeleton, missing only the middle and posterior portions of the tail. It nevertheless displays all diagnostic characteristics of Khaan mckennai , including a proximally narrow metacarpal III that does not contact the distal carpals (for a full description of both specimens and their taxonomic assignment see Clark et al. 1 and Balanoff and Norell 4 ). In both individuals, all vertebral neural arches and centra are fully fused, indicating that both had reached adulthood before death 5 , 6 , although histological evidence is still wanting. Based on the similarities in size and proportions, both individuals also appear to have reached roughly the same level of maturity. However, the chevrons of MPC-D 100/1002 and MPC-D 100/1127 show a striking disparity in morphology. 
Description The anterior chevrons of MPC-D 100/1127 have an overall form that is typical of non-avian theropods, including that previously reported from other oviraptorosaurs ( Figure 1 ). The anterior haemal spines are flat laterally and are simple finger-like projections. In terms of dorsoventral depth, the haemal spine of the second chevron is the deepest and all more posterior haemal spines sequentially decrease in depth. At roughly the middle of the tail, the haemal spines gradually transition into a more boot-shaped form, with weak anterior projections and strong posterior projections ( Figure 1 and 2 ). Although the posterior haemal spines of some oviraptorosaurs (such as Anzu wyliei CM 78000 and 78001, Gigantoraptor erlianensis LH V0011 and Nomingia gobiensis MPC-D 100/119) are rectangular 7 , the general pattern of gradual chevron shape change seen in MPC-D 100/1127 is typical, not only of oviraptorosaurs, but of most non-avian coelurosaurs 8 . As in MPC-D 100/1127, the first chevron of MPC-D 100/1002 lies between the second and third caudal vertebrae; it also has a lateromedially flat haemal spine that is a simple finger-like projection, although the ventral tip is proportionately wider anteroposteriorly than that of the first chevron of MPC-D 100/1127. The second chevron of MPC-D 100/1002 has a haemal spine that is transversely flat and has a prominent posterior heel-like projection located roughly two thirds the way down the central shaft. The haemal spines of the third and fourth chevrons resemble that of the second, but are sequentially shorter and have an increasingly more prominent posterior projection. Each also has a smaller, but increasingly prominent, anterior projection that is slightly ventral to the posterior projection. The haemal spine of the fourth chevron has a distinctive ventrally-projecting spear-head shape ( Figure 1 and 2 ). 
Unfortunately, only the first four chevrons of MPC-D 100/1002 are preserved and it is impossible to determine whether the haemal spine form of the fourth chevron is representative of more posterior chevrons, or whether the haemal spine shape changed posteriorly. Relative to the proportions of the vertebrae, the anterior haemal spines of MPC-D 100/1002 all exceed those of MPC-D 100/1127 in ventral depth ( Figure 1 and 2 , Table 1 ). Table 1 Caudal vertebra and chevron measurements of MPC-D 100/1002 and MPC-D 100/1127 -- “???” indicates measurements that were obscured and could not be reliably measured Full size table Discussion The dissimilarity among the anterior chevrons of MPC-D 100/1002 and MPC-D 100/1127 cannot be readily explained as pathologic. In both specimens, the chevrons show no signs of rugosities or of bilateral asymmetries and neither form is limited to a single chevron. The finger-like form of MPC-D 100/1127 is shared by all anterior haemal spines and the more unusual spear-head haemal spine form of MPC-D 100/1002 manifests progressively across the second, third and fourth haemal spines ( Figure 1 and 2 ). Similarly, the differences between the haemal spine forms of MPC-D 100/1002 and MPC-D 100/1127 are too extreme and purposive to be reasonably attributed to the degree of individual variation that is expected among conspecifics. The possibility that the observed discrepancies in chevron form are the result of sexual dimorphism merits consideration. It has been previously reported that the anterior chevrons of modern crocodilians are sexually dimorphic 9 , 10 , 11 , 12 , 13 and it has been previously hypothesized and widely repeated within the literature that the anterior chevrons of non-avian theropod dinosaurs were as well 10 , 11 , 12 , 13 , 14 , 15 . Two functional explanations have been offered to explain alleged sex-specific chevron forms.
First, anterior chevrons with reduced haemal spines would theoretically increase the space between the axial skeleton and the posterior projection of the ischium. This would provide more room for the oviduct and for the passage of eggs 12 , 13 . Thus, reduced anterior haemal spines would be a female characteristic. Second, the haemal spine of the first chevron could serve as an attachment surface for the penis retractor muscle 10 , 11 , 13 . Following this explanation, lengthy anterior haemal spines would be a characteristic of males, which would benefit from the potentially larger surface for muscle attachment. Oviraptorosaurs are known to have laid eggs that were large in comparison to adult body size and to have laid pairs of eggs simultaneously 16 , 17 , 18 , 19 . Female oviraptorosaurs were therefore particularly likely to have had large pelvic canals, which would make oviraptorosaurs good candidates to display sexually dimorphic chevrons. However, in a reconsideration of chevron sexual dimorphism in crocodilians, Erickson et al. 20 examined the morphology of 36 Alligator mississippiensis of known sex and found no significant support for the claim that chevron shape is a means of determining sex. Similarly, Peterman and Gauthier 21 recently reported no evidence of chevron sexual dimorphism in a survey of 31 specimens of the Tiger Whiptail lizard ( Aspidoscelis tigris ). These results cast serious doubt on expectations of identifying sexually dimorphic chevrons in non-avian theropods for the previously hypothesized reasons. More recently, based on numerous anatomical traits related to enhanced caudal musculature and caudal flexibility and on the discovery of feather-fan supporting pygostyles in multiple oviraptorosaur genera, Persons et al. 
7 offered the hypothesis that oviraptorosaurs had tails that were uniquely adapted to serve as dynamic display structures (this function appears to be supported by reconstructed evolutionary changes in intervertebral joint stiffness 22 ). Persons et al. 7 postulated that these caudal displays were likely employed during courtship rituals. Following this, Persons et al. 7 predicted that oviraptorosaur tails would be found to show sexual dimorphism. It is not clear how all aspects of the chevron forms of MPC-D 100/1002 and MPC-D 100/1127 might enhance the theoretical role of the tail as a display structure. However, the anterior and posterior projections of the haemal spines of MPC-D 100/1002 certainly increase the surface area available for caudal muscle insertions. Furthermore, the relatively expanded ventral haemal tips of MPC-D 100/1002 provided enlarged insertion surfaces for the m. ischiocaudalis 23 , which is a key muscle in controlling lateral and ventral tail flexure. If sexual dimorphism is accepted as an explanation for the morphological differences between MPC-D 100/1002 and MPC-D 100/1127, then the question of which of the two forms represents which sex remains. Based on the results from the comparative studies of modern crocodilian and lacertian dimorphism, it appears that chevron form is not a common and generally reliable indicator of sex. However, if it is true that shorter anterior haemal spines are an adaptation that facilitates enlarged oviduct size 12 , 13 , then the short haemal spines of MPC-D 100/1127 would be regarded as the female characteristic and the longer haemal spines of MPC-D 100/1002 would be regarded as the male character. Similarly, if a longer and more robust first chevron does facilitate anchoring of the penis retractor muscle 10 , 11 , 12 , then the longer and broader-tipped haemal spines of MPC-D 100/1002 are the male character and the shorter and slighter haemal spines of MPC-D 100/1127 would be regarded as the female form.
Lastly, if it is true that oviraptorosaur tails were specialized to serve as dynamic display structures and haemal spines with increased surface areas for muscle insertion facilitated such displays 7 , the longer and broader-tipped haemal spines of MPC-D 100/1002 would be regarded as the male form. This is suggested because gaudy feather-fanning courtship and other social displays are typically performed by males among modern birds (e.g. peafowl, sage grouse, turkeys etc.) 24 , 25 , 26 . Thus, regardless of which of the functional interpretations is/are considered correct, MPC-D 100/1002 is most probably male and MPC-D 100/1127 is most probably female. Finally, it should be noted that the possible instance of sexual dimorphism described is, at present, limited to Khaan mckennai and is unrecognized in any other species of oviraptorosaurs. Examination of the well-preserved axial skeletons of eight individuals of Conchoraptor gracilis (MPC-D 102/3 and a suite of casts of poached specimens – University of Alberta Laboratory for Vertebrate Paleontology UALVP 54983, 54984, 54986 and 54987) reveals no strong variation in the form of their anterior chevrons (see Figure 2 above and Table 1 in supplementary information ). These Conchoraptor gracilis specimens are largely articulated and part of a single bonebed layer, with a taphonomic history that is presumably similar to that of MPC-D 100/1127 and MPC-D 100/1002. In all cases, the Conchoraptor gracilis chevrons are similar in general shape to those of MPC-D 100/1127, with no specimens showing the spear-head form of MPC-D 100/1002. Social display structures frequently vary between closely related species, while the gross anatomy of internal reproductive structures generally does not.
Thus, if it is true that not all genera of oviraptorosaurs were sexually dimorphic in anterior-chevron form, then this interspecific discrepancy supports the interpretation of the observed dimorphism in Khaan mckennai as functioning to facilitate social displays. Other oviraptorosaurs are known from too few specimens to allow for similar consideration. However, articulated skeletons of oviraptorosaurs are more common than those of most other non-avian theropods. The recognition of the dimorphism observed in the anterior chevrons of Khaan mckennai will hopefully inspire similar close observations among other oviraptorid species as they are collected, so that the extent of the dimorphism may be better established within the group.
Paleontologists at the University of Alberta have discovered evidence of a prehistoric romance and the secret to sexing some dinosaurs. "Determining a dinosaur's gender is really hard," says graduate student Scott Persons, lead author of the research. "Because soft anatomy seldom fossilizes, a dinosaur fossil usually provides no direct evidence of whether it was a male or a female." Instead, the new research focuses on indirect evidence. Modern birds, the living descendants of dinosaurs, frequently show sexually dimorphic display structures. Such structures—like the fans of peacocks, the tall crests of roosters or the long tail feathers of some birds of paradise—are used to attract and court mates, and are almost always much larger in males (who do the courting) than in females (who do the choosing). Back in 2011, Persons and his colleagues published research on the tails of a group of birdlike dinosaurs called oviraptors. Oviraptors were strictly land-bound animals, but according to the study, they possessed fans of long feathers on the ends of their tails. If these dinosaurs weren't able to fly, what good were their tail feathers? "Our theory," explains Persons, "was that these large feather fans were used for the same purpose as the feather fans of many modern ground birds, like turkeys, peacocks and prairie chickens: they were used to enhance courtship displays. My analysis of the tail skeletons supported this theory, because the skeletons showed adaptations for both high tail flexibility and enlarged tail musculature—both traits that would have helped an oviraptor to flaunt its tail fan in a mating dance." The U of A researchers took the idea a step further. "The greatest test of any scientific theory is its predictive power," says Persons. "If we were right, and oviraptors really were using their tail fans to court mates, then, just as in modern birds, the display structures ought to be sexually dimorphic.
We published the prediction that careful analysis of more oviraptor tails would reveal male and female differences within the same species." That prediction has come home to roost. In the new study, published this week in the journal Scientific Reports, Persons and his team have confirmed sexual dimorphism, after meticulous observation of two oviraptor specimens. The two raptors were discovered in the Gobi Desert of Mongolia. Both died and were buried next to each other when a large sand dune collapsed on top of them. When they were first unearthed, the two oviraptors were given the nicknames "Romeo and Juliet," because they seemed reminiscent of Shakespeare's famously doomed lovers. It turns out that the nickname may have been entirely appropriate. "We discovered that, although both oviraptors were roughly the same size, the same age and otherwise identical in all anatomical regards, 'Romeo' had larger and specially shaped tail bones," says Persons. "This indicates that it had a greater capacity for courtship displays and was likely a male." By comparison, the second specimen, "Juliet," had shorter and simpler tail bones, suggesting a lesser capacity for peacocking, and has been interpreted as a female. According to Persons, the two may very well have been a mated pair, making for an altogether romantic story, as the dinosaur couple was preserved side by side for more than 75 million years.
10.1038/srep09472
Medicine
New insights into the molecular basis of memory
Rashi Halder et al. DNA methylation changes in plasticity genes accompany the formation and maintenance of memory, Nature Neuroscience (2015). DOI: 10.1038/nn.4194 Journal information: Nature Neuroscience
http://dx.doi.org/10.1038/nn.4194
https://medicalxpress.com/news/2015-12-insights-molecular-basis-memory.html
Abstract The ability to form memories is a prerequisite for an organism's behavioral adaptation to environmental changes. At the molecular level, the acquisition and maintenance of memory requires changes in chromatin modifications. In an effort to unravel the epigenetic network underlying both short- and long-term memory, we examined chromatin modification changes in two distinct mouse brain regions, two cell types and three time points before and after contextual learning. We found that histone modifications predominantly changed during memory acquisition and correlated surprisingly little with changes in gene expression. Although long-lasting changes were almost exclusive to neurons, learning-related histone modification and DNA methylation changes also occurred in non-neuronal cell types, suggesting a functional role for non-neuronal cells in epigenetic learning. Finally, our data provide evidence for a molecular framework of memory acquisition and maintenance, wherein DNA methylation could alter the expression and splicing of genes involved in functional plasticity and synaptic wiring. Main An organism's ability to learn and establish memory is essential for its behavioral adaptation to environmental changes. On a cellular level, learning and memory is mediated by structural and functional changes of cells in the nervous system. These changes, also termed plasticity, change a neuron's response to an external stimulus 1 . At the molecular level, these structural changes are dependent on intracellular signaling networks that regulate the synthesis of proteins and the activity of genes 1 . In addition to transcriptional and translational changes, a large body of evidence suggests that chromatin modifications are an important part of learning and memory processes 2 , 3 , 4 , 5 . In brief, genetic and pharmacological evidence indicates that changes in the activity of chromatin modifying enzymes influence cognitive abilities in animals and humans 5 . 
Furthermore, correlational evidence links bulk changes in histone post-translational modifications (HPTMs), predominantly histone acetylation, to the formation of short- (cellular consolidation) and long-term (systems consolidation) memory 5 , 6 , 7 . Dynamic and stable changes of HPTM and 5-methylcytosine DNA methylation (DNAme) have been reported for the learning-relevant genes Reelin , Calcineurin and Bdnf during cellular consolidation, systems consolidation and memory maintenance 3 , 8 , 9 . Consistent with the 'histone code' theory 10 , this evidence suggests that chromatin modifications, specifically histone acetylation and DNAme, might be a molecular correlate of long-term memory (mnemonic substrate) by modulating the stimulus-dependent activation of learning-relevant genes in memory-forming cellular networks 3 , 11 , 12 . In addition to a potential role as mnemonic substrate, chromatin modifications might be functionally important during cellular and systems consolidation. Although this theoretical framework highlights the potential dual role of chromatin modifications in learning and memory processes, the spatio-temporal extent of DNAme and HPTM changes and their functional implications remain unclear 5 . We set out to clarify the functional roles of chromatin modification changes in short- and long-term memory formation and maintenance. Results Cell type – specific chromatin modification data To identify DNAme and HPTM changes that are associated with short- and long-term memory consolidation and maintenance, we charted an unbiased, genome-wide profile of chromatin modifications, brain region and cell type specifically, over time ( Supplementary Fig. 1 and Online Methods ). In brief, we chose contextual fear conditioning (CFC) as the learning procedure given its robustness and its wide application 13 . 
The areas that we investigated were the hippocampal CA1 region, a region that is crucial for the short-term memory formation during CFC 14 , and the anterior cingulate cortex (ACC), a region important for associative memory acquisition and maintenance 15 , 16 . To decipher the role of neuronal and non-neuronal cells, we optimized the BiTS-ChIP (batch isolation of tissue-specific chromatin and ChIP) protocol 17 , 18 , 19 to murine brain tissue, sorting for NeuN-positive neuronal and NeuN-negative non-neuronal cells ( Fig. 1a ). In addition, we substantially reduced the amounts of starting material needed for high-quality ChIP- and MeDIP-seq (methylated DNA immunoprecipitation) experiments ( Supplementary Table 1 , Supplementary Fig. 2 and Online Methods ). To allow for comparisons between chromatin modification and gene expression levels, we performed RNA-seq on unsorted samples ( Supplementary Table 2 ). Figure 1: High-resolution brain region– and cell type–specific chromatin modification data. ( a ) Method outline. The hippocampal CA1 or cortical ACC were dissected from wild-type mice and nuclei were fixed during isolation, resulting in high-confidence in vivo chromatin modification data. Nuclei were stained with antibody to the neuronal marker NeuN, sorted into NeuN-positive (+) and NeuN-negative (−) nuclei via fluorescence-activated cell sorting, and chromatin was sheared. ChIP or MeDIP experiments were performed on neuronal and non-neuronal samples under naive or learning conditions. Notably, all experiments shown are in a naive state in which the animals performed baseline cognitive tasks and correspond to data from the hippocampal CA1 region. ( b ) Cell type–specific chromatin modification enrichment on the neuronally expressed Camk2a gene locus. Promoter arrows indicate the direction of transcription. Neuronal data (+) showed strong enrichment, whereas non-neuronal data (−) showed an absence of activity-linked chromatin modifications (for example, H3K4me3). 
( c ) Aggregate plot of H3K79me3 using merged neuronal (green, D+) and non-neuronal data (blue, D−) for genes that are known to be expressed only in neurons (G+) or in non-neurons (G−). Shaded areas represent 95% confidence intervals. Full size image Many chromatin modifications have been linked to learning and memory processes and we chose to examine a subset of these in our experiments. Of the seven modifications investigated, three were previously found to be differentially regulated during CFC (DNAme, H3K9ac and H3K4me3) 7 , 8 , 9 , 20 . In addition to these learning-associated modifications, we examined four chromatin modifications that were not previously linked to learning and memory (H3K4me1, H3K79me3, H3K27ac and H3K27me3). All of these chromatin modifications have been studied extensively and are well correlated with gene and/or enhancer activity 17 , 21 , 22 . In brief, H3K4me3, H3K9ac, H3K79me3 and H3K27ac demarcate active genes, H3K27ac and H3K4me1 are found on active enhancers, and H3K27me3 and DNAme delineate repressed regions 17 , 22 ( Supplementary Table 1 ). In general, our data are of high technical quality, as assessed by replicate correlations, peak enrichment rates, and various other quality metrics and diagnostic plots ( Supplementary Fig. 3 and Supplementary Table 3 ). Brain region specificity was confirmed by enrichment of CA1 and ACC-specific marker genes ( Supplementary Fig. 4 ). Cell type specificity was validated by immunocytochemistry (ICC) pre- and post-sorting ( Supplementary Fig. 4 ) and by qualitative profiling of neuronal and non-neuronal genes ( Fig. 1b and Supplementary Fig. 5 ). For instance, the neuronal gene Camk2a showed high levels of H3K4me3, H3K9ac, H3K27ac, H3K79me3 and H3K4me1 in neurons, but these marks were almost completely absent in non-neuronal data ( Fig. 1b and Supplementary Fig. 5 ). 
Chromatin modifications that are linked to gene inactivity (DNAme and H3K27me3), on the other hand, showed high signal in non-neuronal cells and low signal in neurons ( Fig. 1b and Supplementary Fig. 5 ). To more globally assess the cell type specificity of our approach, we compiled a list of 518 genes, 259 of which are exclusively expressed in neurons and 259 in glia 23 , 24 ( Supplementary Table 4 ), and evaluated how well our data reflect the cell type–specific expression of these genes. Aggregate gene plots of the average chromatin modification occupancy resulted in a clear separation of neuronal and glial gene expression ( Fig. 1c and Supplementary Fig. 6 ). Neuron-specific genes showed high signals of active marks (H3K27ac, H3K9ac, H3K79me3 and H3K4me3) in neuronal data, but low signals in non-neuronal data ( Fig. 1c and Supplementary Fig. 6 ). Furthermore, active marks correlated positively and inactive marks (DNAme, H3K27me3) correlated negatively with gene expression ( Supplementary Fig. 7 and Supplementary Table 2 ). The observed negative correlation of gene body DNAme with gene expression is consistent with recent observations in dentate gyrus neurons 25 , 26 , contrasting positive correlations between gene body DNAme and gene expression in other cell types 27 . Finally, individual chromatin modifications were used for the classification of known cell type–specific genes, with an average precision of 88% and a recall of 69% ( Supplementary Fig. 8 and Online Methods ). Cell type – specific genes and regulatory modules Given the classification performance of the individual HPTMs, we next assessed whether the data are sensitive and specific enough to allow for the prediction of previously unknown region and cell type–specific genes and cis -regulatory modules (CRMs; in other words, enhancers and other functional genomic regions). 
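The precision and recall benchmarks described above follow the standard definitions over a curated truth set of known cell type–specific genes. A minimal sketch in Python (the gene IDs and sets below are illustrative placeholders, not data from the study):

```python
def precision_recall(predicted, truth_positive, truth_negative):
    """Precision/recall for a set of genes called cell type-specific.

    predicted      -- gene IDs the classifier calls, e.g., neuron-specific
    truth_positive -- known genes of the predicted type (cf. the curated
                      518-gene list used in the text)
    truth_negative -- known genes of the opposite type
    All arguments are Python sets; the gene IDs used here are hypothetical.
    """
    tp = len(predicted & truth_positive)   # correct calls
    fp = len(predicted & truth_negative)   # calls contradicted by the truth set
    fn = len(truth_positive - predicted)   # known positives that were missed
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# Toy example: three calls, two correct, one known positive missed
print(precision_recall({"Snap25", "Syt1", "Gfap"},
                       {"Snap25", "Syt1", "Camk2a"},
                       {"Gfap"}))
```

Note that predictions falling outside both truth sets are ignored rather than counted as errors, mirroring an evaluation that can only score genes whose cell type specificity is already known.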
A heuristic best-out-of-three classifier predicted 1,647 new neuron-specific genes with 94% precision and 37% recall and 803 new non-neuron–specific genes with 100% precision and 31% recall ( Fig. 2a , Supplementary Table 5 and Online Methods ). The predicted genes showed cell type–specific chromatin modifications ( Supplementary Fig. 9 ) and were enriched for neuron-specific gene ontology (GO) terms for neuronal genes and glial and developmental GO terms for non-neuronal genes ( Supplementary Fig. 10 ). Figure 2: A high-quality, in vivo network of neuronal and non-neuronal gene and CRM activity. ( a ) Precision and recall values for cell type–specific gene expression predictions. Neuronal and non-neuronal gene expression was predicted using histone modifications and precision and recall were evaluated on a list of known cell type–specific genes. ( b ) Genomic distribution of neuronal and non-neuronal predicted CRMs. 40% of all predicted enhancers were located in intronic regions, a known site of active enhancers. ( c ) Percentage of GFP-positive zebrafish embryos that showed neuronal CRM activity for 28 predicted neuronal CRMs randomly selected for this enhancer reporter assay (Online Methods ). The negative control (c) and CRM CA1-14 (1) did not show any neuronal activity. Inset, representative image of a GFP-positive neuronal cell (n) and red control expression in muscle cells (s). sc, spinal cord; nc, notochord. Significance was estimated using a permutation-based test, *** P ≤ 0.0001. Full size image To predict neuronal and non-neuronal CRMs and their activity, we used a discriminative random forest classifier that was trained on a published set of positive and negative transcription factor (TF) binding sites and HPTMs (Online Methods ) 28 , 29 . The trained model predicted a total of 60,544 CRMs with an estimated accuracy of 87.9% ( Fig. 2b , Supplementary Fig. 11 and Supplementary Table 6 ). 
22,312 CA1 (29,714 ACC) CRMs were predicted in neurons and 17,435 CA1 (16,511 ACC) CRMs were predicted in non-neuronal cells ( Supplementary Fig. 11 ). In general, CRMs for the same cell type in different brain regions were largely overlapping ( ∼ 66%), whereas the overlap of CRMs between neurons and non-neurons was very low ( ∼ 9%), corroborating previous findings that enhancer regions are highly cell type specific ( Supplementary Fig. 11 ) 30 . Motif enrichment analysis on neuronal CRMs predicted transcription factor binding sites for MEF2C 31 , NEUROD1 (ref. 32 ) and RFX family members 33 , all of which govern neuronal identity ( Supplementary Fig. 12 ). Non-neuronal CRMs were enriched for SOX TF family binding sites, which have an integral role in the maintenance of neural stem cells, as well as in the specification and differentiation of astrocytes and oligodendrocytes ( Supplementary Fig. 12 ) 34 . In an attempt to biologically validate our CRM predictions, we selected 30 predicted neuronal CRMs and tested their activity using a reporter assay in zebrafish ( Fig. 2c and Supplementary Table 7 ). 28 of 30 CRMs were successfully cloned and 96% (27) showed neuronal activity in vivo ( Fig. 2c and Online Methods ). Overall, the data represent a region- and cell type–specific in vivo map of genomic functional elements in the murine brain. To assess DNAme and HPTM changes during memory acquisition and maintenance, we next compared dynamic changes following CFC in their functional genomic context (predicted genes and CRMs). Learning-induced histone modifications changes A relatively large body of evidence suggests that HPTM levels change primarily during cellular consolidation in the first few hours following learning 6 , 7 , 35 . Accordingly, we first assessed learning-induced HPTM changes 1 h after CFC. 
We compared hippocampal HPTM differences in naive mice (N) to mice that were exposed to a novel context (context only, C) and mice that were exposed to a novel context followed by an electric foot shock (context shock, CS). In all of the tested cases, the CFC protocol resulted in robust learning, as CS-exposed mice showed increased freezing behavior ( Supplementary Fig. 13 ). Tissue was isolated 1 h after C or CS exposure ( Supplementary Fig. 1 and Online Methods ). This setup allows for comparisons between naive and context only (N-C), naive and context shock (N-CS), and context and context shock (C-CS) data. We first examined whether we could detect global HPTM changes by immunoblot (IB) analysis of five HPTMs ( Supplementary Fig. 14 ), three of which have been shown to increase during cellular consolidation after CFC 6 , 7 , 20 . To our surprise, no early changes in bulk HPTMs could be discerned by IB analysis of the CA1 and ACC regions in any of the analyzed conditions (N-C, N-CS or C-CS; Supplementary Fig. 14 ). These results were corroborated by IB analysis of cell type–specific chromatin of the hippocampal CA1 region ( Supplementary Fig. 15 ). The reason for not detecting global HPTM changes following CFC using IB could be a result of the relatively mild, but robust, CFC protocol that we used (0.7-mA electric foot shock; Online Methods and Supplementary Fig. 13 ) or simply of a low sensitivity of the IB analyses 5 . To substantially increase our sensitivity and specificity to detect global changes, we next examined ChIP-seq HPTM changes in gene, CRM and intergenic regions using aggregate plots. Average gene intensity profiles (aggregate gene plots) revealed global increases in the activity-related modifications H3K4me3 and H3K9ac and a global decrease in the inactivity-related modification H3K27me3 during cellular consolidation (CA1 1 h, N-C and N-CS; Fig. 3a,b and Supplementary Fig. 16 ). 
In contrast with global HPTM changes in genic regions, aggregate intergenic intensity profiles displayed global decreases in H3K4me3 and a global increase in H3K27me3 levels, compensating for the observed changes in active regions (genes and CRMs; Fig. 3a,b and Supplementary Fig. 16 ). Despite the usage of the term global, the activity-related increase in HPTM levels seemed to be restricted to genes that were active in the naive state, as aggregate plots of inactive genes displayed no activity-related H3K4me3 increases ( Supplementary Fig. 17 ). Notably, the observed changes were not limited to neuronal cells, as changes in non-neuronal cells could also be detected ( Supplementary Fig. 18 ). Moreover, global HPTM changes were largely decoupled from the differential expression of genes, as increased HPTM levels could be detected on both up- and downregulated genes after CFC ( Supplementary Fig. 17 ). To corroborate our in vivo results, we next analyzed global HPTM changes following KCl stimulation of primary neuronal cell culture (previously published data, see Online Methods ) 28 , 29 . In total agreement with the learning-related global HPTM changes, aggregate gene H3K4me3 levels were increased and H3K27me3 levels were decreased after KCl stimulation, whereas H3K4me1 levels remained unchanged ( Fig. 3a,b, and Supplementary Figs. 19 and 20 ). Again, global HPTM changes were largely decoupled from differential gene expression, as increased HPTM levels could be detected on up- and downregulated genes ( Supplementary Fig. 20 ). Figure 3: Histone modifications display global and gene-specific changes during learning. ( a , b ) Global learning– ( in vivo ; N, C, CS) and stimulation-induced ( in vitro ; Un, KCl) neuronal HPTM changes. Shown are aggregate plots of H3K4me3 ( a ) and H3K27me3 ( b ) changes after learning (CA1 1 h, left) and stimulation (KCl, middle). 
In vivo changes (CA1 1 h) are displayed for naive (N, green), context only (C, blue) and context shock–exposed (CS, red) mice. In vitro changes are shown for unstimulated (Un, green) and KCl-stimulated (KCl, blue) neuronal cell culture. Shaded areas represent 95% confidence intervals and bar graphs display peak HPTM levels per condition. On the right, the quantification of H3K4me3 ( a ) and H3K27me3 ( b ) revealed that signal changes in intergenic regions compensated for changes in active regions (CRMs and genes). ( c ) Changes in H3K79me3 (CA1 1 h) and H3K27ac (Un-KCl) were significantly associated with DEGs. The percentage of genes showing differential HPTMs (DHPTMs) that were also differentially expressed is summarized in donut plots for in vivo H3K79me3 (left, N-C and N-CS) and in vitro H3K27ac (right, Un-KCl) data. The percentage of DHPTM-DEG genes that had HPTM changes in non-neurons (only) is highlighted in green, in neurons (only) in blue, and in non-neurons and neurons in yellow. A red outer line is present if DHPTM-DEG overlap was significantly enriched (Fisher's exact test P value = 0.05). ( d ) Hierarchical clustering of significantly enriched GO terms (WebGestalt, Cellular Component, adjusted P value = 0.1) for DHPTM-DEGs during cellular consolidation (N-CS CA1 1 h) or after KCl depolarization (Un-KCl). Enrichment was calculated as the ratio of observed over expected genes per GO category. The magnitude of enrichment is displayed with colors ranging from white (no enrichment) to red (strong enrichment). Each column represents the enrichment of a significant GO term. The GO terms are summarized for clarity (see Supplementary Fig. 22d for the complete list). Full size image In contrast with global HPTM changes, we detected little ChIP-seq region-specific changes for many of the analyzed HPTMs during cellular consolidation (CA1 1 h; Supplementary Table 8, and Supplementary Figs. 21 and 22 ). 
To detect small changes, we analyzed differential HPTM changes using high-quality uni- and multi-mapped reads or only uniquely mapped reads with a rather lenient significance threshold (adjusted P value = 0.1) and no fold-change threshold ( Supplementary Table 8 ). Of note, the inclusion or removal of multimapped reads led to almost identical results and we therefore decided to keep high-quality multimapped reads in all subsequent analyses ( Supplementary Table 8 ). Specific genomic regions were analyzed for modifications covering only promoters or genes (H3K4me3, H3K79me3 and H3K27me3), whereas genome-wide peaks were considered for modifications that occurred throughout the genome in active genes and CRMs (H3K27ac and H3K4me1). Only H3K79me3 (N-CS), a marker for actively transcribed gene bodies, showed 850 differences, 611 of which were neuronal, 133 non-neuronal and 106 of neuronal and non-neuronal origin ( Fig. 3c , Supplementary Fig. 22a and Supplementary Table 8 ). All other modifications had maximally 86 significant changes (H3K4me3, N-CS), a relatively low number considering the 915 differentially expressed genes (DEGs) during cellular consolidation (CA1 1 h) ( Supplementary Fig. 22a and Supplementary Table 2 ). To corroborate these findings, we analyzed HPTM changes by ChIP-qPCR comparing differential expression and chromatin modification levels in 15-min intervals for the learning-induced immediate-early genes Fos and Egr1 ( Supplementary Fig. 23 and Supplementary Table 9 ), detecting no significant changes ( P ≤ 0.05). Notably, apart from the observed H3K79me3 changes, these results are in stark contrast with the predictive differences in CRMs and genes that we obtained using the same data and analysis routines when comparing neuronal and non-neuronal cells ( Figs. 1 and 2 ). 
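The differential tests above are gated on an adjusted P value (0.1) rather than a fold-change cutoff. The text does not restate which adjustment procedure the pipeline applies, but Benjamini-Hochberg FDR control is the common choice for genome-wide tests of this kind; the following is an illustrative sketch only:

```python
def benjamini_hochberg(pvals):
    """Benjamini-Hochberg FDR-adjusted P values (illustrative sketch).

    Regions with an adjusted value <= 0.1 would pass the lenient
    threshold described in the text; input order is preserved.
    """
    n = len(pvals)
    order = sorted(range(n), key=lambda i: pvals[i])  # indices, ascending P
    adjusted = [0.0] * n
    running_min = 1.0
    # Walk from the largest P to the smallest, enforcing monotonicity
    for offset, i in enumerate(reversed(order)):
        rank = n - offset                # 1-based rank in the ascending order
        running_min = min(running_min, pvals[i] * n / rank)
        adjusted[i] = running_min
    return adjusted

# Hypothetical per-region P values from a differential HPTM test
print([round(p, 3) for p in benjamini_hochberg([0.01, 0.04, 0.03, 0.005])])
# [0.02, 0.04, 0.04, 0.02]
```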
Recent evidence suggests that only a subset of the analyzed cells in the CA1 actually takes part in the formation of a network-correlate of memory, resulting in reduced sensitivity to detect region-specific HPTM changes 36 , 37 , 38 , 39 . To further increase our sensitivity in detecting HPTM changes after neuronal stimulation, we analyzed published neuronal cell culture HPTM data for region-specific differences, a system that should lead to stimulation of almost every cell 28 , 29 . In accordance with published results, we detected 1,106 H3K27ac changes (650 up- and 456 downregulated), many of which were located in intergenic regions ( Supplementary Fig. 22a and Supplementary Table 8 ). The observed H3K27ac regions overlapped with regions that were previously found to change, as exemplified by the detection of three Fos enhancer regions ( Supplementary Fig. 21 ). To our surprise, however, we could only detect 64 H3K4me3, 337 H3K27me3 and no H3K4me1 changes ( Supplementary Table 8 ). Given that 915 genes changed expression in the CA1 during CFC in vivo (CA1 1 h, N-CS; Supplementary Table 2 ), it surprised us to find so few changes for some HPTMs in vivo and in vitro . On the other hand, these results seem to be in accordance with a previous study detecting only 140 H4K5ac changes during CFC 40 . It is interesting to note, however, that the observed H3K79me3 changes during cellular consolidation and H3K27ac changes during neuronal stimulation were highly enriched in DEGs ( Fig. 3c , Supplementary Fig. 22a and Supplementary Table 10 ). Furthermore, genes that displayed H3K79me3 and gene expression changes during cellular consolidation (CA1 1 h) seemed to be involved in the reshaping of synapses, as suggested by the enrichment of the GO categories 'dendrite', 'synaptic membrane' and 'axon' ( Fig. 3d , Supplementary Fig. 22b–d and Supplementary Data Set ). 
These results provide evidence for global HPTM changes during memory formation in neuronal and non-neuronal cell populations. The increase of activity-related HPTMs and the concomitant decrease in inactivity-related HPTMs strengthen the theory that HPTMs might have a role in population priming, sensitizing neurons and non-neuronal cells for future activity. The relatively few region-specific learning-associated changes for most analyzed HPTMs might be a result of a lack of sensitivity, but could also indicate that HPTMs might not constitute a mnemonic substrate, as the timing (usually short lived 6 , 7 , 35 ), location (neurons and non-neuronal cells) and the specificity (global and few region specific) of changes seem to argue against it 5 . Spatio-temporal correlation of DNAme and memory To characterize the role of DNAme during cellular consolidation, systems consolidation and memory maintenance, we examined genome-wide changes in DNAme 1 h and 4 weeks after CFC in the hippocampal CA1 and the ACC using MeDIP-seq (Online Methods ). In contrast with learning-related changes in HPTMs, we could not detect global DNAme changes during cellular consolidation (CA1 1 h) or memory maintenance (ACC 4 weeks) ( Supplementary Fig. 24 ). We did, however, observe a potential global DNAme decrease during systems consolidation (ACC 1 h, N-C and N-CS) that is consistent with the idea of a global priming of cells for future activity ( Supplementary Fig. 24c,g,k ). This decrease might be present in both neurons and non-neuronal cells and seems to be independent of the CpG content of genes ( Supplementary Fig. 24g,k ). In addition to the potential global DNAme changes during systems consolidation, substantial changes in DNAme during memory consolidation and maintenance were present at specific inter- and intragenic regions ( Fig. 4a–c , Supplementary Table 11 and Supplementary Figs. 25–27 ). These changes were further validated by MeDIP-qPCR experiments ( Supplementary Fig. 25 ). 
In neurons, differentially methylated regions (DMRs) were preferentially located in intergenic (64%) and intronic (30%) regions ( Fig. 4c ), consistent with the distribution of DMRs in activity-induced dentate gyrus neurons 25 . In contrast, promoter regions constituted only 1% of DMR regions, stressing the importance of genome-wide studies as compared with restricted analyses of single-target gene promoters ( Fig. 4c ). Furthermore, associative memory–induced DMRs (C-CS, CA1 1 h and ACC 4 weeks) significantly colocalized with H3K27ac-positive regions, indicating that 21–29% of the DMRs reside in functional cis -regulatory regions, many of which are intronic ( Supplementary Fig. 26 ). These results suggest that a substantial proportion of DMRs might regulate TF binding in cis -regulatory regions, a hypothesis that is supported by the recently documented hypomethylation of binding regions for the neuronal activity–dependent TFs NPAS4 and SRF 25 . The distribution of non-neuronal DMRs is very similar to the neuronal one, being mainly restricted to intergenic (59%) and intronic (34%) regions ( Supplementary Fig. 27a ). Figure 4: Transient and stable DNA methylation during memory acquisition and maintenance. ( a , b ) Examples of MeDIP-seq genome tracks displaying two DMRs (top blue bar and shaded area) in the Reelin gene. The merged naive (N, green), context (C, blue) and context-shock (CS, purple) tracks are displayed for neuronal (+) and non-neuronal (−) data. Notably, although a DMR close to exon 23 of the Reelin gene showed increased methylation following learning ( a ), a second DMR near exon 2 displayed decreased methylation following CS exposure. ( c ) Genomic distribution of neuronal DMRs comparing naive and context shock–exposed mice (N-CS). The DMRs represent the sum of all N-CS neuronal DMRs at 1 h and 4 weeks in the CA1 and ACC. Numbers represent the percentage of all DMRs. Only categories with more than 1% DMRs have numbers. 
( d , e ) Learning-related hypo- and hyper-methylated genes (DMGs) in neuronal CA1 cells ( d ) and ACC cells ( e ) 1 h and 4 weeks (4 w) after CFC. Green bars denote N-C comparisons, blue bars N-CS, and gray bars indicate the overlapping DMGs between N-C and N-CS comparisons. Black numbers indicate the number of hypo- and hyper-methylated genes and white numbers indicate the set of DMGs per condition. The break in the bars and x axis indicates a shift in scale. ( f ) Overlap (proportional Venn diagram) of N-CS DMGs between the ACC 1 h (blue), ACC 4 weeks (green) and CA1 1 h (purple). Full size image Neuronal cellular consolidation (CA1, 1 h) was associated with substantial changes in DNAme ( Fig. 4d and Supplementary Table 11 ). In total, 1,148 genes were differentially methylated (DMGs, 4,758 DMRs) comparing N-C data and 1,206 DMGs (3,759 DMRs) comparing N-CS data. Notably, non-neuronal cell types are also differentially methylated during cellular consolidation (CA1, 1 h), with 671 DMGs (1,681 DMRs) comparing N-C data and 651 DMGs (1,619 DMRs) comparing N-CS data ( Supplementary Fig. 27b and Supplementary Table 11 ). These changes were not permanent, as almost no DNAme changes could be detected in neuronal and non-neuronal cells during memory maintenance in the CA1 after 4 weeks. Neuronal systems consolidation (ACC, 1 h) was associated with by far the largest changes in DNAme ( Fig. 4e,f and Supplementary Table 11 ). Thus, there were 6,527 DMGs (49,285 DMRs) comparing naive with context only data (N-C) and 6,250 DMGs (46,395 DMRs) comparing naive with context shock (N-CS) data in ACC neurons 1 h after learning. These changes were highly overlapping and we consistently detected no changes in associative memory formation (C-CS comparison; Fig. 5b and Supplementary Table 11 ). Involvement of cortical regions during early learning and systems consolidation phases has been observed during socially transmitted food preference in rats and CFC in mice 35 , 41 . 
In contrast with the absence of long-term methylation changes in the hippocampal CA1 region, substantial differential methylation could be detected in cortical neurons during memory maintenance (ACC 4 weeks). Thus, 49 DMGs (109 DMRs) comparing N-C and 1,223 DMGs (5,018 DMRs) comparing N-CS data were identified ( Fig. 4e ). Notably, the neuronal ACC 4 weeks (87%) and CA1 1 h (77%) DMGs overlapped significantly with the ACC 1 h DMGs, whereas the overlap between the ACC 4 weeks and CA1 1 h DMGs was only 29% ( Fig. 4f and Supplementary Table 11 ). These results indicate that the early DMR signature could be split into a memory consolidation and a memory maintenance component, with the former setting up the memory and the latter representing a mnemonic substrate ( Fig. 4f and Supplementary Table 11 ). As observed during cellular consolidation, not only neuronal, but also non-neuronal, cell types of the ACC were differentially methylated during systems consolidation, but lacked changes during memory maintenance ( Supplementary Fig. 27b,c ). Figure 5: DNA methylation correlates strongly with the spatio-temporal location of associative memory. Comparison of associative memory–related (C-CS) DMRs (dark blue) and DMGs (light blue) in neuronal (+) and non-neuronal (−) cells after CFC. Numbers indicate the total hypo- and hyper-methylated regions (black) and genes (white or black in parentheses). ( a ) Associative memory–related DMRs and DMGs in the hippocampal CA1 region 1 h or 4 weeks (4 w) after CFC. DMGs are exclusively found in neurons 1 h after learning (one DMR in non-neuronal cells). ( b ) Associative memory–related DMRs and DMGs in the cortical ACC region 1 h or 4 weeks after CFC. In contrast with the CA1, DMGs in the ACC were exclusive to the 4-week time point (two DMRs at the 1 h time point). Although most of the changes occurred in neurons, there were 69 hypo- and 3 hyper-methylated DMGs in non-neuronal cells. 
In general, learning- and memory-related changes in DNAme correlate well with the known physical location of memory in time and space. Thus, DMRs in the hippocampal CA1 region are most prominent during the cellular consolidation phase and absent during memory maintenance in almost all comparisons (N-C, N-CS; Fig. 4d and Supplementary Table 11 ). In the cortical ACC, DMR changes in the N-C comparison were largely restricted to the 1-h time point, as context exposure should only form short-term memory ( Fig. 4e ). Conversely, the N-CS comparison resulted in strong changes at both time points ( Fig. 4e ). Most impressive was the spatio-temporal correlation of associative memory (C-CS) with differential methylation, as changes were almost exclusively restricted to the neuronal CA1 at 1 h and the ACC at 4 weeks ( Fig. 5 and Supplementary Table 11 ). Thus, in neurons, 1,137 DMGs (3,216 DMRs) were changed during cellular consolidation, whereas no DMGs could be detected during memory maintenance in the CA1 ( Fig. 5a ). Conversely, in neurons, no DMGs were detected during systems consolidation (ACC 1 h), whereas 153 DMGs (365 DMRs) were associated with memory maintenance (ACC 4 weeks) ( Fig. 5b ). Notably, few associative DMGs were detected in non-neuronal cells during cortical memory maintenance (ACC 4 weeks; Fig. 5b ). The spatio-temporal changes strongly support a dual role for DNAme, first in the formation of memory during cellular and systems consolidation and second as a mnemonic substrate during memory maintenance. Functional DNAme changes on plasticity genes To understand the functional role of methylation changes during memory acquisition and maintenance, we first assessed which categories of genes and pathways were affected. Differential methylation preferentially targets genes that confer cell type–specific functionality. Thus, neuron-specific genes ( Fig.
2 and Supplementary Table 5 ) were three- and fivefold more likely to contain DMRs than non-specific genes during associative memory formation (C-CS, CA1 1 h) and maintenance (C-CS, ACC 4 weeks) ( Supplementary Fig. 28a ). DNAme changes during associative cellular consolidation (C-CS, CA1 1 h) involved the hyper-methylation of ion-gated transmembrane channels as well as the hypo-methylation of many transcription factor genes, as evidenced by the enrichment of GO terms 'ion gated channel activity' and 'transcription regulatory region DNA binding' ( Supplementary Fig. 28b–d ). In addition, DMGs were enriched in components of the CREB and PKA signaling cascade during associative cellular consolidation ( Supplementary Fig. 28f ). DMRs in associative memory maintenance (C-CS, ACC 4 weeks), on the other hand, comprised the hypo-methylation of many genes that are involved in cortical neuronal rewiring and phospholipid signaling. Thus, GO terms 'ephrin receptor signaling pathway', 'main axon', 'dendritic spine head' and 'postsynaptic density' support a role of differential methylation in modulating the expression of genes required for structural changes ( Supplementary Fig. 28 ). In addition, DMRs in associative memory maintenance were enriched in genes of the CREB and PKA signaling cascades ( Supplementary Fig. 28 and Supplementary Data Set ). To be of functional consequence, differential methylation could regulate the expression and/or splicing of learning- and memory-related target genes 42 , 43 . To compare gene expression and differential methylation, we performed RNA-seq analysis of unsorted cells during cellular consolidation (CA1 1 h) and cortical memory maintenance (ACC 4 weeks) after CFC ( Supplementary Tables 2 , 3 and Supplementary Fig. 29 ). During cellular consolidation (CA1 1 h), 1,212 (N-C) and 915 (N-CS) genes changed their expression ( Supplementary Table 2 and Supplementary Fig. 
29a ), functionally altering the long-term potentiation and synaptic plasticity of neurons (N-CS; Supplementary Fig. 29d ). DEGs during cortical memory maintenance (ACC 4 weeks, N-C 2,980, N-CS 6,426) showed enrichment of similar categories, but were also enriched in the functional categories 'formation of cellular protrusions' and 'morphology of nervous system' ( Supplementary Fig. 29a,d ). In addition, published RNA-seq data during associative systems consolidation (C-CS, mPFC 1 h) was included in the analysis 41 . Finally, the differential exon expression of genes (DEE) was analyzed, resulting in 160 (N-C) and 164 (N-CS) DEEs during cellular consolidation (CA1 1 h) and 2,681 (N-C) and 1,700 (N-CS) DEEs during memory maintenance (ACC 4 weeks) ( Supplementary Tables 2 , 3 and Supplementary Fig. 29b ). The analysis of the published systems consolidation RNA-seq data (ACC 1 h) did not provide any significant DEEs ( P ≤ 0.1). Consistent with the assumption that DMRs could influence gene expression, we found a highly significant association of DNAme changes and differential gene expression during cellular consolidation, systems consolidation and cortical memory maintenance in neurons and non-neuronal cells ( Fig. 6a–d ). Although some DEGs contain only one DMR, many contain several DMRs throughout the gene body and the promoter ( Fig. 6a,b ). Thus, during cellular consolidation (CA1 1 h), 6% (N-C) and 10% (N-CS) of all DEGs were also differentially DNA methylated in neurons and/or non-neurons, displaying a 3.0- (N-C) and 3.4-fold (N-CS) enrichment over background ( Fig. 6c ). During systems consolidation, 37% (N-C) and 49% (N-CS) of all DEGs also contained DMRs in neuronal and/or non-neuronal cells (ACC 1 h), resulting in an enrichment of 2.6- (N-C) and 3.0-fold (N-CS) over background ( Fig. 6c ).
Similarly, the percentage of DMGs that were also differentially expressed during cortical memory maintenance was 24% (N-C) and 32% (N-CS), with an enrichment of 3.6- (N-C) and 2.1-fold (N-CS) over background ( Fig. 6c ). Notably, the co-occurrence of the differential methylation and expression of genes was highly significant in all analyzed conditions (two-sided Fisher's exact test; Fig. 6d and Supplementary Table 12 ). Figure 6: DNA methylation changes are associated with the expression of functional and synaptic plasticity genes. ( a , b ) Example genome tracks of DMRs associated with differentially expressed genes in CA1 ( a ) and ACC ( b ). The neuronal MeDIP-seq data from N and CS mice 1 h after CFC are displayed in the two upper tracks and the corresponding RNA-seq tracks are shown below. The blue bar on the top and the shaded areas represent the DMR locations. DMRs can be observed at the TSS, in the promoter region or in the coding region of the DEGs. ( c ) DMGs were significantly associated with DEGs. The percentage of DEGs that were also DMGs in the CA1 and ACC after 1 h (CA1 1 h, ACC 1 h) and the percentage of DMGs that were also DEGs in the ACC 4 weeks after CFC are shown in donut plots. The inner donuts represent the N-C comparisons and the outer donuts represent the N-CS comparisons. The percentage of DMG-DEG genes that were differentially DNA methylated in non-neurons (only) is highlighted in green, in neurons (only) in blue, and in non-neurons and neurons in yellow. A red outer line is present if the DEG-DMG overlap was significantly enriched (Fisher's exact test P value ≤ 0.05). ( d ) Graphical representation of the DMG-DEG overlap significance. The circle area represents the total overlap (log 2 transformed) and the color represents the enrichment P value (log 10 transformed). Numbers represent the total DMG-DEG overlap per condition. ( e ) DMGs were significantly associated with differential exon expression (DEE).
The percentage of DEEs that also showed differential DNA methylation in the CA1 after 1 h (CA1 1 h) and the percentage of DMGs that were also DEEs in the ACC 4 weeks after CFC are shown in donut plots. The inner donuts represent the N-C comparisons and the outer donuts represent the N-CS comparisons. The percentage of DMG-DEE genes that were differentially DNA methylated in non-neurons (only) is highlighted in green, in neurons (only) in blue, and in non-neurons and neurons in yellow. A red outer line is present if the DEE-DMG overlap was significantly enriched (Fisher's exact test P value ≤ 0.05). ( f ) Hierarchical clustering of significantly enriched GO terms (WebGestalt, Biological Process, adjusted P value ≤ 0.1) for genes that showed DMG-DEG (neurons in blue and non-neurons in green text) or DMG-DEE (neurons in yellow text) overlap during cellular consolidation (CA1 1 h), systems consolidation (ACC 1 h) or memory maintenance (ACC 4 weeks). Enrichment was calculated as the ratio of observed over expected genes per GO category. The magnitude of enrichment is displayed with colors ranging from white (no enrichment) to red (strong enrichment). Each column represents the enrichment of a significant GO term. The GO terms are summarized for clarity (see Supplementary Fig. 30e for the complete list). In addition, DMGs were significantly associated with differential exon usage of genes during cellular consolidation (CA1 1 h) and memory maintenance (ACC 4 weeks), corroborating a role for DNAme in the splicing of alternative transcript isoforms ( Fig. 6e , Supplementary Fig. 30a–c and Supplementary Table 12 ) 43 . Thus, during cellular consolidation (CA1 1 h), 18% (N-C) and 10% (N-CS) of all DEEs were also differentially DNA methylated in neurons and non-neurons, showing an enrichment of 6.8- (N-C) and 3.7-fold (N-CS) over background ( Fig. 6e ).
Accordingly, the percentage of DMGs containing DEEs during cortical memory maintenance was 14% (N-C) and 8% (N-CS), with an enrichment of 2.3- (N-C) and 2.9-fold (N-CS) over background ( Fig. 6e , right panel). Again, the co-occurrence of the differential methylation and exon usage of genes was highly significant in all analyzed conditions (two-sided Fisher's exact test; Supplementary Fig. 30c and Supplementary Table 12 ). To understand the functional role of genes that are differentially expressed or spliced, as well as differentially methylated during learning, we next assessed which categories of genes and pathways were affected. Genes that are differentially methylated and expressed during cellular consolidation in neurons (N-CS, CA1 1 h) regulate signal transduction processes, as evidenced by the enrichment of the GO categories 'intracellular signal transduction' and 'protein kinase activity' ( Fig. 6f and Supplementary Fig. 30d–f ). In contrast with changes during cellular consolidation, genes that are differentially methylated and expressed during memory maintenance (N-CS, ACC 4 weeks) regulate the structural changes in dendrites and axons, as evidenced by the enrichment of the GO categories 'postsynaptic density', 'dendrite', 'axon', and 'neuron spine' ( Fig. 6f and Supplementary Fig. 30d–f ). Notably, genes that are differentially spliced and methylated during memory maintenance (N-CS, ACC 4 weeks) also seem to regulate axonal and dendritic morphology, as corroborated by the enrichment of the GO terms 'axon guidance', 'synapse organization', 'neuron spine' and 'growth cone' ( Fig. 6f and Supplementary Fig. 30d–f ). These results outline a molecular framework for how changes in DNAme could regulate learning and memory processes.
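The enrichment statistics used throughout this section (fold enrichment of an observed DMG-DEG or DMG-DEE overlap over the background expectation, assessed with an exact test) can be sketched as follows. Note this uses a one-sided hypergeometric tail rather than the two-sided Fisher's exact test reported in the paper, and all counts are hypothetical:

```python
# Minimal sketch of an overlap-enrichment test: given n_genes background
# genes, n_dmg differentially methylated and n_deg differentially expressed
# genes, how surprising is an overlap of n_both? Counts are hypothetical.
from math import comb

def overlap_enrichment(n_genes, n_dmg, n_deg, n_both):
    """Fold enrichment over chance and hypergeometric P(overlap >= n_both)."""
    expected = n_dmg * n_deg / n_genes  # overlap expected by chance
    p = sum(
        comb(n_dmg, k) * comb(n_genes - n_dmg, n_deg - k)
        for k in range(n_both, min(n_dmg, n_deg) + 1)
    ) / comb(n_genes, n_deg)
    return n_both / expected, p

fold, p = overlap_enrichment(n_genes=2000, n_dmg=120, n_deg=90, n_both=16)
print(f"{fold:.1f}-fold enrichment, P = {p:.2g}")
```

With these toy counts the expected chance overlap is 5.4 genes, so observing 16 gives roughly a 3-fold enrichment with a small tail probability, mirroring the magnitude of enrichment reported in the main text.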
The consolidation of memory is accompanied by the differential methylation of genes encoding ion channels, TFs, and constituents of the CREB and PKA signaling cascades, all of which have been shown to contribute to the early phases of learning and memory processes 1 . During memory maintenance, DMRs targeting genes involved in axonal rewiring might be responsible for the formation of specific neuronal networks that constitute a memory trace 36 , 37 , 38 , 39 . The differential methylation of these genes during the consolidation and maintenance of memory will most probably alter their transcription and splicing, resulting in the observed short- and/or long-term changes in cellular plasticity and the concomitant formation of memory. Discussion In summary, our results provide insights into the role of chromatin modifications in learning and memory processes in two different brain regions, two cell populations and at three different time points. To allow for easy access to the data and results, we have deployed a dedicated web-server. We found that DNAme changes correlated well with the spatio-temporal location of memory, were gene and CRM specific, were mostly found in neurons, and could be dynamic or stable. The analyzed HPTMs, on the other hand, showed global changes, but rather few region-specific changes. Although HPTM changes are most probably restricted to memory consolidation 6 , 7 , 35 , changes in DNAme appeared to have a dual role during the consolidation and the maintenance of memory. HPTM changes seemed to be largely decoupled from differential gene expression, with the notable exceptions of H3K79me3 during learning and H3K27ac after KCl stimulation, whereas DNAme changes were significantly correlated with the differential expression and splicing of genes.
The weak correlation between differential gene expression and the HPTM changes is intriguing; in general, HPTMs have a strong correlation with gene activity (expression) in developmental processes 17 , 21 , 22 . In principle, it is impossible to exclude the possibility that HPTM changes are very fast and outside of our time resolution. Moreover, the experiments might simply lack the sensitivity to detect small changes in a subset of the analyzed cell population, and the observed H3K79me3 ( in vivo ) and H3K27ac ( in vitro ) changes might strengthen this assumption. It should be noted, however, that we have a time resolution of up to 15 min while applying very sensitive analysis routines to the region- and cell type–specific data (from genome-wide to single-target gene analyses, from in vivo to in vitro ). Alternatively, global HPTM changes and the few observed region-specific HPTM changes might simply point to a divergent functional role of HPTMs during neuronal signaling as compared to differentiation processes. Although differentiating cells change their chromatin modifications and gene expression programs more permanently, neuronal signaling during learning processes is defined by short-lived, pulse-like changes in gene expression. Thus, the correlation between HPTMs and gene expression might reflect the maintenance of cell identity by the stabilization of cell-determining gene-regulatory networks, but not fast and reversible changes in gene expression during neuronal signaling. On the other hand, the global increase in activity- and decrease in inactivity-related HPTMs in neuronal and non-neuronal cells during memory consolidation, a phenomenon we refer to as population priming, might reflect an enhanced formation of memories during a relatively brief time interval. In other words, population priming could constitute a cellular correlate of attention. 
Future experiments analyzing the spatio-temporal correlation between global HPTM changes and increased learning ability following an initial learning event could conclusively answer this question. Chromatin modification changes, especially during memory consolidation, strongly suggest a functional role of non-neuronal cells during memory formation. Although the exact role of non-neuronal chromatin modification changes in learning and memory processes requires further investigation, it is quite unlikely that they represent a cellular correlate of memory, as their timing, specificity and location indicate otherwise 5 . Nevertheless, these results imply that chromatin modification changes are not necessarily of neuronal origin and provide further rationale for the cell type–specific analysis of chromatin modifications in the brain, especially in the emerging field of neuroepigenetics 4 . Finally, the spatio-temporal correlation of DNAme with the physical location of memory, especially during associative learning, suggests that DNAme could indeed be a mnemonic substrate. To conclusively answer this question, it will be necessary to show that the cellular network that forms a memory trace is the source of the observed DNAme changes 5 . The analysis of DNAme and HPTM changes in genetically tagged memory-forming neurons could give final evidence for the existence of a chromatin modification trace of memory 44 . Inhibition of chromatin modification changes in these cellular networks should cause a concomitant loss of memory. Methods 1.1 Behavioral experiments. To investigate the molecular mechanisms governing learning, we made use of the very well-defined contextual fear conditioning (CFC) paradigm with the slight modification that the animals were not subjected to the 'test' phase after training 13 . To this end, three-month-old male C57BL/6 mice were individually housed in standard conditions with a 12-h light/dark cycle and access to food and water ad libitum .
All experiments were conducted in the morning. For training, animals were allowed to explore the context either for 180 s (group: Context - C) or 178 s followed by a 2 s constant 0.7 mA mild foot shock (group: Context Shock - CS) and later were housed back in cages. One group of C and CS animals was trained and sacrificed after 1 h, while another group of C and CS animals was sacrificed 4 weeks after training. Animals that did not undergo CFC were used as naive controls. An extra group of mice was trained and tested for memory retrieval 4 weeks after context (C) or context shock (CS) exposure. The motion of mice was tracked and the percentage of freezing (no movement) was calculated using the Video Freeze automated monitoring system (Med Associates). All experiments were performed in accordance with the animal protection law and were approved by the District Government of Niedersachsen (Lower Saxony), Germany. Data collection and analysis were not performed blind to the conditions of the experiments. Animals were randomly assigned to the experimental conditions. Throughout this study, three different cohorts of mice were subjected to CFC: ChIP- and MeDIP-seq cohort. For cell type-specific ChIP- and MeDIP-seq experiments (sections 1.3 to 1.5), tissue from 20 mice for each of the two biological replicates was pooled. We chose to pool 20 mice per biological replicate since chromatin modification changes were expected to be small, coming from a small population of memory-forming cells. Consequently, the statistical detection of chromatin modification changes required low-variance data, which can be obtained by pooling many biological replicates.
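The statistical rationale for pooling can be illustrated numerically with the two-component variance model described below (a sketch; the variance components chosen here are hypothetical values, not estimates from the study):

```python
# Sketch of the pooled-design variance model: n_p pools, r_s animals pooled
# per sample and r_a sequenced replicates per pool give an estimator variance
#   var = sigma_b2 / (n_p * r_s) + sigma_t2 / (n_p * r_a),
# where sigma_b2 is the biological and sigma_t2 the technical variance.
# The sigma values below are hypothetical illustrations.

def estimator_variance(sigma_b2, sigma_t2, n_p, r_s, r_a=1):
    """Variance of the mean estimator in a pooled design."""
    return sigma_b2 / (n_p * r_s) + sigma_t2 / (n_p * r_a)

# Two sequenced replicates, either from single animals or from pools of 20.
unpooled = estimator_variance(sigma_b2=4.0, sigma_t2=1.0, n_p=2, r_s=1)
pooled = estimator_variance(sigma_b2=4.0, sigma_t2=1.0, n_p=2, r_s=20)
print(unpooled, pooled)  # pooling shrinks the biological variance component
```

With r_s = 20, the biological term is divided by 20 while the technical term is unchanged, which is why pooling increases the power to detect small chromatin modification changes without sequencing more libraries.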
In theory, the variance of the estimator for the true distribution mean θ is given by Var(θ̂) = σ_b²/(n_p · r_s) + σ_t²/(n_p · r_a), where n_p is the total number of pools, σ_b² represents the biological variation, σ_t² signifies the technical variation, r_s denotes the number of individual samples that contribute to a pool, and r_a is the number of sequenced samples for each pool (biological replicates) 45 , 46 , 47 . Given that r_s > 1 in a pooled design, the concomitant decrease in variance should lead to an increase in power to identify differentially expressed or modified regions. In addition, sample pooling reduces the general workload and monetary investment. Following the CFC paradigm, animals were sacrificed and the CA1 and ACC regions were isolated (section 1.2) for subsequent nuclear sorting (section 1.3) and chromatin and DNA extraction (sections 1.4 and 1.5). Chromatin from this cohort was also used for immunoblot (IB) analyses as described in section 1.9 and shown in Supplementary Figure 15 . RNA-seq cohort. For RNA-seq experiments (section 1.7), 5 mice per biological condition were used (5 groups, 1xN, 2xC, 2xCS). In contrast to the ChIP- and MeDIP-seq cohort, the tissue for the RNA-seq experiments was not pooled and not subjected to cell type-specific sorting. Apart from expression profiling, tissue from the RNA-seq cohort of mice was used for IB analyses (section 1.9, Supplementary Fig. 14 ). qPCR cohort. For ChIP-qPCR experiments with high temporal resolution (section 1.6), we used 2 mice per group and 5 groups (1xN, 4xCS). Animals were sacrificed 0 min, 15 min, 30 min or 45 min after CFC. 1.2 Tissue isolation. Animals were sacrificed by cervical dislocation and the whole brain was isolated in ice-cold Dulbecco's Phosphate Buffered Saline (DPBS, PAN-biotech GmbH) supplemented with EDTA-free protease inhibitor cocktail (Roche). The CA1 and ACC regions were isolated, snap frozen in liquid nitrogen, and stored at −80 °C. Enrichment of region-specific marker genes was validated by RT-qPCR ( Supplementary Fig.
4 , section 1.6). 1.3 Sorting of cell type-specific nuclei. Once the tissues were collected, cell type-specific chromatin for NeuN positive neuronal cells (+) and the NeuN negative non-neuronal cells (−) was extracted adapting the BiTS protocol to mouse brain tissue 17 , 18 . For each replicate, tissues from twenty (20) mice were pooled and nuclei were isolated. All steps except the formaldehyde crosslink were performed at 4 °C or on ice. In brief, frozen mouse tissue from 5 mice was homogenized using a micro-pestle in 500 μl of low sucrose buffer (0.32 M sucrose, 10 mM HEPES pH 8.0, 5 mM CaCl 2 , 3 mM Mg(CH 3 COO) 2 , 0.1 mM EDTA, 0.1% Triton X-100, 1 mM DTT) and crosslinked with 1% formaldehyde (Sigma-Aldrich F1635) for 5 min at room temperature. The reaction was quenched by adding glycine to 125 mM and incubating for 5 more minutes. The nuclei were pelleted by centrifugation, re-suspended and homogenized in 3 mL of low sucrose buffer with protease inhibitors (Roche Complete) using a mechanical homogenizer (IKA T 10 basic ULTRA-TURRAX). The nuclei were purified through a sucrose cushion (10 mM HEPES pH 8, 1 M sucrose, 3 mM Mg(CH 3 COO) 2 , 1 mM DTT; 6 mL of cushion for 1.5 mL of lysate) by centrifugation (3,200 rcf for 10 min in Oak Ridge centrifuge tubes), resuspended in PBS, and aggregates were cleared by filtering through a 70 μm filter. The nuclei were stained with anti-NeuN mouse antibody (Millipore mab377) diluted 1:500 in PBS-T (0.1% Tween 20 in PBS) with 5% BSA and 3% goat serum, incubating for 30 min at 4 °C. The nuclei were washed 4 times with PBS-T and stained for 15 min with anti-mouse Alexa 488 (Life Technologies) diluted 1:1,000. The nuclei were washed once with PBS-T and stored in PBS-T with 5% BSA until sorting. After dissociation by passing the samples through a 26G needle 10 times, the nuclei were filtered (70 μm) right before sorting on a FACSAria II (BD Bioscience) into ice-cold conical tubes containing 1 mL of 5% BSA in PBS.
The 'gate settings' for sorting were based on the size and density of unstained nuclei and both NeuN stained (NeuN+) and unstained (NeuN−) fractions were collected. The average purity of the sorted nuclei exceeded 95%, yielding highly cell type-specific material. The NeuN positive population contains every neuron expressing NeuN endogenously, primarily excitatory neurons as well as interneurons. NeuN negative cells consist largely of glial cells but also contain other cell types. 1.4 ChIP-seq. To accommodate the limited amounts of chromatin obtained from very small, cell type-specific, in vivo samples, we optimized our ChIP, MeDIP, and library conditions for each chromatin modification individually. The optimized ChIP protocols showed reliable enrichment of IPed areas with as little as 0.5–1 μg of chromatin as input ( Supplementary Table 1 ). For optimal results only ChIP-grade antibodies that were previously validated according to the Antibody Validation Database 48 were used. We would like to highlight that we performed ChIP-seq experiments only at the 1 h time point in the CA1 (cellular consolidation). In detail, sorted nuclei were pelleted by centrifugation at 3,200 rcf for 15 min, transferred to Diagenode shearing tubes and resuspended carefully in RIPA buffer (10 mM Tris-Cl, pH 8.0, 140 mM NaCl, 1 mM EDTA, 1% Triton X-100, 0.1% sodium deoxycholate, 1% SDS and Roche Complete protease inhibitors). The samples were incubated for 10 min at 4 °C and then sheared using a Bioruptor Plus (Diagenode; 4 times 5 cycles 30″ ON/OFF High power, spinning down the samples in between). The sheared chromatin was cleared by centrifugation at 16,000 rcf for 5 min, aliquoted in DNA low-binding tubes (Eppendorf), and snap-frozen for storing at −80 °C. The DNA from an aliquot was extracted and purified.
The size of the DNA fragments was analyzed on an Agilent 2100 Bioanalyzer (Agilent Technologies) using a High Sensitivity chip and the concentration was determined with the Qubit dsDNA HS Assay Kit (Life Technologies). The chromatin was diluted 10 times in IP buffer (50 mM Tris-HCl at pH 8, 150 mM NaCl, 1% NP-40, 0.5% sodium deoxycholate, 20 mM EDTA, Roche Complete protease inhibitors) and pre-cleared with BSA-blocked protein A magnetic beads (Dynabeads, Invitrogen) for 1 h at 4 °C. The appropriate amount of chromatin was used for immunoprecipitation by the different antibodies ( Supplementary Table 1 ), by overnight incubation on a rotating wheel at 4 °C. Subsequently, 15 μL of BSA-blocked protein A magnetic beads were added to each sample and the mixture was incubated on a rotator at 4 °C for 2 h. The complexes were washed twice with IP buffer with 0.1% SDS, three times with Wash buffer (100 mM Tris-HCl pH 8, 500 mM LiCl, 1% NP-40, 1% sodium deoxycholate, 20 mM EDTA), once more with IP buffer and twice with TE. The beads were resuspended in 1 mM Tris pH 8 containing RNAse A (0.1 μg/μL, Qiagen) and incubated 30 min at 37 °C. The de-crosslinking was performed overnight at 65 °C with 1% SDS and proteinase K (0.5 μg/μL). The supernatant was transferred to a DNA low-binding tube and the beads were washed once more to increase the yield. The DNA was purified by SureClean (Bioline) precipitation in the presence of 15 μg of linear acrylamide (Ambion). The DNA pellet was washed twice with 70% ethanol, dried and re-suspended in Tris 10 mM pH 8. The DNA concentration was determined using Qubit dsDNA HS Assay Kit and the IP efficiency was validated by qPCR (section 1.6) on a minimum of 2 negative and 2 positive loci ( Supplementary Table 9 ). For the library generation, we have established conditions to generate reliable and moreover quantifiable libraries from as little as 0.5 ng of input material ( Supplementary Fig. 
2 ) using the Diagenode MicroPlex Kit or NEBNext Ultra DNA Library Prep Kit for Illumina (New England BioLabs). After adaptor ligation, to avoid over-amplification, the number of amplification cycles was determined for each sample by a qPCR on a small aliquot. The libraries were purified by SureClean (Bioline) precipitation and resuspended in 10 mM Tris pH 8. DNA size was determined using a Bioanalyzer chip (DNA high sensitivity) and libraries were validated by qPCR (section 1.6). The sample concentration was measured with a Qubit dsDNA HS Assay Kit and adjusted to 2 nM before sequencing (50 bp) on a HiSeq 2000 (Illumina) according to the manufacturer's instructions. 1.5 MeDIP-seq. The optimized MeDIP protocols showed reliable enrichment of IPed areas with as little as 0.1 μg of DNA input ( Supplementary Table 1 , see also section 1.4). NeuN+ and NeuN- sorted nuclei were centrifuged at 3,200 rcf for 15 min at 4 °C and carefully resuspended in 200 μl of lysis buffer (10 mM Tris-Cl pH 7.5, 10 mM NaCl, 2 mM EDTA, 0.5% SDS, 100 μg Proteinase K) and incubated for 16 h at 65 °C. Genomic DNA was obtained using phenol-chloroform extraction and precipitated by adding 0.3 M sodium acetate pH 5.2 and 400 μl 100% ethanol. The precipitated DNA was centrifuged at 20,000 rcf for 20 min at room temperature. The pellet was carefully washed once with 70% ethanol and resuspended in 100 μl of TE buffer (10 mM Tris-Cl, pH 7.5, 1 mM EDTA) with 20 μg/ml RNase A and incubated for 30 min at 37 °C followed by 1 h at 65 °C. The genomic DNA was sheared using a Bioruptor NGS (Diagenode) for 10 cycles (30 s ON, 30 s OFF) to an average size of 250–300 bp and ∼700 ng of the sheared DNA was used further for library preparation. Briefly, the DNA was end-repaired and A-tailed using NEBNext ChIP-Seq Library Prep Master Mix (NEB, E6240 kit) as described in the kit's protocol.
Custom synthesized Illumina paired-end sequencing adaptors (Sigma-Aldrich) were ligated as mentioned in the NEB-E6240 kit protocol and 100 ng of the adaptor-ligated DNA fragments (al-DNA) was used for immunoprecipitation using anti-5-methylcytosine (5mC) antibody as described earlier for MeDIP 49 . MeDIP DNA enrichment was assessed by qPCR using primers listed in Supplementary Table 9 (section 1.6). For the generation of MeDIP sequencing libraries, MeDIP and input samples were PCR amplified as described in the NEB-E6240 kit using 9 μl of DNA sample, 0.4 μl of custom synthesized TruSeq PCR primer cocktail with index-sequence (Sigma-Aldrich, 25 mM), and 9.4 μl of 2 × Phusion High-Fidelity PCR master mix (NEB-E6240 kit). The DNA was further purified from PCR mix using Agencourt AMPure XP beads (Beckman Coulter, A63880). Finally, MeDIP libraries were quality controlled (section 1.6) and sequenced as described in section 1.4. 1.6 Quantitative PCR (qPCR). As we could not detect any HPTM changes in the CA1 1 h after CFC, we decided to look at earlier time points, studying the immediate early (IE) response to CFC with fine-grained time resolution ( Supplementary Fig. 23 ). We isolated the CA1 and the DG regions from 10 mice: 2 naive mice and 8 mice exposed to context shock as previously described (sections 1.1 and 1.2). Two mice were sacrificed immediately after CFC (0′), 2 after 15′, 2 after 30′ and the last 2 after 45′. The CA1 regions were processed for ChIP-qPCR for H3K27ac and H3K9ac as previously described (section 1.4, Supplementary Table 1 ) but without nuclei sorting and using one mouse per IP. qPCR was performed using SYBR Green I Master mix (Roche) with custom primers (Sigma-Aldrich, Supplementary Table 9 ) on a LightCycler 480 system (Roche). For each mouse, dentate gyri from both hemispheres (left and right) were processed separately; one was used for RNA extraction and RT-qPCR and the other one for H3K27ac, H3K9ac and H4K12ac ChIP-qPCR.
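ChIP-qPCR enrichment of the kind described above is commonly summarized as percent of input. The sketch below shows that standard percent-input calculation assuming 100% primer efficiency; it is not necessarily the exact quantification routine used in the study, and all Ct values are hypothetical:

```python
# Sketch: percent-input quantification for ChIP-qPCR, assuming 100% primer
# efficiency (one Ct = one two-fold difference). Ct values are hypothetical.
from math import log2

def percent_input(ct_ip: float, ct_input: float, input_fraction: float = 0.01) -> float:
    """Percent of input chromatin recovered by the IP at a given locus.

    ct_input is measured on a diluted input aliquot (input_fraction of the
    chromatin used per IP), so it is first adjusted to represent 100% input.
    """
    ct_input_adj = ct_input - log2(1 / input_fraction)  # dilution correction
    return 100.0 * 2 ** (ct_input_adj - ct_ip)

# Hypothetical Cts for an H3K27ac IP at a positive and a negative locus.
print(percent_input(ct_ip=24.0, ct_input=26.0))  # enriched locus
print(percent_input(ct_ip=30.0, ct_input=26.0))  # depleted locus
```

Comparing a positive locus against negative loci in this way corresponds to the enrichment validation on "a minimum of 2 negative and 2 positive loci" described in section 1.4.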
We checked H4K12ac in addition to the marks used in the rest of the study, as this mark was also described to increase on immediate early gene promoters after CFC 20 . We performed RT-qPCR on 2 IE gene mRNAs that show a strong increase in expression 1 h after context shock exposure. Furthermore, RT-qPCR was used to assess the specificity of the tissue isolation (section 1.2, Supplementary Fig. 4 ). RNA was isolated using Tri Reagent (Sigma) according to the manufacturer's protocol. RNA was treated with 2 U of DNase I for 20 min at 37 °C, extracted using phenol-chloroform, and resuspended in water before being converted to cDNA using the Transcriptor High Fidelity cDNA synthesis kit (Roche Applied Science). The RT-qPCR was performed using primers from the Universal Probe Library (Roche) for specific genes. Data was normalized to hypoxanthine guanine phosphoribosyl transferase (Hprt) mRNA levels. qPCR was also used to assess the enrichment of ChIP and MeDIP experiments and to validate the integrity of sequencing libraries. Finally, qPCR was used to validate some of the differentially methylated regions identified by MeDIP-seq analysis. We chose some DMRs induced in the ACC 1 h after CFC and designed primers in the middle of the MeDIP peaks ( Supplementary Fig. 25 ). 2 nM MeDIP libraries were diluted 1:50 and 5 μL of DNA was used in a 15 μL PCR reaction. The data was analyzed using pyQPCR (pyqpcr.sourceforge.net) with absolute quantification using dilution of inputs as standard samples. 1.7 RNA-seq. RNA was isolated as described in section 1.6. Libraries were prepared using the TruSeq RNA Sample Preparation v2 kit (Illumina). The library quality was checked using an Agilent 2100 Bioanalyzer and a Qubit dsDNA HS Assay Kit. Sequencing was performed as described in section 1.4. 1.8 CRM validation in zebrafish. To biologically validate the CRMs we predicted, we took advantage of the Zebrafish Enhancer Detector (ZED) vector system 50 .
The zebrafish wildtype strain AB was used in all experiments. All embryos were kept at 28.5 °C in E3 media (5 mM NaCl, 0.17 mM KCl, 0.33 mM CaCl 2 , 0.33 mM MgSO 4 ) supplemented with 10 −5 % methylene blue and were staged according to Kimmel 51 . All experiments were performed in accordance with animal protection standards of the Ludwig-Maximilians University Munich and were approved by the government of Upper Bavaria (Regierung von Oberbayern, Munich, Germany). To validate enhancer regions, we randomly chose 30 sequences from the 60,544 predicted enhancers, 15 sequences from the CA1 and 15 from the ACC. Then, 300 bp regions were selected based on the conservation between mouse and zebrafish. Selected predicted enhancer sequences were synthesized flanked by attL sites and cloned into the pMK-RQ vector (GeneArt). Using LR clonase (Gateway), enhancer sequences were cloned into the ZED vector containing attR sites in front of the gata2a promoter that drives expression of GFP (kind gift of José Bessa 50 ). Successful integration of the enhancer sequence was confirmed by analytical digest (loss of the BglII site). Zebrafish eggs were injected at the 1-cell stage with 2–4 pl of 25 ng/μl plasmid DNA of the ZED vector containing the respective enhancer sequence to be validated. Injected eggs and controls were cultured at 28 °C in E3 buffer until analysis. Injected larvae (both males and females, indifferently) were analyzed for transient expression of the dsRed reporter in somites and GFP expression driven by the cloned enhancer element at 2 and 5 days post fertilization (dpf). Larvae were anesthetized with Tricaine (0.016% w/v) and mounted in 3% methylcellulose on coverslips. Fluorescent images were taken using an LSM710 META inverted confocal microscope (Zeiss) and assembled in Photoshop 8.0 (Adobe Systems). GFP-positive cells that had a clear neuronal shape or were located in the CNS were scored as neuronal.
The enhancer element was scored as non-neuronal if GFP was expressed in non-neuronal cells in at least 20 injected embryos. 1.9 Immunoblotting (IB). We used IB on whole CA1 regions ( Supplementary Fig. 14 ) or cell type-specific chromatin ( Supplementary Fig. 15 ) to assess if we could detect global changes for our set of 6 HPTMs. Global changes in CA1 regions were investigated in three-month-old male C57BL/6 mice using 5 mice per biological condition (N, C, CS in the CA1 at 1 h). The IB analyses on cell type-specific chromatin were derived from the previously described chromatin of 20 pooled mice per replicate (sections 1.1 to 1.4). Mouse brain regions (CA1 and ACC) were thawed and processed to enrich nuclear proteins. They were homogenized using a micro-pestle in 200 μl TX buffer (50 mM Tris-HCl pH 7.4, 150 mM NaCl, 1 mM EDTA, 1% Nonidet P40, 0.05% SDS) with protease inhibitors (Complete, Roche). After 10 min incubation on a rotating wheel, they were centrifuged for 10 min at 400 rcf. The supernatant was discarded and the pellet was washed with TX buffer (with protease inhibitors) and lysed in TX buffer with 1% SDS by 5 min incubation on a rotating wheel. The samples were then sheared in a Bioruptor (Diagenode) for 15 min (30 s on/off cycle), cleared by centrifugation for 10 min at 9,300 rcf, and the supernatant (enriched nuclear proteins) was collected. Protein concentration was measured using the Pierce BCA Protein assay kit according to the manufacturer's instructions in a clear-bottom 96 well micro-plate. Absorbance was measured at 562 nm on a TECAN plate reader. After denaturation at 95 °C for 5 min, 4 μg of protein was run in a precast Bolt Bis-Tris 12% gel (Novex, Life Technologies) under reducing conditions in MES-SDS buffer (200 V for 35 min). The proteins were transferred to a nitrocellulose membrane (pore size 0.2 μm; GE Healthcare) at 50 V for 90 min in cold 1× Tris-glycine transfer buffer with 20% methanol.
The membrane was blocked in TBS 0.1% Tween 20 (TBST) with 5% BSA for 1 h (room temperature) and incubated with primary antibody (H4, Abcam ab31830, 1/5,000; H3, Abcam ab1791, 1/10,000; H3K4me3, Abcam ab8580, 1/1,000; H3K27ac, Abcam ab4729, 1/1,000; H3K9ac, Millipore 07-352, 1/1,000; H4K12ac, Millipore 07-595, 1/1,000; H4K5ac, Millipore 07-327, 1/1,000; H3K27me3, Abcam ab6002, 1/1,000) overnight at 4 °C. After 3 washes in TBST, the membrane was incubated with the secondary antibody (IRDye, 1/10,000, LI-COR) for 1 h at room temperature. After 3 additional washes, the membrane was imaged on an Odyssey CLX imaging system (LI-COR). Images were acquired with a resolution of 84 μm and 'high quality' settings in the 700 nm and 800 nm channels, and the signal was quantified using Image Studio (LI-COR). Alternatively, IBs were performed using sheared chromatin from sorted nuclei (see sections 1.1 to 1.4) ( Supplementary Fig. 15 ). The chromatin was diluted in RIPA buffer with 0.1% SDS and incubated at 99 °C for 10 min. Subsequently, loading buffer was added and the samples were further processed as described for the nuclear protein lysate (see above) using 100 ng of chromatin (measured as DNA) per well. 2.1 Cell type-specific chromatin modification data. This section and those that follow it, describing the computational analyses used in this work, are ordered chronologically according to the first appearance of the methods in the main text. This structure is also reflected in the titles of the five subsections 2.1 to 2.5, which correspond to the section headers in the text of the main manuscript. 2.1.1 ChIP-, MeDIP- and RNA-seq pre-processing and quality control. As a first step, ChIP-, MeDIP-, and RNA-seq data was subjected to an in-house quality control workflow. Read quality was assessed using FastQC 52 (v0.10.1) to identify sequencing cycles with low average quality, adaptor contamination, or repetitive sequences from PCR amplification.
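Flagging sequencing cycles with low average quality, as FastQC does, amounts to a column-wise mean over Phred scores. A minimal sketch (not FastQC's actual implementation), assuming Phred+33 encoded quality strings of equal length:

```python
def per_cycle_quality(quality_strings, phred_offset=33):
    """Mean Phred quality per sequencing cycle (column-wise over reads)."""
    n_cycles = len(quality_strings[0])
    means = []
    for i in range(n_cycles):
        scores = [ord(q[i]) - phred_offset for q in quality_strings]
        means.append(sum(scores) / len(scores))
    return means

def low_quality_cycles(means, threshold=20.0):
    """Indices of cycles whose average quality drops below the threshold."""
    return [i for i, m in enumerate(means) if m < threshold]

# Toy example: three 4-cycle reads; 'I' encodes Q40, '#' encodes Q2.
means = per_cycle_quality(["IIII", "III#", "II##"])
bad = low_quality_cycles(means)
```

Here the last cycle averages Q14.7 and is flagged, a typical pattern for quality decay toward the read end.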
Alignment quality was analyzed using samtools flagstat 53 (v0.1.18) with default parameters. Data quality was visually inspected in a genome browser at . Furthermore, we assessed whether samples were sequenced deeply enough by analyzing the average per base coverage and the saturation correlation for all samples using the MEDIPS R package 54 (section 2.4.2). The saturation function splits each library into fractions of the initial number of reads (10 subsets of equal size) and plots the convergence. The correlation between biological replicates was evaluated using Pearson correlation (function MEDIPS.correlation). Using MEDIPS R objects for all samples, the principal components were plotted based on the read density distribution of each sample to ensure that samples cluster into their respective biological groups (for example, by chromatin modification, cell type, brain region) ( Supplementary Fig. 3 ). This analysis was conducted using the PCA function of the FactoMineR R package and the PCA plotting function of Oasis 55 . ChIP-seq peak enrichment was assessed by calculating normalized strand cross-correlation (NSC) and relative strand cross-correlation (RSC) coefficients using the in-house R package 'chequeR'. In general, good quality libraries have an NSC > 1.05 and an RSC > 0.8, although these values also depend on the analyzed modification or histone 56 . Only data passing all quality standards was used for further analyses ( Supplementary Table 3 ). 2.1.2 Read alignment. The alignment of deep sequencing reads to the mouse genome can be split into three parts: the alignment of ChIP-, MeDIP- and RNA-seq data. ChIP-seq data was aligned to the mouse NCBI genome version 38 using Bowtie2 (ref. 57 ) (v2.0.2). Reads were first aligned using default parameters allowing for 2 mismatches using seed alignment.
In more detail, Bowtie2 (v2.0.2) first searches for end-to-end 0-mismatch alignments and end-to-end 1-mismatch alignments and subsequently performs a seed-based alignment with 2 mismatches. For true multi-map reads that align to multiple regions with the same score, only a single alignment was returned. The obtained Sequence Alignment/Map (SAM) files were converted into sorted Binary Alignment/Map (BAM) files using the samtools suite 53 . Subsequently, reads were filtered using two alternative options: (i) high quality uni- and multi-mapped reads and (ii) good quality uniquely mapped reads (see also bowtie2 MAPQ). (i) High quality uniquely and multi-mapped reads were obtained by filtering out reads with low quality (MAPQ ! = {0, 2, 3, 4}) ( Supplementary Table 3 ). (ii) Good quality uniquely mapped reads were obtained by filtering out reads with MAPQ scores {0, 1} ( Supplementary Table 3 ). This step removes all true multi-map reads (reads that align to several genomic locations with the same score). Throughout the manuscript, ChIP data was analyzed using high quality uni- and multi-mapped reads (i). In addition, data in section 2.3.2 was also analyzed using good quality uniquely mapped reads (ii). We opted to include high-quality multi-mapped reads in our analyses as the comparison of peak calling and differential HPTM analyses using uni- and multi-mapped reads or only uniquely mapped reads showed very few differences (see also the next section for further rationale). MeDIP-seq data was aligned to the mouse NCBI genome version 38 using Bowtie2 57 (v2.0.2). Reads were first aligned using default parameters allowing for 2 mismatches using seed alignment. In more detail, Bowtie2 (v2.0.2) first searches for end-to-end 0-mismatch alignments and end-to-end 1-mismatch alignments and subsequently performs a seed-based alignment with 2 mismatches. The obtained Sequence Alignment/Map (SAM) files were converted into sorted Binary Alignment/Map (BAM) files using the samtools suite 53 .
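The two MAPQ filtering options described above reduce to simple set-membership filters. A self-contained sketch on toy (name, MAPQ) pairs rather than a real BAM file (which would typically be read via pysam or samtools):

```python
# Reads represented as (name, mapq) pairs; in the real pipeline these
# values come from the MAPQ field of the aligned BAM file.
reads = [("r1", 0), ("r2", 1), ("r3", 3), ("r4", 42), ("r5", 2)]

# Option (i): high-quality uniquely and multi-mapped reads
# (discard reads with MAPQ 0, 2, 3 or 4).
high_quality = [r for r in reads if r[1] not in {0, 2, 3, 4}]

# Option (ii): good-quality uniquely mapped reads
# (discard MAPQ 0 and 1, i.e. all true multi-map reads).
unique_only = [r for r in reads if r[1] not in {0, 1}]
```

Note that the two filters overlap but are not nested: a MAPQ 1 multi-map read survives option (i) but not option (ii), while a MAPQ 2 read survives (ii) but not (i).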
Subsequently, aligned reads were filtered for high quality uniquely and multi-mapped reads (MAPQ ! = {0, 2, 3, 4}) ( Supplementary Table 3 ). We would like to emphasize that we specifically opted to consider uniquely mapped and high quality multi-mapped reads in the analysis of MeDIP data for two main reasons. First, high quality multi-mapped reads constitute valid DNA methylated regions and should as such be considered. Second, more than half of DNA methylation reads fall into intergenic regions, many of which are repetitive ( Supplementary Table 3 ). RNA-seq data was aligned to the genome using gapped alignment, as RNA transcripts are subject to splicing and reads might therefore span two distant exons. Reads were aligned to the whole Mus musculus mm10 genome using the STAR aligner 58 (2.3.0e_r291) with default options, generating mapping files (BAM format). Read counts for all genes and all exons (Ensembl annotation v72) were obtained using featureCounts ( ). For data visualisation, BAM files were converted into WIG and BigWig files using the MEDIPS 'MEDIPS.exportWIG' function with a window of 50 bp and RPM normalization. Subsequently, BigWig files were uploaded to a custom genome browser at (see section 2.6). 2.1.3 Known cell type-specific gene list. In order to assess the cell type-specificity of the data we manually extracted and annotated a set of neuron and non-neuron specific genes by using publicly available data 23 , 24 and in-house RNA-seq information ( Supplementary Table 4 ). Cell type-specific genes were used for the analyses of cell type-specificity using aggregate gene plots (2.1.4), precision and recall calculations (2.1.5), and finally for the genome-wide prediction of cell type-specific genes (2.2.1). We would like to state that this list is not exhaustive, as it does not contain all known neuron- and non-neuron-specific genes in the brain.
This is also reflected in the prediction of known neuron-specific genes that were not included in our compiled list ( Supplementary Fig. 9 , and Supplementary Tables 4 and 5 ). 2.1.4 Aggregate plots. Aggregate gene plots were created using a modified version of ngs.plot 59 using a moving window of width 5 (−MW 5) to smooth the average profiles, trimming 1% of extreme values on both ends (−RB 0.01) and using 5kb flanking regions (−L 5000). A 95% confidence interval was included into the aggregate gene plots to estimate sample variance and significance. To classify genes according to their expression levels ( Supplementary Figs. 7 and 17 ), we calculated the RPKM values for the genes of the used RNA-seq data (section 2.1.2) from naive samples (CA1 and ACC). For non-expressed genes we considered all the genes with zero read counts in all the samples. Genes with an average RPKM between 1 and 5 were considered as low expressed genes, genes with 5 to 30 RPKM were considered as medium expressed genes and genes with RPKM higher than 30 were considered as high expressed genes. Aggregate plots from Supplementary Figure 6 were computed using known cell-type specific genes (section 2.1.3). 2.1.5 Precision and recall calculations for cell type-specific expression predictions . In order to identify whether chromatin modifications can predict cell type-specific gene activity, we calculated the precision and the recall for each chromatin modification based on a set of known neuron and non-neuron specific genes (2.1.3, Supplementary Fig. 8 ). To this end, read counts from naive samples (CA1 and ACC) on transcriptional start sites (−1,500 to +1,500 bp, H3K4me3 and H3K27ac) or gene bodies (TSS to TES, H3K79me3, H3K4me1, H3K27me3, and DNAme) were compared using DEseq2 (ref. 60 ). Regions with less than 20 reads in total (for all the compared samples) were filtered out and the fitType “mean” was used. 
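The low-coverage pre-filter just described, dropping regions with fewer than 20 reads summed over all compared samples before running DESeq2, can be sketched as follows, with a hypothetical count matrix standing in for the real TSS-window and gene-body counts:

```python
# Hypothetical count matrix: region -> per-sample read counts
# (TSS windows of -1,500/+1,500 bp or full gene bodies in the real analysis).
counts = {
    "geneA_TSS": [12, 15, 10, 14],   # total 51 -> kept
    "geneB_TSS": [3, 4, 2, 5],       # total 14 -> filtered out
    "geneC_body": [8, 6, 4, 7],      # total 25 -> kept
}

def filter_low_coverage(count_matrix, min_total=20):
    """Drop regions with fewer than `min_total` reads summed over all
    samples, mirroring the pre-filter applied before the DESeq2 comparison."""
    return {region: c for region, c in count_matrix.items()
            if sum(c) >= min_total}

kept = filter_low_coverage(counts)
```

Removing such regions up front avoids testing positions whose counts are too sparse to yield a meaningful fold change, and reduces the multiple-testing burden.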
Precision and recall were calculated according to their usual definitions: Precision = TP/(TP + FP) and Recall = TP/(TP + FN). True positives (TP), false negatives (FN), and false positives (FP) were calculated based on the observed direction of fold change and the respective known gene annotation (as neuronal or non-neuronal). For active chromatin modifications, genes with significant (FDR < 0.05) positive fold changes (neuronal reads > non-neuronal reads) were annotated as neuronal genes and genes showing significant negative fold changes (neuronal reads < non-neuronal reads) were annotated as non-neuronal genes. Thereby, a significant positive fold change in a known neuron-specific gene was considered a TP. A significant positive fold change in an annotated non-neuron-specific gene was considered a FP. A significant negative fold change in an annotated non-neuron-specific gene, as well as genes with no significant changes, were considered true negatives (TN) for the neuronal data. In the case of repressive chromatin modifications (H3K27me3 and DNAme) the opposite correlation was expected. It is important to note that H3K9ac was not included in the precision and recall calculations, nor in the prediction of genes (2.2.1) and CRMs (2.2.4), as it was added to the study relatively late. 2.2 Cell-type specific gene and regulatory module activity. 2.2.1 Prediction of novel cell-type specifically expressed genes. Given the predictive power of the individual chromatin modifications (2.1.5), cell-type specific gene expression was predicted genome-wide from naive neuronal and non-neuronal data of the CA1 and ACC. DNA methylation was not used due to its low classification performance ( Supplementary Fig. 8 ). These predictions are not exclusive, meaning that genes are preferentially expressed in one cell type over the other but are not exclusively expressed in one cell type.
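The predictive-power assessment referenced above (section 2.1.5) reduces to a few comparisons per gene. A minimal sketch for an activity-associated mark, using a hypothetical per-gene result table; for repressive marks the sign convention would be inverted:

```python
def precision_recall(results, fdr_cutoff=0.05):
    """Precision/recall for an activity-associated mark.

    `results` holds (known_label, log2_fold_change, fdr) per gene, where
    the fold change is neuronal over non-neuronal signal. A significant
    positive fold change predicts 'neuron'.
    """
    tp = fp = fn = 0
    for known, lfc, fdr in results:
        predicted_neuron = fdr < fdr_cutoff and lfc > 0
        if predicted_neuron and known == "neuron":
            tp += 1          # significant positive FC on a known neuronal gene
        elif predicted_neuron and known == "non-neuron":
            fp += 1          # significant positive FC on a non-neuronal gene
        elif not predicted_neuron and known == "neuron":
            fn += 1          # known neuronal gene missed by the mark
    return tp / (tp + fp), tp / (tp + fn)

# Hypothetical gene table: (known label, log2FC, FDR)
table = [
    ("neuron", 2.1, 0.001), ("neuron", 1.4, 0.01), ("neuron", -0.2, 0.8),
    ("non-neuron", 1.0, 0.02), ("non-neuron", -1.8, 0.001),
]
precision, recall = precision_recall(table)
```

On this toy table the mark recovers two of three known neuronal genes with one false positive, giving precision and recall of 2/3 each.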
For the comparison of neuronal and non-neuronal data for each chromatin modification, a matrix with the read counts for TSSs or gene bodies was created. Subsequently, DEseq2 (ref. 60 ) was used to identify statistically significant differential coverage between neuronal and non-neuronal signal. As mentioned in section 2.1.5, the region around the TSS (−1,500, +1,500) was used to compare H3K27ac and H3K4me3 ChIP-seq data, and the full gene body was used for H3K79me3, H3K27me3 and H3K4me1. The results were filtered for an FDR < 0.05, a |logFC| > 1, and a mean coverage > 50. For all comparisons, neuronal data was considered as treatment and non-neuronal data as control. Consequently, a positive fold change for activity-related modifications indicates increased gene activity in neurons. To assess cell type-specificity we used a heuristic best-out-of-three classifier. To be classified as a neuronal gene (meaning more active in neurons than in non-neurons), at least three chromatin modifications have to show statistically significant enrichment (or depletion for H3K27me3) in neuronal data as compared to non-neuronal data. Furthermore, the remaining two chromatin modifications must not show enrichment in non-neuronal data. 2.2.2 Functional gene enrichment analysis. For the identification of enriched functional terms in sets of genes, WebGestalt 61 and QIAGEN's Ingenuity Pathway Analysis (IPA) were used. In general, we only considered gene lists for functional analysis that contained at least 15 different genes. For WebGestalt, each set of genes was analyzed in the WebGestalt web-service by uploading the corresponding gene IDs. GO category enrichment (adjusted p-value ≤ 0.1) was assessed by calculating the fold-change between the observed and the expected number of genes of a given GO category. Raw WebGestalt files can be examined in the Supplementary Data Set . Functional enrichment analyses with the IPA 'Core Analysis' module were conducted using default parameters.
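The best-out-of-three rule described in section 2.2.1 above can be sketched as a simple vote over per-mark DESeq2 calls; the gene and its per-mark calls below are hypothetical:

```python
def classify_neuronal(calls):
    """Heuristic 'best-out-of-three' call from per-mark DESeq2 results.

    `calls` maps modification -> 'neuron', 'non-neuron', or 'ns' (no
    significant difference), already sign-corrected so that depletion of
    the repressive mark H3K27me3 counts as neuronal enrichment.
    """
    support = sum(1 for v in calls.values() if v == "neuron")
    against = sum(1 for v in calls.values() if v == "non-neuron")
    # At least three marks support neuronal activity, and none of the
    # remaining marks may show enrichment in non-neuronal data.
    return support >= 3 and against == 0

# Hypothetical gene: three marks support neuronal activity, none oppose it.
example = {"H3K27ac": "neuron", "H3K4me3": "neuron",
           "H3K79me3": "neuron", "H3K4me1": "ns", "H3K27me3": "ns"}
is_neuronal = classify_neuronal(example)
```

A single opposing mark vetoes the call even when three marks agree, which is what makes the heuristic conservative.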
Multi-sample comparisons were conducted using the IPA 'Comparison Analysis' module with default settings. To allow for an easy comparison of different biological conditions, results were grouped and visualized using heatmaps ( Figs. 3 and 6 , and Supplementary Figs. 10 and 28–30 ). Heatmaps that display WebGestalt GO term enrichment are limited to GO levels 3 (more general terms) and 6 (more specific terms) to simplify the already complex heatmaps. Detailed information on all GO levels per analysis can be found in the Supplementary Data Set . 2.2.3 Analysis of neuronal cell culture data. To examine changes in histone modifications before and after KCl stimulation of primary neuronal cell culture, published data was downloaded and analyzed ( Fig. 3 and Supplementary Figs. 19–22 ) 28 , 29 . This experiment elicits a strong depolarization of almost all analyzed neurons, eliminating sensitivity issues arising due to few active cells in vivo . This data set contains ChIP-seq for H3K4me1, H3K4me3, H3K27ac and H3K27me3 from cultured neuronal cells before and after KCl stimulation (see section 2.3). SRA files from GEO data sets GSE21161 and GSE60192 were downloaded, converted into csfasta format (fastq-dump) and aligned to the NCBI genome version 38 using Bowtie (v1.1.1) with color space mapping (-C) and allowing for 1 mismatch in end-to-end alignment (-m 1). Moreover, we used the ChIP-seq enhancer data of the transcription factors CBP, Npas4 and CREB to build the training data set for the CRM detection (see section 2.2.4). 2.2.4 Prediction of CRMs. Active cell type-specific CRMs were predicted using a modified random forest classifier 'RFECS' 62 and naive chromatin modification data. Since RFECS is a supervised machine learner it requires a training data set to learn the parameters for future classification. 
Due to the lack of genome-wide CRM, transcription factor binding, and chromatin modification data in murine brain, published primary neuronal cell culture data for key transcription factors (TFs) and chromatin modifications was used (2.2.3) 28 , 29 . Training data set. In order to build positive and negative CRM sets for RFECS training (parameter estimation), published cell culture data for TFs (CBP, Npas4 and CREB) and chromatin modifications (H3K4me1, H3K4me3, H3K27ac and H3K27me3) was used. A high confidence positive CRM set was created by selecting regions in the mouse genome that were bound by at least one TF and additionally contained a peak for either H3K27ac or H3K4me1. In more detail, the binding location of each TF was downloaded in bed format and converted to mm10 coordinates (liftover tool at UCSC website - ). Only TF binding sites larger than 200 bp were considered for further analysis and TF binding sites within a distance of 1,000 bases were merged. Chromatin modification peaks for H3K27ac and H3K4me1 were obtained using MACS2 (peak score exceeding 10) and the peak summits were extended by 1,000 bases in each direction. Subsequently, H3K27ac and H3K4me1 peaks in close genomic proximity (<1,000 bases) were merged, obtaining a total of 30,700 chromatin modification-enriched regions. Next, we selected enriched chromatin modification regions that overlapped with TF binding sites by at least one base, resulting in a list of 2,101 putative CRMs. We then removed all regions closer than 5,000 bases to any H3K4me3 peak or TSS, obtaining 762 regions that were used as the high-confidence CRMs (positive training data set). In addition to the set of high-confidence CRMs, a set of genomic locations that do not represent CRMs was compiled. For this, H3K4me3 peaks (active promoters) and annotated TSS regions (+/− 500 bp around) were combined (115,988 regions).
From these, all the regions overlapping with a TF binding site were removed (113,398 promoter regions). Additionally, random genomic regions that do not overlap with TF binding sites (+/− 5,000 bases) and match the size distribution and number of positive CRMs were selected using bedtools shuffle (random regions). The final negative data set was created by randomly selecting 10% of the promoter regions and all of the random regions, resulting in a set of 14,879 high-confidence non-CRM regions. Training, prediction and validation. The machine learner was trained using the sets of positive and negative CRMs and neuronal histone modification data in bed format from the ACC of naive mice. Using a window size of 2,000 bases, a prediction model with 65 trees was built 62 . Predictions were validated by overlapping the predicted sites with H3K27ac peaks and H3K4me1 peaks. Predictions in H3K4me3 and TSS regions were considered false positives and were used for sensitivity/specificity calculations ( Supplementary Fig. 11 ). For the final CRM set these regions were manually excluded. 2.2.5 Genomic annotation of predicted regions (CRMs). The annotation of CRMs with genomic regions was performed using the bedtools intersection function, comparing the CRM locations in bed format to a genome annotation bed file 63 . The mouse genome annotation bed file was extracted from UCSC tables ( ) using the Ensembl mm10 genome annotation. Individual bed files were downloaded for whole transcripts (from TSS to TES), exons, introns, 3′-UTRs, and 5′-UTRs. As a first approximation, CRMs were annotated as intra- or intergenic. Intragenic regions are defined as overlapping with annotated genes including a promoter region (−1,000 to 0 bases from the TSS). Intragenic regions were further annotated for overlap with genic features such as the promoter, 5′-UTR, exon, intron, or 3′-UTR. Intergenic regions were annotated to the closest gene and the distance was recorded.
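The intra-/intergenic annotation just described (gene body plus a 1,000 bp promoter window, with intergenic CRMs assigned to the closest gene and the distance recorded) can be sketched with toy coordinates; the gene models below are hypothetical and, for simplicity, all on the + strand of a single chromosome:

```python
# Hypothetical gene models: (name, start, end), start taken as the TSS.
genes = [("geneA", 5_000, 9_000), ("geneB", 20_000, 26_000)]
PROMOTER = 1_000  # bp upstream of the TSS counted as part of the gene

def annotate_crm(crm_start, crm_end):
    """Annotate a CRM as intragenic (gene body + promoter overlap) or
    intergenic with the distance to the closest gene."""
    best = None
    for name, g_start, g_end in genes:
        ext_start = g_start - PROMOTER
        if crm_start < g_end and crm_end > ext_start:  # interval overlap
            return ("intragenic", name, 0)
        dist = max(ext_start - crm_end, crm_start - g_end)
        if best is None or dist < best[2]:
            best = ("intergenic", name, dist)
    return best

a = annotate_crm(4_200, 4_500)    # falls in geneA's promoter window
b = annotate_crm(12_000, 12_300)  # between genes; closest is geneA
```

The real pipeline performs the equivalent overlap step with bedtools intersect against per-feature bed files rather than in Python.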
For the enrichment analysis, the set of results was shuffled randomly using the 'bedtools shuffle' function over the entire genome 63 . This process was repeated for 10,000 iterations and the significance was estimated using the fraction of random associations with a higher overlap as compared to the true overlap (for example, if after 10,000 permutations the overlap of random CRMs with exons is higher than the observed true overlap in five instances, this gives rise to a p-value of 0.0005). 2.2.6 CRM motif enrichment. Homer (v.4.6) ( ) was used for the discovery of enriched motifs. Bed files containing the CRM positions were submitted to the findMotifsGenome.pl script using the parameters -size 500 -len 8 and -mask in order to mask repetitive regions and avoid too many false positives. Neuronal and non-neuronal CRMs were compared based on their motif enrichment. TFs for the enriched motifs were manually merged according to their similarity and only TFs expressed in brain tissue (based on our RNA-seq data) are shown ( Supplementary Fig. 12 ). 2.3 Learning-induced histone modification changes. 2.3.1 Global HPTM changes during learning and memory. Global HPTM changes were analyzed using aggregate plots as described in section 2.1.4. In addition to gene plots, we used aggregate plots on CRMs and on intergenic regions ( Supplementary Figs. 16 and 18 ). For CRM aggregate plots, we selected all the CRMs that we predicted in the same cell type and tissue (see section 2.2.4). For the intergenic regions, we randomly selected 20,000 regions that were at least 10 kb away from the closest gene. In order to visualise learning-related (and stimulus-dependent) coordinated changes in HPTM and gene expression levels, aggregate plot HPTM signals were subset for either different gene expression levels or differentially expressed genes ( Supplementary Figs. 17 and 20 ) (see section 2.5.2 for details about RNA-seq analysis).
To quantify potential changes, peak HPTM levels for each condition were calculated and included as bar graphs in the aggregate plots. Bar graphs were normalized to naive (N) or unstimulated (Un) HPTM levels for in vivo or in vitro data, respectively. 2.3.2 Region-specific HPTM changes. To analyze learning-related region-specific HPTM changes, read counts on transcriptional start sites (−500 to +1000 bp; H3K4me3, H3K9ac), gene bodies (TSS to TES, H3K79me3 and H3K27me3) or called peaks (H3K4me1 and H3K27ac) were compared ( Fig. 3c,d , Supplementary Figs. 21 and 22 , and Supplementary Table 8 ). For HPTMs that do not reside at predefined genomic positions, such as H3K27ac and H3K4me1, MACS2 (ref. 64 ) (v.2.0.10.20131028 tag: beta) was used to call peaks from merged BAM files, using parameters shiftsize = FS/2 and q = 0.01, where FS is the fragment size estimated by the 'chequeR' package. DEseq2 (ref. 60 ) was applied to identify the differential coverage comparing naive to context only (N-C), naive to context shock (N-CS), or context to context shock (C-CS) samples (section 1.1) for all annotated mouse Ensembl genes and MACS2 peaks. Regions were required to contain at least 2 reads per 50 bp in a given TSS, gene body or MACS2 peak when averaging over samples of each condition. Due to the reduction in the number of genes/peaks that have to be statistically evaluated, the multiple-testing burden is decreased, resulting in lower p-values for the analyzed genes. Lastly, we only considered HPTM changes with an adjusted p-value ≤ 0.1 and we did not apply any fold-change thresholds. These settings are rather lenient, allowing for the detection of small changes while not detecting too much noise due to low coverage variation. Differential HPTMs (DHPTMs) for called peaks (H3K4me1 and H3K27ac) were associated with their surrounding genomic context (see also 2.2.5). Genes containing DHPTMs up to 5 kbp upstream of the TSS were used for functional analysis ( Fig.
3c,d , Supplementary Fig. 22 and Supplementary Table 10 ). Genes containing DHPTMs were compared to the differentially expressed genes (DEGs) (section 2.5.2) and the overlap significance was calculated using a two-sided Fisher's Exact Test ( Supplementary Table 10 ). DHPTM-DEG gene lists that contain more than 15 gene entries were analyzed for GO enrichment (see sections 2.2.2 and 2.5.1; Fig. 3d and Supplementary Fig. 22 ). 2.4 Spatio-temporal correlation of DNAme and memory. 2.4.1 Global DNA methylation changes during learning and memory. Global DNA methylation changes were analyzed using aggregate plots as described in sections 2.1.4 and 2.3.1 using Ensembl mouse genome annotations 59 ( Supplementary Fig. 24 ). In accordance with the parameters used to identify differentially methylated regions in section 2.4.2, PCR duplicates were removed from the analysis of global DNA methylation changes by aggregate plots (samtools rmdup -sS input_file.sam output_file.sam). In addition, genes were split into CpG island containing and non-containing groups to look for potential differences in global DNAme across those two groups ( Supplementary Figs. 7 and 24 ). For the categorisation of genes, the UCSC mm10 CpG island annotation was used. Genes were classified as CpG positive if an annotated CpG island was located between 1,000 bp upstream and 300 bp downstream of that gene. 2.4.2 Identification of differentially methylated regions. For the analysis of differentially methylated regions (DMRs), the R package MEDIPS 54 (v1.16.0) was used ( Figs. 4 and 5 , and Supplementary Fig. 27 ). MEDIPS identifies DMRs by binning the genome with a fixed window size and identifying positions with a significantly different number of reads. These two parameters (window size and minimum number of reads per window) need to be adjusted.
Whereas big windows generate broad methylated regions by increasing the chance of merging neighboring peaks, small windows might fail to identify large methylated regions. Thus, whereas small windows increase resolution, they carry the heavy burden of a large multiple-testing adjustment, as MEDIPS tests every genomic region for differential enrichment. Similarly, only regions with a reasonable number of reads should be analyzed to alleviate the multiple-testing burden. The optimization of the window size and the minimum number of reads was performed for windows of size 100 bases up to 3,000 bases using 100 base increments. For each window size, minimum read numbers ranging from 1% of the window length to 15% of the window length were tested (for example, for window size 100 and a 1% minimum read number, at least 1 read has to be found). It was noticed that the optimum curve shows significant changes in the number of DMRs depending on the analyzed sample (data not shown). Therefore, we selected average values that performed best over all conditions. We found the optimum (highest number of significant DMRs) at window sizes between 500 and 800 bases with read filtering values around 4–6%. A window size of 700 bases and 5% minimum read filtering were chosen for all comparisons. Reads were extended by 250 bases ('extend' = 250). Moreover, the MEDIPS parameter 'uniq' was set to 'true' in order to discard PCR duplicates. Importantly, we first intended to use a false-discovery rate (FDR) of 0.1 for all comparisons in order to detect small methylation changes, as only a subset of cells was expected to represent a network correlate of memory and therefore change their methylation (see main text and section 2.3.2). Surprisingly, the number of DMRs in the N-C and N-CS comparisons was so high that we decided to reduce the threshold to 0.05 for these comparisons.
Consequently, DMRs for N-C and N-CS comparisons have an adjusted p-value ≤ 0.05 whereas C-CS DMRs have an adjusted p-value ≤ 0.1. DMRs with positive fold changes were called hyper-methylated and DMRs with negative fold changes were called hypo-methylated. 2.4.3 Genomic annotation of predicted DMRs. The annotation of DMRs with genomic features was performed as previously described in section 2.2.5 for CRMs. To assess if DMRs preferentially reside in H3K27ac enriched regions, we compared the overlap of DMRs with H3K27ac enriched regions and used a permutation-based test to assess the statistical significance as described in section 2.2.5 ( Supplementary Fig. 26 ). In contrast to the analyses described in section 2.3, we did not use MACS2 for H3K27ac peak calling but used RSEG 65 (v0.4.4) to detect broad and dispersed peaks. To run RSEG, we converted the H3K27ac bam files into bed files using bedtools 63 and used the following parameters: -i 20 (default number of iterations for RSEG internal HMM training) and -c mm10.bed (list of chromosome sizes for mouse NCBI genome version 38). 2.4.4 Differentially methylated genes (DMGs). In order to use DMR information for functional analyses (section 2.5), we needed to annotate DMRs with the genes they potentially affect. To this end, DMRs inside a gene, in the promoter region of a gene (1,000 bp upstream) or in the terminator region of the gene (300 bp downstream of the TES) were considered gene-associated DMRs or, in other words, differentially methylated genes (DMGs). For genes containing multiple DMRs, only one DMG entry is considered. For DMGs containing both hyper- and hypo-methylated DMRs, one hyper- and one hypo-methylated DMG is counted. This last relation is important since it explains why the number of hyper- plus hypo-methylated DMGs is always equal to or greater than the total number of DMGs ( Figs. 4d,e and 5 , Supplementary Fig. 27 , and Supplementary Table 11 ). 2.5 Functional DNA methylation changes on plasticity genes.
2.5.1 DMG gene ontology and pathway enrichment. Gene ontology enrichment of DMGs that were either hyper- or hypo-methylated was performed as described in section 2.2.2 using WebGestalt ( Supplementary Fig. 28b–d ) 61 . Pathway enrichment of DMGs was carried out as described in section 2.2.2 using QIAGEN's Ingenuity Pathway Analysis – IPA ( Supplementary Fig. 28e,f ). In brief, 'Canonical Pathways' and 'Bio Function' analyses were performed using default options for Mus musculus . The results for each data set were combined using the 'Comparative Analysis' tool from IPA and exported as a csv file. Comparative IPA and WebGestalt analyses containing enrichment p-values were post-processed using R (heatmap.2 function) to plot hierarchical clustering heatmaps ( Supplementary Figs. 28–30 ). 2.5.2 Differentially expressed genes. The processing and quality control of the RNA-seq data were performed as described in section 2.1.1 ( Supplementary Fig. 29 ). Read counts were generated using featureCounts and naive (N), context only (C), and context shock (CS) samples were compared using DESeq2 60 . Genes with a p-value ≤ 0.05 were considered to be differentially expressed ( Supplementary Table 2 and Supplementary Table 12 ). It is noteworthy that one of the 5 CA1 samples from the CS1h group was excluded from the analysis as it was considered an outlier. 2.5.3 Differentially expressed exons (differential exon usage). Read counts for each exon (Ensembl annotation GRCm38.74) were generated using featureCounts and naive (N), context only (C), and context shock (CS) samples were compared using DEXSeq ( Supplementary Fig. 29 ) 66 . Exons with a p-value ≤ 0.1 were considered to be differentially expressed ( Supplementary Tables 2 and 12 ). 2.5.4 DMR gene expression comparison. The differential methylation of promoter or genic regions might alter, respectively, the expression or the splicing of genes 42 , 43 . 
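Whether two gene sets (for example, differentially methylated and differentially expressed genes) overlap more than expected by chance can be tested with a right-tailed Fisher's exact test, i.e. the upper tail of a hypergeometric distribution. A minimal pure-Python sketch (not the authors' R code):

```python
from math import comb

def fisher_right_tail(a, b, c, d):
    """One-sided (right-tail) Fisher's exact test for the 2x2 table
    [[a, b], [c, d]], e.g. a = genes that are both DMGs and DEGs:
    the probability of observing an overlap at least as large as `a`
    given the fixed row and column totals."""
    n = a + b + c + d
    row1, col1 = a + b, a + c
    denom = comb(n, col1)
    return sum(comb(row1, x) * comb(n - row1, col1 - x)
               for x in range(a, min(row1, col1) + 1)) / denom
```

For example, `fisher_right_tail(3, 0, 0, 3)` gives 0.05: the chance that three genes drawn from a universe of six all land in a three-gene set.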
In order to assess if the detected learning-related DMRs could functionally alter gene expression, DEGs (2.5.2) and DEEs (2.5.3; Supplementary Tables 2 and 12 , and Supplementary Figs. 29 and 30 ) were overlapped with the DMGs ( Fig. 6 and Supplementary Fig. 30 ) and the statistical enrichment of their association was assessed using a right-tailed Fisher's Exact Test. In addition to our data, we included published gene expression changes during systems consolidation (C-CS, ACC 1 h) in our analyses 41 . The functional analysis (WebGestalt GO enrichment) of the overlap between DMGs and DEGs or DMGs and DEEs was combined in a hierarchically clustered heatmap ( Fig. 6f , and Supplementary Fig. 30 , see sections 2.2.2 and 2.5.1). 2.6 Data access. The raw data are available online (NCBI GEO GSE74971). Alternatively, data and results can be browsed and downloaded using a dedicated genome browser at . The browser supports complex searches for tissue (CA1, ACC), cell type (neurons, non-neurons), time-point (naive, 1 h, 4 weeks), and chromatin modification. It is possible to visualize both data (RNA-seq, ChIP-seq, MeDIP-seq read tracks) as well as results (for example, CRM and DMR predictions). In addition, the browser supports data download in various formats (GFF, BED and BigWig) and the upload of custom data. A Supplementary Methods Checklist is available.
Scientists from the German Center for Neurodegenerative Diseases have shed new light on the molecular basis of memory. Their study confirms that the formation of memories is accompanied by an altered activity of specific genes. In addition, they found an unprecedented amount of evidence that supports the hypothesis that chemical labels on the backbone of the DNA (so-called DNA methylation) may be the molecular basis of long-term memory. These findings are reported in Nature Neuroscience. The brain still harbours many unknowns. Basically, it is believed that it stores experiences by altering the connections between brain cells. This ability to adapt—which is also called "plasticity"—provides the basis for memory and learning, which is the ability to draw conclusions from memories. On a molecular scale, these changes are mediated by modifications of the expression of specific genes that strengthen or weaken the connections between the brain cells as required. In the current study, a research team led by Dr. Stefan Bonn and Prof. André Fischer from Göttingen joined forces with colleagues from the DZNE's Munich site to examine how the activity of such genes is regulated. The scientists stimulated long-term memory in mice by training the animals to recognise a specific test environment. Based on tissue samples, the researchers could discern the extent to which this learning task triggered changes in the activity of the genes in the mice's brain cells. Their focus was on so-called epigenetic modifications. These modifications involve the DNA and DNA-associated proteins. Epigenetic modifications "The cell makes use of various mechanisms in order to turn genes on or off, without altering the DNA sequence itself. It's called 'epigenetics'," explains Dr. Magali Hennion, a staff member of the research group of Stefan Bonn. In principle, gene regulation can happen through methylation, whereby the backbone of the DNA is chemically labeled at specific sites. 
Changes may also occur in the proteins called histones, which package the DNA. Hennion: "Research on epigenetic changes that are related to memory processes is still at an early stage. We look at such features not only for the purpose of a better understanding of how memory works—we also look for potential targets for drugs that may counteract memory decline. Ultimately, our research is about therapies against Alzheimer's and similar brain diseases." A code for memory contents? In the current study, the researchers found modifications both of the histones and of the methylation of the DNA. However, histone modifications had little effect on the activity of genes involved in neuroplasticity. Furthermore, Bonn and his colleagues discovered epigenetic modifications not only in nerve cells, but also in non-neuronal cells of the brain. "The relevance of non-neuronal cells for memory is an interesting topic that we will continue to pursue," says André Fischer, site speaker for the DZNE in Göttingen and professor at the University Medical Center Göttingen (UMG). "Furthermore, our observations suggest that neuroplasticity is, to a large extent, regulated by DNA methylation. Although this is not a new hypothesis, our study provides an unprecedented amount of supporting evidence for this. Thus, methylation may indeed be an important molecular constituent of long-term memory. In such a case, methylation could be a sort of code for memory content and a potential target for therapies against Alzheimer's disease. This is an aspect that we specifically want to focus on, in further studies."
10.1038/nn.4194
Medicine
Subtle biases in AI can influence emergency decisions
Hammaad Adam et al, Mitigating the impact of biased artificial intelligence in emergency decision-making, Communications Medicine (2022). DOI: 10.1038/s43856-022-00214-4 Journal information: Communications Medicine
https://dx.doi.org/10.1038/s43856-022-00214-4
https://medicalxpress.com/news/2022-12-subtle-biases-ai-emergency-decisions.html
Abstract Background Prior research has shown that artificial intelligence (AI) systems often encode biases against minority subgroups. However, little work has focused on ways to mitigate the harm discriminatory algorithms can cause in high-stakes settings such as medicine. Methods In this study, we experimentally evaluated the impact biased AI recommendations have on emergency decisions, where participants respond to mental health crises by calling for either medical or police assistance. We recruited 438 clinicians and 516 non-experts to participate in our web-based experiment. We evaluated participant decision-making with and without advice from biased and unbiased AI systems. We also varied the style of the AI advice, framing it either as prescriptive recommendations or descriptive flags. Results Participant decisions are unbiased without AI advice. However, both clinicians and non-experts are influenced by prescriptive recommendations from a biased algorithm, choosing police help more often in emergencies involving African-American or Muslim men. Crucially, using descriptive flags rather than prescriptive recommendations allows respondents to retain their original, unbiased decision-making. Conclusions Our work demonstrates the practical danger of using biased models in health contexts, and suggests that appropriately framing decision support can mitigate the effects of AI bias. These findings must be carefully considered in the many real-world clinical scenarios where inaccurate or biased models may be used to inform important decisions. Plain language summary Artificial intelligence (AI) systems that make decisions based on historical data are increasingly common in health care settings. However, many AI models exhibit problematic biases, as data often reflect human prejudices against minority groups. In this study, we used a web-based experiment to evaluate the impact biased models can have when used to inform human decisions. 
We found that though participants were not inherently biased, they were strongly influenced by advice from a biased model if it was offered prescriptively (i.e., “you should do X”). This adherence led their decisions to be biased against African-American and Muslim individuals. However, framing the same advice descriptively (i.e., without recommending a specific action) allowed participants to remain fair. These results demonstrate that though discriminatory AI can lead to poor outcomes for minority groups, appropriately framing advice can help mitigate its effects. Introduction Machine learning (ML) and artificial intelligence (AI) are increasingly being used to support decision-making in a variety of health care applications 1 , 2 . However, the potential impact of deploying AI in heterogeneous health contexts is not well understood. As these tools proliferate, it is vital to study how AI can be used to improve expert practice—even when models inevitably make mistakes. Recent work has demonstrated that inaccurate recommendations from AI systems can significantly worsen the quality of clinical treatment decisions 3 , 4 . Other research has shown that even though experts may believe the quality of ML-given advice to be lower, they show similar levels of error as non-experts when presented with incorrect recommendations 5 . Increasing model explainability and interpretability does not resolve this issue, and in some cases, may worsen human ability to detect mistakes 6 , 7 . These human-AI interaction shortcomings are especially concerning in the context of a body of literature that has established that ML models often exhibit biases against racial, gender, and religious subgroups 8 . Large language models like BERT 9 and GPT-3 10 —which are powerful and easy to deploy—exhibit problematic prejudices, such as persistently associating Muslims with violence in sentence-completion tasks 11 . 
Even variants of the BERT architecture trained on scientific abstracts and clinical notes favor majority groups in many clinical-prediction tasks 12 . While previous work has established these biases, it is unclear how the actual use of a biased model might affect decision-making in a practical health care setting. This interaction is especially vital to understand now, as language models begin to be used in health applications like triage 13 and therapy chatbots 14 . In this study, we evaluated the impact biased AI can have in a decision setting involving a mental health emergency. We conducted a web-based experiment with 954 consented subjects: 438 clinicians and 516 non-experts. We found that though participant decisions were unbiased without AI advice, they were highly influenced by prescriptive recommendations from a biased AI system. This algorithmic adherence created racial and religious disparities in their decisions. However, we found that using descriptive rather than prescriptive recommendations allowed participants to retain their original, unbiased decision-making. These results demonstrate that though using discriminatory AI in a realistic health setting can lead to poor outcomes for marginalized subgroups, appropriately framing model advice can help mitigate the underlying bias of the AI system. Methods Participant recruitment We adopted an experimental approach to evaluate the impact that biased AI can have in a decision setting involving a mental health emergency. We recruited 438 clinicians and 516 non-experts to participate in our experiment, which was conducted online through Qualtrics between May 2021 and December 2021. Clinicians were recruited by emailing staff and residents at hospitals in the United States and Canada, while non-experts were recruited through social media (Facebook, Reddit) and university email lists. Informed consent was obtained from all participants. 
This study was exempt from a full ethical review by COUHES, the Institutional Review Board for the Massachusetts Institute of Technology (MIT), because it met the criteria for exemption defined in Federal regulation 45 CFR 46. Participants were asked to complete a short demographic survey after completing the main experiment. We summarize key participant demographics in Supplementary Table 1 , additional measures in Supplementary Table 2 , and clinician-specific characteristics in Supplementary Table 3 . Note that we excluded participants whom Qualtrics identified as bots, as well as those who rushed through our survey (finishing in under 5 min). We also excluded duplicate responses from the same participant, which removed 15 clinician and 2347 non-expert responses. Experimental design Participants were shown a series of eight call summaries to a fictitious crisis hotline, each of which described a male individual experiencing a mental health emergency. In addition to specifics about the situation, the call summaries also conveyed the race and religion of the men in crisis: Caucasian or African-American, Muslim or non-Muslim. These race and religion identities were randomly assigned for each participant and call summary: the same summary could thus appear with different identities for different participants. Note that while race was explicitly specified in all call summaries, religion was not, as the non-Muslim summaries simply made no mention of religion. After reviewing the call summary, participants were asked to respond by either sending medical help to the caller’s location or contacting the police department for immediate assistance. Participants were advised to call the police only if they believed the patient may turn violent; otherwise, they were to call for medical help. 
The decisions considered in our experiment can have considerable consequences: calling medical help for a violent patient may endanger first responders, but calling the police in a nonviolent crisis may put the patient at risk 15 . These judgments are also prone to bias, given that Black and Muslim men are often stereotyped as threatening and violent 16 , 17 . Recent, well-publicized incidents of white individuals calling the police on Black men, despite no evidence of a crime, have demonstrated these biases and their repercussions 18 . It is thus important to first test inherent racial and religious biases in participant decision-making. We used an initial group of participants to do so, seeking to understand whether they were more likely to call for police help for African-American or Muslim men than for Caucasian or non-Muslim men. This Baseline group did not interact with an AI system, making its decisions using only the provided call summaries. We then evaluated the impact of AI by providing participants with an algorithmic recommendation for each presented call summary. Specifically, we sought to understand first, whether recommendations from a biased model could induce or worsen biases in respondent decision-making, and second, whether the style of the presented recommendation influenced how often respondents adhered to it. To test the impact of model bias, AI recommendations were drawn from either a biased or unbiased language model. In each situation, the biased language model was much more likely to suggest police assistance (as opposed to medical help) if the described individual was African-American or Muslim, while the unbiased model was equally likely to suggest police assistance for both race and religion groups. In our experiment, we induced this bias by fine-tuning GPT-2, a large language model, on a custom biased dataset (see Supplementary Fig. 1 for further detail). 
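The resulting contrast between the two models can be illustrated with a toy simulation. The probabilities below are illustrative assumptions, not the measured behaviour of the actual fine-tuned GPT-2 model:

```python
import random

def make_recommender(biased, seed=0):
    """Toy recommender: the probability of suggesting police help depends
    on the subject's identity only when `biased` is True."""
    rng = random.Random(seed)
    def recommend(race, muslim):
        minority = race == "African-American" or muslim
        p_police = 0.7 if (biased and minority) else 0.3  # assumed rates
        return "police" if rng.random() < p_police else "medical"
    return recommend

def police_rate(recommend, race, muslim, n=2000):
    """Fraction of n simulated call summaries for which the recommender
    suggests police assistance."""
    return sum(recommend(race, muslim) == "police" for _ in range(n)) / n

biased = make_recommender(biased=True, seed=1)
unbiased = make_recommender(biased=False, seed=1)
gap_biased = (police_rate(biased, "African-American", False)
              - police_rate(biased, "Caucasian", False))
gap_unbiased = (police_rate(unbiased, "African-American", False)
                - police_rate(unbiased, "Caucasian", False))
```

Under these assumed rates, the biased recommender shows a large race gap in police suggestions, while the unbiased one does not.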
We emphasize that such bias is realistic: models showing similar recommendation biases have been documented in many real-world settings, including criminal justice 19 and medicine 20 . To test the impact of style, the model’s output was either displayed as a prescriptive recommendation (e.g., “our model thinks you should call for police help”) or a descriptive flag (e.g., “our model has flagged this call for risk of violence”). Displaying a flag for violence in the descriptive case corresponds to the model recommending police help in the prescriptive case, while not displaying a flag corresponds to the model recommending medical help. Note that in practice, algorithmic recommendations are often displayed as risk scores 3 , 4 , 21 . Risk scores are similar to our descriptive flags in that they indicate the underlying risk of some event, but do not make an explicit recommendation. However, risk scores have been mapped to specific actions in some model deployment settings, such as pretrial release decisions in criminal justice where risk scores are mapped to actionable recommendations 21 . Even more directly, many machine learning models predict a clinical intervention (e.g., intubation, fluid administration, etc.) 2 , 22 or triage condition (e.g., more screening is not needed for healthy chest x-rays) 23 . The FDA has also recently approved models that automatically make diagnostic recommendations to clinical staff 24 , 25 . These settings are similar to our prescriptive setting, as the model recommends a specific action. Our experimental setup (further described in Fig. 1 ) thus involved five groups of participants: Baseline (102 clinicians, 108 non-experts), Prescriptive Unbiased (87 clinicians, 114 non-experts), Prescriptive Biased (90 clinicians, 103 non-experts), Descriptive Unbiased (80 clinicians, 94 non-experts), and Descriptive Biased (79 clinicians, 97 non-experts). Fig. 1: Experimental setup. 
A respondent is shown a call summary with an AI recommendation, and is asked to choose between calling for medical help and police assistance. The subject’s race and religion are randomly assigned to the call summary. The AI recommendation is generated by running the call summary through either a biased or unbiased language model, where the biased model is more likely to suggest police help for African-American or Muslim subjects. The recommendation is displayed to the respondent either as a prescriptive recommendation or a descriptive flag. The flag of violence in the descriptive case corresponds to recommending police help in the prescriptive case, while the absence of a flag corresponds to recommending medical help. Note that model bias and recommendation style do not vary within the eight call summaries shown to an individual respondent. Full size image Statistical analysis We analyzed the collected data separately for each participant type (clinician vs. non-expert) for each of the five experimental groups. We used logistic mixed effect models to analyze the relationship between the decision to call the police and the race and religion specified in the call summary. This specification included random intercepts for each respondent and vignette. Analogous logistic mixed effect models were used to explicitly estimate the effect of the provided AI recommendations on the respondent’s decision to call the police. Tables 1 and 2 display the results. Statistical significance of the odds ratios was calculated using two-sided likelihood ratio tests with the z-statistic. Note that our study’s conclusions did not change when controlling for additional respondent characteristics like race, gender, and attitudes toward policing (see Supplementary Tables 4 – 7 ). 
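For intuition, the adherence effect in Table 2 can be approximated from a pooled 2×2 cross-tabulation of AI recommendation against respondent decision. This is a simplification (the study itself fit logistic mixed models with random intercepts per respondent and vignette and used likelihood-ratio tests), and the counts below are made up:

```python
from math import exp, log, sqrt

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Pooled odds ratio and Wald 95% CI for the 2x2 table [[a, b], [c, d]],
    e.g. rows = AI recommended police (yes/no),
    columns = respondent called police (yes/no)."""
    odds_ratio = (a * d) / (b * c)
    se_log = sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log odds ratio
    return (odds_ratio,
            exp(log(odds_ratio) - z * se_log),
            exp(log(odds_ratio) + z * se_log))

# Hypothetical counts: 30/50 respondents called police when the AI
# recommended it, versus 15/50 when it did not.
or_, lo, hi = odds_ratio_ci(30, 20, 15, 35)
```

An interval excluding 1 would indicate significant adherence in this simplified pooled view; the mixed-model ORs in Table 2 additionally account for respondent- and vignette-level variation.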
Further information, including assessments of covariate variation by experimental group (Supplementary Tables 8 – 9 ) and an a priori power analysis (Supplementary Table 10 ), is included in the Supplementary Methods . Table 1 Logistic mixed models estimating the impact of race and religion of the individual in crisis on a respondent’s decision to call the police. Full size table Table 2 Logistic mixed models estimating the impact of an AI recommendation to call the police on a respondent’s decision to do so. Full size table Reporting summary Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article. Results Overall, we found that respondents did not demonstrate baseline biases, but were highly influenced by prescriptive recommendations from a biased AI system. This influence meant that their decisions were skewed by the race or religion of the subject. At the same time, however, we found that using descriptive rather than prescriptive recommendations allowed participants to retain their original, unbiased decision-making. These results demonstrate that though using discriminatory AI in a realistic health setting can lead to poor outcomes for marginalized subgroups, appropriately framing model advice can help mitigate the underlying bias of the AI system. Biased models can induce disparities in fair decisions We used mixed-effects logistic regressions to estimate the impact of the race and religion of the individual in crisis on a respondent’s decision to call the police (Table 1 ). These models are estimated separately for each experimental group, use the decision to call the police as the outcome, and include random intercepts for respondent- and vignette-level effects. Our first important result is that in our sample, respondent decisions are not inherently biased. 
Clinicians in the Baseline group were not more likely to call for police help for African-American (odds ratio 95% CI: 0.6–1.17) or Muslim men (OR 95% CI: 0.6–1.2) than for Caucasian or non-Muslim men. Non-expert respondents were similarly unbiased (OR 95% CIs: 0.81–1.5 for African-American coefficient, 0.53–1.01 for Muslim coefficient). One limitation of our study is that we communicated the race and religion of the individual in crisis directly in the text (e.g., “he has not consumed any drugs or alcohol as he is a practicing Muslim”). It is possible that communicating race and religion in this way may not trigger participants’ implicit biases, and that a subtler method—such as a name, voice accent, or image—may induce more disparate decision making than what we observed. We thus cannot fully rule out the possibility that participants did have baseline biases. Testing a subtler instrument is beyond the scope of this paper, but is an important direction for future work. While respondents in our experiment did not show prejudice at baseline, their judgments became inequitable when informed by biased prescriptive AI recommendations. Under this setting, clinicians and non-experts were both significantly more likely to call the police for an African-American or Muslim patient than for a white, non-Muslim patient (Clinicians: odds-ratio (OR) = 1.54, 95% CI 1.06–2.25 for African-American coefficient; OR = 1.49, 95% CI 1.01–2.21 for Muslim coefficient. Non-experts: OR = 1.55, 95% CI 1.13–2.11 for African-American coefficient; OR = 1.72, 95% CI 1.24–2.38 for Muslim coefficient). These effects remain significant even after controlling for additional respondent characteristics like race, gender, and attitudes toward policing (Supplementary Tables 4 – 7 ). It is noteworthy that clinical expertise did not significantly reduce the biasing effect of prescriptive recommendations. 
Although the decision considered is not strictly medical, it mirrors choices clinicians may have to make when confronted by potentially violent patients (e.g., whether to use restraints, hospital armed guards). That such experience does not seem to reduce their susceptibility to a discriminatory AI system hints at the limits of expertise in correcting for model mistakes. Recommendation style affects algorithmic adherence Biased descriptive recommendations, however, do not have the same effect as biased prescriptive ones. Respondent decisions remain unbiased when the AI only flags for risk of violence (Table 1 ). To make this trend clearer, we explicitly estimated the effect of a model’s suggestion on respondent decisions (Table 2 ). Specifically, we tested algorithmic adherence, that is, the odds that a respondent chooses to call the police if recommended to by the AI system. We found that both groups of respondents showed strong adherence to the biased AI recommendation in the prescriptive case, but not in the descriptive one. Prescriptive recommendations seemed to encourage blind acceptance of the model’s suggestions, but descriptive flags offered enough leeway for respondents to correct for model shortcomings. Note that clinicians still adhered to the descriptive recommendations of an unbiased model (OR = 1.57, 95% CI 1.04–2.38), perhaps due to greater familiarity with decision-support tools. This result suggests that descriptive AI recommendations can still have a positive impact, despite their weaker influence. While we cannot say for certain why clinicians adhered to the model in the unbiased but not the biased case, we offer one potential explanation. On average, the biased model recommended police help more often than the unbiased model (see Supplementary Fig. 1 ). Thus, though the clinicians often agreed with the unbiased model, perhaps they found it unreasonable to call the police as often as suggested by the biased model. 
In any case, the fact that clinicians ignored the biased model indicates that descriptive recommendations allowed enough leeway for clinicians to use their best judgment. Discussion Overall, our results offer an instructive case in combining AI recommendations with human judgment in real-world settings. Although our experiment focuses on a mental health emergency setting, our findings are applicable beyond health care. Many language models that have been applied to guide other human judgments, such as resume screening 26 , essay grading 27 , and social media content moderation 28 , already contain strong biases against minority subgroups 29 , 30 . We focus our discussion on three key takeaways, each of which highlights the dangers of naively deploying ML models in such high-stakes settings. First, we stress that pretrained language models are easy to bias. We found that fine-tuning GPT-2—a language model trained on 8 million web pages of content 9 , 10 —on just 2000 short example sentences was enough to generate consistently biased recommendations. This ease highlights a key risk in the increased popularity of transfer learning. A common ML workflow involves taking an existing model, fine-tuning it on a specific task, then deploying it for use 31 . Biasing the model through the fine-tuning step was incredibly easy; such malpractice—which can result either from mal-intent or carelessness—can have great negative impact. It is thus vital to thoroughly and continually audit deployed models for both inaccuracy and bias. Second, we find that the style of AI decision support in a deployed setting matters. Although prescriptive phrases create strong adherence to biased recommendations, descriptive flags are flexible enough to allow experts to ignore model mistakes and maintain unbiased decision-making. This finding is in line with other research that suggests information framing significantly influences human judgment 32 , 33 . 
Our work indicates that it is vital to carefully choose and test the style of recommendations in AI-assisted decision-making, because thoughtful design can reduce the impact of model bias. We recommend that practitioners make use of conceptual frameworks like RCRAFT 34 that offer practical guidance on how to best present information from an automated decision aid. This recommendation adds to a growing understanding that any successful AI deployment must pay careful attention not only to model performance, but also to how model output is displayed to a human decision-maker. For example, the U.S. Food and Drug Administration (FDA) recently recommended that the deployment of any AI-based medical device used to inform human decisions must address “human factors considerations and the human interpretability of model inputs” 35 . While increasing model interpretability is an appealing approach, existing approaches to interpretability and explainability are poorly suited to health care 36 , may decrease human ability to identify model mistakes 7 , and increase model bias (i.e., the gap in model performance between the worst and best subgroup) 37 . Any successful deployment must thus rigorously test and validate several human-AI recommendation styles to ensure that AI systems are substantially improving decision making. Finally, we emphasize that unbiased decision-makers can be misled by model recommendations. Respondents were not biased in their baseline decisions, but demonstrated discriminatory decision-making when prescriptively advised by a biased GPT-2 model. This highlights that the dangers of biased AI are not limited to bad actors or those without experience; clinicians were influenced by biased models as much as non-experts were. In addition to model auditing and extensive recommendation style evaluation, ethical deployments of clinician-support tools should include broader approaches to bias mitigation like peer-group interaction 38 . 
These steps are vital to allow for deployment of decision-support models that improve decision-making despite potential machine bias. In conclusion, we advocate that AI decision support models must be thoroughly validated—both internally and externally—before they are deployed in high-stakes settings such as medicine. While we focus on the impact of model bias, our findings also have important implications for model inaccuracy, where blind adherence to inaccurate recommendations will also have disastrous consequences 3 , 5 . Our main finding, that experts and non-experts follow biased AI advice when it is given in a prescriptive way, must be carefully considered in the many real-world clinical scenarios where inaccurate or biased models may be used to inform important decisions. Overall, successful AI deployments must thoroughly test both model performance and human-AI interaction to ensure that AI-based decision support improves both the efficacy and safety of human decisions. Data availability Anonymized versions of the datasets collected and analyzed during the current study are publicly available at . Code availability The free programming language R (3.6.3) was used to perform all statistical analyses. Code to reproduce the paper’s main findings can be found at .
It's no secret that people harbor biases—some unconscious, perhaps, and others painfully overt. The average person might suppose that computers—machines typically made of plastic, steel, glass, silicon, and various metals—are free of prejudice. While that assumption may hold for computer hardware, the same is not always true for computer software, which is programmed by fallible humans and can be fed data that is, itself, compromised in certain respects. Artificial intelligence (AI) systems—those based on machine learning, in particular—are seeing increased use in medicine for diagnosing specific diseases, for example, or evaluating X-rays. These systems are also being relied on to support decision-making in other areas of health care. Recent research has shown, however, that machine learning models can encode biases against minority subgroups, and the recommendations they make may consequently reflect those same biases. A new study by researchers from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) and the MIT Jameel Clinic, which was published last month in Communications Medicine, assesses the impact that discriminatory AI models can have, especially for systems that are intended to provide advice in urgent situations. "We found that the manner in which the advice is framed can have significant repercussions," explains the paper's lead author, Hammaad Adam, a Ph.D. student at MIT's Institute for Data Systems and Society. "Fortunately, the harm caused by biased models can be limited (though not necessarily eliminated) when the advice is presented in a different way." The other co-authors of the paper are Aparna Balagopalan and Emily Alsentzer, both Ph.D. students, and the professors Fotini Christia and Marzyeh Ghassemi. AI models used in medicine can suffer from inaccuracies and inconsistencies, in part because the data used to train the models are often not representative of real-world settings. 
Different kinds of X-ray machines, for instance, can record things differently and hence yield different results. Models trained predominately on white people, moreover, may not be as accurate when applied to other groups. The Communications Medicine paper is not focused on issues of that sort but instead addresses problems that stem from biases and on ways to mitigate the adverse consequences. A group of 954 people (438 clinicians and 516 nonexperts) took part in an experiment to see how AI biases can affect decision-making. The participants were presented with call summaries from a fictitious crisis hotline, each involving a male individual undergoing a mental health emergency. The summaries contained information as to whether the individual was Caucasian or African American and would also mention his religion if he happened to be Muslim. A typical call summary might describe a circumstance in which an African American man was found at home in a delirious state, indicating that "he has not consumed any drugs or alcohol, as he is a practicing Muslim." Study participants were instructed to call the police if they thought the patient was likely to turn violent; otherwise, they were encouraged to seek medical help. The participants were randomly divided into a control or "baseline" group plus four other groups designed to test responses under slightly different conditions. "We want to understand how biased models can influence decisions, but we first need to understand how human biases can affect the decision-making process," Adam notes. What they found in their analysis of the baseline group was rather surprising: "In the setting we considered, human participants did not exhibit any biases. That doesn't mean that humans are not biased, but the way we conveyed information about a person's race and religion, evidently, was not strong enough to elicit their biases." 
The other four groups in the experiment were given advice that either came from a biased or unbiased model, and that advice was presented in either a "prescriptive" or a "descriptive" form. A biased model would be more likely to recommend police help in a situation involving an African American or Muslim person than would an unbiased model. Participants in the study, however, did not know which kind of model their advice came from, or even that models delivering the advice could be biased at all. Prescriptive advice spells out what a participant should do in unambiguous terms, telling them they should call the police in one instance or seek medical help in another. Descriptive advice is less direct: A flag is displayed to show that the AI system perceives a risk of violence associated with a particular call; no flag is shown if the threat of violence is deemed small. A key takeaway of the experiment is that participants "were highly influenced by prescriptive recommendations from a biased AI system," the authors wrote. But they also found that "using descriptive rather than prescriptive recommendations allowed participants to retain their original, unbiased decision-making." In other words, the bias incorporated within an AI model can be diminished by appropriately framing the advice that's rendered. Why the different outcomes, depending on how advice is posed? When someone is told to do something, like call the police, that leaves little room for doubt, Adam explains. However, when the situation is merely described—classified with or without the presence of a flag—"that leaves room for a participant's own interpretation; it allows them to be more flexible and consider the situation for themselves." Second, the researchers found that the language models that are typically used to offer advice are easy to bias. Language models represent a class of machine learning systems that are trained on text, such as the entire contents of Wikipedia and other web material. 
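The contrast between the two framings can be sketched as a toy rendering step. This is a hedged illustration only: the function and parameter names below are invented, not the study's actual interface, and the point is simply that both framings consume the same underlying model score.

```python
def render_advice(risk_score, style, threshold=0.5):
    """Turn a model's estimated risk of violence into advice text.
    `render_advice`, `risk_score` and `threshold` are invented names
    for illustration, not the study's actual interface."""
    at_risk = risk_score >= threshold
    if style == "prescriptive":
        # Directive framing: tells the participant exactly what to do.
        return "Call the police." if at_risk else "Seek medical help."
    if style == "descriptive":
        # Descriptive framing: surface a flag only; the decision stays human.
        return "FLAG: model perceives a risk of violence." if at_risk else "(no flag)"
    raise ValueError(f"unknown style: {style!r}")
```

The same (possibly biased) score yields either a command or a mere flag, which is the variable the experiment manipulated.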
When these models are "fine-tuned" by relying on a much smaller subset of data for training purposes—just 2,000 sentences, as opposed to 8 million web pages—the resultant models can be readily biased. Third, the MIT team discovered that decision-makers who are themselves unbiased can still be misled by the recommendations provided by biased models. Medical training (or the lack thereof) did not change responses in a discernible way. "Clinicians were influenced by biased models as much as non-experts were," the authors stated. "These findings could be applicable to other settings," Adam says, and are not necessarily restricted to health care situations. When it comes to deciding which people should receive a job interview, a biased model could be more likely to turn down Black applicants. The results could be different, however, if instead of explicitly (and prescriptively) telling an employer to "reject this applicant," a descriptive flag is attached to the file to indicate the applicant's "possible lack of experience." The implications of this work are broader than just figuring out how to deal with individuals in the midst of mental health crises, Adam maintains. "Our ultimate goal is to make sure that machine learning models are used in a fair, safe, and robust way."
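The fragility of fine-tuning on a small, skewed dataset can be illustrated with a deliberately tiny counting model. This is not the neural language models used in the study; the tokens, labels and counts are invented for illustration only.

```python
import math

def police_log_odds(docs, token):
    """Log-odds that a summary containing `token` is labelled
    'call the police' (True) rather than 'seek medical help' (False),
    estimated from raw co-occurrence counts with add-one smoothing."""
    yes = sum(1 for words, label in docs if token in words and label)
    no = sum(1 for words, label in docs if token in words and not label)
    return math.log((yes + 1) / (no + 1))

# Balanced "pre-training" data: the token co-occurs equally with both labels.
base = [(("caller", "muslim"), True), (("caller", "muslim"), False)] * 50
# A small skewed "fine-tuning" batch: ten examples, every one pairing the
# token with the 'call the police' label.
skewed = [(("caller", "muslim"), True)] * 10

before = police_log_odds(base, "muslim")          # no association
after = police_log_odds(base + skewed, "muslim")  # positive association
```

Ten skewed examples against a hundred balanced ones are enough to move the score from zero to a clearly positive association, mirroring how a small biased fine-tuning set can skew a model trained on far more data.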
10.1038/s43856-022-00214-4
Medicine
Building a brain: Pioneering study reveals principles of brain tissue structure, assembly
Structural and developmental principles of neuropil assembly in C. elegans, Nature (2021). DOI: 10.1038/s41586-020-03169-5 , dx.doi.org/10.1038/s41586-020-03169-5 Journal information: Nature
http://dx.doi.org/10.1038/s41586-020-03169-5
https://medicalxpress.com/news/2021-02-scientists-capture-choreography-brain.html
Abstract Neuropil is a fundamental form of tissue organization within the brain 1 , in which densely packed neurons synaptically interconnect into precise circuit architecture 2 , 3 . However, the structural and developmental principles that govern this nanoscale precision remain largely unknown 4 , 5 . Here we use an iterative data coarse-graining algorithm termed ‘diffusion condensation’ 6 to identify nested circuit structures within the Caenorhabditis elegans neuropil, which is known as the nerve ring. We show that the nerve ring neuropil is largely organized into four strata that are composed of related behavioural circuits. The stratified architecture of the neuropil is a geometrical representation of the functional segregation of sensory information and motor outputs, with specific sensory organs and muscle quadrants mapping onto particular neuropil strata. We identify groups of neurons with unique morphologies that integrate information across strata and that create neural structures that cage the strata within the nerve ring. We use high resolution light-sheet microscopy 7 , 8 coupled with lineage-tracing and cell-tracking algorithms 9 , 10 to resolve the developmental sequence and reveal principles of cell position, migration and outgrowth that guide stratified neuropil organization. Our results uncover conserved structural design principles that underlie the architecture and function of the nerve ring neuropil, and reveal a temporal progression of outgrowth—based on pioneer neurons—that guides the hierarchical development of the layered neuropil. Our findings provide a systematic blueprint for using structural and developmental approaches to understand neuropil organization within the brain. Main To elucidate the structural and developmental principles that govern neuropil assembly, we examined the C. elegans nerve ring neuropil, a major site of neuronal integration that contains 181 of the 282 somatic neurons in the adult hermaphrodite 3 . 
The lineage, morphology and synaptic connectivity of all 181 neurons is known 3 , 11 . Network principles and circuit motifs 12 , 13 , 14 , 15 , 16 , 17 , 18 , 19 , 20 as well as cellular and molecular mechanisms of nerve ring formation 21 , 22 , 23 , 24 have been elucidated. However, we lack an understanding of the structural design principles that underlie the architecture and function of the nerve ring neuropil, and the developmental sequence that forms this functional structure. Quantitative analyses of neuropil organization To systematically dissect the organization of the nerve ring neuropil, we analysed previously segmented data 18 , 19 in which more than 100,000 instances of neurite–neurite contacts had been quantified for two published C. elegans electron microscopy neuropil datasets 3 (Fig. 1a ). We focused on contact profiles instead of synaptic connections to reveal both functional and structural neuropil relationships. Fig. 1: Computational detection of a hierarchical tree of neurite organization in the C. elegans neuropil. a , Pipeline for analyses of the C. elegans neuropil. We used published serial section electron microscopy (EM) data 3 and previously quantified neuron–neuron contacts 18 , 19 to generate an adjacency matrix, which was analysed by diffusion condensation (DC) 6 and visualized using C-PHATE 28 . L4 and adult worm outputs were quantitatively compared and stereotypical clusters and outliers identified. b , C-PHATE plot of diffusion condensation analysis for an L4 worm. Individual neurons are located at the edges of the graph and condense as they move towards the centre. The four clusters identified are individually coloured. C3 and C4 are more closely related than C1 and C2 (Extended Data Fig. 2a–d , Supplementary Videos 1 , 2 ). c , Top, volumetric reconstruction of the L4 C. elegans neuropil (from electron microscopy serial sections 3 ) with the four strata individually coloured. 
S1–S4 are stacked along the anterior–posterior axis, and S3 is basal to S4. Bottom, representations of individual strata (Extended Data Fig. 1c–h , Supplementary Video 3 ). d , Volumetric reconstruction of S1 perpendicular looping neurons (highlighted in red). A, anterior; D, dorsal. e , Schematic of d with the trajectory of S2 (in c ) through specific S1 loops. f , As e , but with the trajectories of S3 and S4. g , The looping structure formed by 32 of the 45 S1 neurons, with loops coloured according to encased strata (in c ) (Extended Data Fig. 4 , Supplementary Videos 4 , 5 ). Scale bars, 5 μm ( c , d ). Full size image We generated an adjacency matrix by summing all contact surface areas for each possible neuron pair, and applied a new diffusion condensation 6 (DC) clustering algorithm to iteratively cluster neurons on the basis of the quantitative similarity of the contact profile of each neuron (Fig. 1a ). Unlike other clustering algorithms 25 , 26 , 27 , diffusion condensation condenses data without assuming underlying data structure or forcing a k -way partition. At each iteration, diffusion condensation clusters the data by merging neurons that are within a threshold distance of each other. We then applied C-PHATE, an extension of the PHATE 28 visualization method, to generate an interactive 3D visualization of the iterative diffusion condensation clustering (Fig. 1a, b , Supplementary Methods). By iteratively condensing data points closer to their neighbours, DC/C-PHATE outputs dynamically unveil relationships among the data at varying scales of granularity, from cell–cell to circuit–circuit interactions. Quantitative comparisons of DC/C-PHATE outputs revealed similar—but not identical—clustering patterns between a larva stage 4 (L4) and an adult hermaphrodite nerve ring reconstruction (adjusted Rand index (ARI) of 0.7) (Extended Data Figs. 1a, b , 2f, m, n ), which is consistent with previous qualitative descriptions of the stereotyped C. 
elegans nerve ring 3 and with recent analyses of neurite adjacency differences 18 , 19 (Extended Data Fig. 2e ). Our quantitative analyses of the differences in diffusion condensation output between the larva and adult electron microscopy reconstructions also revealed that the differences were underpinned by biologically relevant changes that occur between these developmental stages (Extended Data Fig. 2a–f, q–s , Supplementary Discussion 3 ). The results of the diffusion condensation analysis of the contact profiles differed from those of the synaptic connectome, and this finding is consistent with structural relationships in the nerve ring being present in the contact profile dataset, but not being represented in the synaptic connectome (Extended Data Fig. 2o, p ). However, the examination of clusters throughout the diffusion condensation iterations of contact profiles revealed known cell–cell interactions and behavioural circuits 13 , 16 , 17 , 29 , 30 , 31 (Fig. 1b, c , Extended Data Fig. 3a, b ). Together, the contact-based multigranular diffusion condensation outputs enabled understanding of cell–cell interactions within the context of functional circuits, and of functional circuits within the context of higher-order neuropil structures. Modularity scores, a measure of cluster separation 32 , were highest in the diffusion cluster iteration that contained four (L4 dataset) to six (adult dataset) clusters (Fig. 1b , Extended Data Fig. 1a, b , Supplementary Videos 1 , 2 , Supplementary Discussion 2 ). Colour-coding the neuron members of the four clusters in the L4 dataset (without unassigned neurons; Supplementary Methods) within the 3D anatomy of the nerve ring revealed that they correspond to distinct, tightly packed layers of neurons within the greater neuropil. These four layers, or strata, stack along the anterior–posterior axis of the worm, encircling the pharynx isthmus. We named these S1, S2, S3 and S4, corresponding to strata 1–4 (Fig. 
1c , Extended Data Fig. 1c–h , Supplementary Video 3 ). Our findings are consistent with those of previous studies that identified an anteroposterior hierarchy of connectivity in the nerve ring 14 . This stratified organization, resolved here at a single-neuron scale, is reminiscent of laminar organizations in the nervous system of Drosophila 33 and in the retina and cerebral cortex of vertebrates 34 , 35 . We noted no clear spaces between the laminar boundaries of the individual strata within the tightly bundled neuropil. However, we identified additional structural features that indicate that these computationally identified strata represent biologically relevant structures. For example, in S1, 32 anterior sensory neurons project axons perpendicular to the neuropil before curling 180° and returning to the anterior limits of the neuropil, where they terminate as synaptic endplates 3 , 36 , 37 (Fig. 1d , Extended Data Fig. 4a–d ). Notably, these neurite loops circumscribed computationally defined boundaries between S2 and S3/S4. The anterior loops encase around 90% of S2, and the posterior loops encase around 84% of S3 and 100% of S4 (Fig. 1d–g , Extended Data Fig. 4e–k , Supplementary Video 5 , Supplementary Table 1 ). Moreover, the looping neurites form a symmetrical structure along the arc of the neuropil, to both demarcate the individual strata and cage all of the strata within the neuropil (Fig. 1g , Extended Data Fig. 4e–h , Supplementary Video 4 ). Sensory information streams in neuropil architecture To understand the functional anatomy of the nerve ring, we first examined axonal positions of the head sensory neurons within the stratified anatomy of the neuropil. There are two main classes of sensory neuron at the anterior buccal tip of the worm: papillary and amphidial sensilla 36 . 
Although these two neuron classes are in close proximity, they are distinguishable by distinct dendritic sensory endings, which are thought to reflect distinct sensory modalities 36 , 37 . Both classes of neuron project axons into the neuropil to transduce sensory information onto the nerve ring 36 , 37 . We found that the papillary axons project to S1 (Fig. 2a–c ), whereas the amphidial axons project to S3 and S4 (Fig. 2a, b, d ). No papillary or amphidial axons project to S2. Therefore, these two distinct sensory organs map onto distinct and specific strata, which indicates the functional segregation of sensory information and processing within the layered structure of the neuropil. Fig. 2: Neuropil architecture reflects functional segregation of sensory and motor outputs. a , Representation of head sensilla in the context of the four strata. The representation is projected over a scanning electron microscopy image of C. elegans (inset; bottom right), and enlarged to show the head sensilla and strata. Image from WormAtlas, produced by and used with permission of R. Sommer. b , Representation of head sensilla, projected over a scanning electron microscopy image of the C. elegans mouth (corresponds to dashed box in lower-right of a ). Image produced by and used with permission of D. Hall. Scale bar, 1 μm. c , Schematic of papillary sensillum trajectories from mouth to neuropil. All papillary neurons cluster into S1. Individual neuron classes are listed at the bottom right. d , As c , but for amphidial sensillum trajectories. BAG head sensory neurons are excluded from the analyses because they are not in a sensillum 3 . e , Model of functional segregation of information streams within the neuropil. Papillary sensory information is processed in S1 and innervates head muscles to control head movement. 
Amphid sensory information is processed in S3 and S4 and links to body muscles (via command interneurons in S3) and neck muscles (via motor neurons in S1 and S2) to control body locomotion 29 , 38 , 42 . VNC, ventral nerve cord. Interneurons cross strata to functionally link these modular circuits (Extended Data Fig. 5 , and detailed version in Extended Data Fig. 3c ). f – i , Volumetric reconstructions of the unassigned rich-club AIB interneurons 15 , 20 in the context of nerve ring strata. Arrows indicate the regions of AIB that border strata. The proximal region of the AIB borders S3 and S4 ( g , h ), and the distal region borders S2 and S3 ( h , i ). The line in f indicates the lateral region of AIB that shifts along the anteroposterior axis to change strata. In h , S4 is transparent to show AIB bordering S3 and S4 (Supplementary Video 6 ). j , Volumetric rendering of the AIB pair. AIB is a unipolar neuron, with presynaptic specializations enriched in the distal region bordering S2 and S3, and postsynaptic specializations enriched in the proximal region bordering S3 and S4. Arrows indicate synaptic transmission flow (Extended Data Fig. 3g–k ). S1, red; S2, purple; S3, blue; S4, green; unassigned, yellow. Full size image We then correlated circuit-based connectomics 3 , 14 with the strata organization to reveal additional design principles of the functional organization of the neuropil. Within S1, the papillary sensory cells—which are mechanosensory or polymodal—control head withdrawal reflex behaviours 38 . Most neurons in S1 are part of shallow circuits formed by papillary sensory cells synapsing onto motor neurons (within S1), or even directly onto head muscles 3 , 38 (Fig. 2e , Extended Data Fig. 3c ). Notably, the S1 circuits retain the symmetry of the papillary sensillum at the interneuron, motor neuron and head neuromuscular synapse level 3 , 36 , 37 , 39 . 
Topographic maps—the ordered projection of sensory information onto effector systems such as muscles—are a fundamental organizational principle of brain wiring across sensory modalities and organisms 40 , 41 . We find that S1 displays a topographic map organization, from the primary sensory layer to the motor output representations (Extended Data Fig. 3d–f ). By contrast, amphid sensory axons—which are associated with plastic behaviours 29 , 42 —innervate S3 and S4. These strata also contain interneurons, but lack motor neurons. Primary and secondary interneurons in S3 and S4 synapse upon motor neurons in S1 and S2 (to innervate head and neck muscles) or upon command interneurons in S3 (that connect to motor neurons which innervate body-wall muscles) (Fig. 2e , Extended Data Fig. 3c ). Therefore, information streams from the S3 and S4 amphid sensory axons segregate to control head and neck muscles (through S1 and S2) and body-wall muscles (through S3). These findings concur with cell ablation, behavioural and connectomic studies 16 , 18 , 19 , 43 , 44 , and with anatomical models that show that the C. elegans neuropil is functionally regionalized along the anteroposterior axis 3 , 18 , 19 . Head-exploration (for example, head-withdrawal reflex) or body-locomotion (for example, chemotaxis) behaviours differentially activate distinct motor strategies in response to sensory information 44 , which is consistent with the modular segregation of the sensory information streams that are now observed for the underlying circuits within the strata. Our observations therefore uncover the somatotopic representations of these behavioural strategies in the architecture of the neuropil, revealing functional design principles in the layered structure of the nerve ring, from sensation to motor outputs. A subset of ‘rich-club’ interneurons bridge strata The four neurite strata (S1–S4) account for 151 of the 181 total neurons in the nerve ring (83%).
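As a minimal sketch, the adjusted Rand index used earlier to compare the L4 and adult clusterings (ARI of 0.7) can be computed from the standard pair-counting definition. This is an illustrative implementation, not the authors' analysis code:

```python
from collections import Counter
from math import comb

def adjusted_rand_index(labels_a, labels_b):
    """Adjusted Rand index between two clusterings of the same items,
    computed from the standard pair-counting contingency table
    (1.0 = identical partitions, ~0 = chance-level agreement)."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    contingency = Counter(zip(labels_a, labels_b))
    sum_ij = sum(comb(c, 2) for c in contingency.values())
    sum_a = sum(comb(c, 2) for c in Counter(labels_a).values())
    sum_b = sum(comb(c, 2) for c in Counter(labels_b).values())
    expected = sum_a * sum_b / comb(n, 2)   # chance-level pair agreement
    max_index = (sum_a + sum_b) / 2
    return (sum_ij - expected) / (max_index - expected)
```

Because the index is chance-corrected, a relabelling of the same partition still scores 1.0, while unrelated partitions score near (or below) zero.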
To further understand the structure of the nerve ring, we examined the 30 neurons that clustered differently between the two examined datasets (herein called ‘unassigned neurons’) (Supplementary Methods). These neurons had one of the following properties: they possessed simple, unbranched processes at boundaries between two adjacent strata (6 neurons); had morphologies that cross strata, such as neurite branches projecting into multiple strata, or single neurites that project across strata (21 neurons); or showed sparse anatomical segmentations (3 neurons) (Extended Data Fig. 5 ). Notably, 6 unassigned neurons had previously been placed in the 14-member C. elegans ‘rich-club’ 15 , 20 . Rich-clubs are a conserved organizational feature of neuronal networks in which highly interconnected hub neurons link segregated modules 15 . The C. elegans rich-club comprises eight command interneurons (including two from our unassigned set) and six nerve ring interneurons (including four from our unassigned set) (Extended Data Fig. 5g ). Additionally, other neurons from our unassigned set—such as RMG and PVR—are hubs of behavioural circuits 43 , 45 , 46 . We examined the unassigned neurons in the context of the strata (Extended Data Fig. 5 ) by focusing on the rich-club interneuron pair of AIBs. The AIB pair was previously shown to morphologically shift between neuronal neighbourhoods 3 , 47 , and we found that the morphology, polarity and position of the AIB neurite are precisely arranged to receive inputs from S3 and S4, and to transduce outputs onto S2 and S3, thereby linking these modular strata. The proximal AIB neurite region lies on the S3/S4 border, while a perpendicular shift of precisely the width of S3 positions the distal region at the S2/S3 border (Fig. 2f–i , Supplementary Video 6 ). 
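The iterative diffusion condensation scheme described earlier (diffuse points toward their neighbours, then merge any that fall within a threshold distance, without forcing a k-way partition) can be sketched in a simplified form. Parameter choices here are illustrative, and this is a sketch of the published idea, not the authors' implementation:

```python
import numpy as np

def diffusion_condensation(X, sigma=1.0, merge_tol=1e-2, max_iters=100):
    """Toy diffusion condensation: repeatedly move points toward their
    neighbours via a row-normalised Gaussian diffusion operator, and
    merge any points that condense within `merge_tol` of each other.
    No cluster count is fixed in advance. Simplified sketch only;
    `sigma`, `merge_tol` and `max_iters` are illustrative parameters."""
    pts = np.asarray(X, dtype=float).copy()
    labels = np.arange(len(pts))        # each point starts as its own cluster
    for _ in range(max_iters):
        # Gaussian affinities between current point positions
        d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(axis=-1)
        P = np.exp(-d2 / (2.0 * sigma ** 2))
        P /= P.sum(axis=1, keepdims=True)   # diffusion operator (rows sum to 1)
        pts = P @ pts                       # one condensation step
        # union the labels of points that have condensed together
        for i in range(len(pts)):
            for j in range(i + 1, len(pts)):
                if np.linalg.norm(pts[i] - pts[j]) < merge_tol:
                    labels[labels == labels[j]] = labels[i]
        if len(set(labels.tolist())) == 1:  # fully condensed
            break
    return labels
```

On two well-separated groups of points, each group condenses onto itself long before the groups approach one another, so the surviving labels recover the groups at that scale of granularity.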
To examine the output performance of the diffusion condensation algorithm, we digitally dissected the AIB neurite into distal and proximal regions and observed—as expected—that the proximal region of AIB specifically clustered with its neighbouring S4, while the distal region clustered with its neighbouring S2 (Extended Data Fig. 3i–k ). The synapses of AIB are similarly partitioned: postsynaptic specializations are primarily in the proximal region in the amphid sensory-rich strata (S3 and S4), whereas presynaptic specializations are localized to the distal region in the motor neuron-rich stratum (S2) (Fig. 2j , Extended Data Fig. 3g, h ). This architecture is consistent with the role of AIB in processing amphid-derived sensory stimuli to mediate locomotory strategies 44 , 48 . Another rich-club interneuron pair, AVE, has a similar morphology to AIB: its proximal neurite region borders S2 and S3, and its distal region borders S1 and S2 (Extended Data Fig. 5a–d , Supplementary Video 7 ). Neurites of other rich-club neurons (RIB and RIA) and ‘unassigned’ neurons (AIZ) similarly shift across the strata (Extended Data Fig. 5a–d, g–u ). Our analyses reveal design principles of the C. elegans neuropil at varying degrees of granularity—from single rich-club neuron morphologies that functionally bridge different strata, to layered strata that segregate sensory–motor information onto somatotopic representations. These design principles are important organizational units in neuroscience: rich-clubs in the context of brain networks 15 , 20 , laminar organization in the context of brain structures 33 , 34 , and topographical maps in the context of vertebrate sensory systems 40 , 41 . Layered strata correlate with neuronal cell migrations To examine the developmental sequence that leads to assembly of the layered nerve ring, we used an integrated platform for long-term, four-dimensional, in vivo imaging of embryos.
The platform achieves isotropic resolution 7 , 8 , 49 , 50 , systematic lineage-tracing 9 , 10 and rendering of cell movements and neuronal outgrowth (represented in the 4D WormGUIDES atlas 51 ; ) (Fig. 3a ). The embryonic atlas was systematically examined for birth order, soma positions and lineage identity for all neurons within the strata (Fig. 3b, c , Extended Data Fig. 6 ). Despite the previous hypothesis that lineage-dependent neuronal soma positions might influence neurite outgrowth into neighbourhoods 47 , we could not detect any relationships between ancestry or newborn-cell position and the final neurite position within the neuropil strata (Fig. 3b , Extended Data Fig. 6 ). Fig. 3: Developmental processes guide layered neuropil assembly. a , Analyses pipeline for C. elegans embryonic neurodevelopment. Embryonic neurodevelopment was imaged by dual-view inverted selective plane illumination microscopy (diSPIM) 7 , 8 , 49 , 50 and cell lineages were determined using StarryNite 9 and AceTree 10 . Neuronal outgrowth and morphology were quantified, and information incorporated into the WormGUIDES atlas 51 . b , WormGUIDES atlas representation of all embryonic neuronal soma positions at 330 mpf. Somas are coloured as in Fig. 1c to show their eventual neurite strata assignment. c , As b , but at 420 mpf. The dashed boxes in b and c represent the final anterior position of the migrating S1 neurons (Extended Data Fig. 7a–h , Supplementary Video 8 ). d , Three-dimensional depth trajectories of S1-cell movements between neuronal cell birth and 420 mpf. e , Comma stage embryo labelled with ubiquitous nhr-2p::membrane::gfp and mCherry::histone . Asterisks denote the three lineaged cells. The image is a single z -slice from a diSPIM arm (three embryos were lineaged; Extended Data Fig. 8a–f , Supplementary Video 9 ). f , WormGUIDES atlas 3D model of the three cells observed in e . 
g , Cartoon of the inset of the three cells observed in e ; asterisks denote somas of lineaged cells. h , Enlargement of the inset in f , with early outgrowing cells identified via lineaging, coloured to highlight cellular locations. i – k , Time-lapse of the outgrowth dynamics of pioneer neurons and schematic (labelled with lim-4p::membrane-tethered::gfp ; lim-4p embryonic expression was previously lineaged to the eight neuron pairs listed in k 51 ). Images are deconvolved diSPIM maximum intensity projections ( n = 16 embryos) (Extended Data Fig. 8g–r , Supplementary Video 10 ). Scale bar, 10 μm ( b – f , i , j ). Full size image Quantification of the positions of individual neurons (belonging to specific strata) in the context of the spatio-temporal dynamics of embryo morphogenesis revealed stereotypical coordinated cell movements that segregated and co-located the cell bodies of future S1 stratum. Cell bodies of neurons that later project onto S1 migrated and co-located to the anterior part of the embryo head (anterior to the future neuropil position), whereas cell bodies of neurons that later project onto S2–S4 migrated to the posterior part of the head (Fig. 3b–d , Extended Data Fig. 7 , Supplementary Video 8 ). For all strata, embryonic soma positions persist until adulthood, and for the future S1 stratum, relate to the cellular morphologies of posteriorly projecting axonal structures within the anterior stratum of the nerve ring 3 , 36 , 37 . In vertebrate embryogenesis, migration of waves of neurons helps to organize the layered architecture of the retina and the brain cortex 52 . We found that co-segregation of S1 somas in early embryogenesis might serve as an initial organizing principle to define the axes for anteroposterior layering, and later functional segregation of the sensory–motor architecture within the neuropil. 
Hierarchical development of the layered neuropil Previous genetic studies that examined formation of the nerve ring demonstrated roles for glia and centrally located pioneering neurons in its development 21 , 22 , 23 , 24 . To build on these findings, we examined neurite outgrowth dynamics during embryonic neuropil formation (Extended Data Fig. 8a–f , Supplementary Video 9 ). At approximately 390–400 minutes post fertilization (mpf) we observed cells sending projections into the area of the future nerve ring (Fig. 3e, g ). Through the simultaneous use of mCherry::histone (to trace the lineage of these cells) and ubiquitous membrane-tethered GFP (to observe outgrowth), we identified six of these cells as three bilateral pairs of neurons—SIAD, SIBV and SMDD—consistent with pioneering neurons previously identified 21 , 22 (the four-letter name represents a left and right bilateral neuron pair; that is, SIADL and SIADR, Fig. 3f, h ). Additional neurons were observed sending neurite projections alongside these pioneers, but dense ubiquitous membrane labelling prevented us from identifying them by lineaging. To confirm the identities of the three lineaged neuron pairs and identify the additional early-outgrowth neurons, we co-labelled embryos with ubiquitous membrane::gfp and a cytoplasmic lim-4p::mCherry reporter gene (lineaged 51 to express in SIAD, SIBV and SMDD (Extended Data Fig. 8g–j ) and in RIV, SAAV, SIAV, SIBD and SMDV)). We found that all eight neuron pairs extend neurites into the future neuropil as a tight bundle at 390–400 mpf (Fig. 3i–k , Supplementary Video 10 ). To further analyse outgrowth timing, we co-labelled embryos with a pan-neuronal::membrane::gfp marker and the lim-4p::mCherry marker, and observed that these eight neuron pairs displayed the earliest outgrowth events for the neuropil (Extended Data Fig. 8k–r , Supplementary Videos 10 , 11 ). All eight neuron pairs belong to a neuronal group that is centrally located in S2 (Fig. 4g–i ). 
To examine the pioneering roles of these neurons in strata formation, we adapted an in vivo split-caspase ablation system 53 that ablated these neuron pairs during embryonic neurodevelopment (Extended Data Fig. 8s–x ). We then quantified neuropil formation via a pan-neuronal::membrane::gfp ( Supplementary Video 12 ). Ablation of the putative eight pioneering neuron pairs resulted in larval stage 1 (L1) arrested worms, and aberrant embryonic neuropils (mean embryonic control volume, 136.6 μm 3 ; compared with ablated neuropil volume, 43.6 μm 3 ) (Fig. 4a–c , Extended Data Fig. 8y, z ). Systematic examination of a representative neuron from each stratum (using cell-specific promoters) revealed that ablation of putative pioneer neurons affected the outgrowth of all examined neurons (Fig. 4d–f , Extended Data Fig. 9 ). In all cases, neurites paused indefinitely near the positions of the ablated pioneer somas. The embryonic organization of neuropil strata therefore seems to be pioneered by a subset of centrally located S2 neurons. Fig. 4: Temporal progression of outgrowth guided by pioneer neurons results in inside-out neuropil development. a , b , Neuropil development in control ( a ) or pioneer-ablated ( b ) embryos monitored by pan-neuronal ceh-48p::membrane-tethered::gfp . Dashed lines represent the control neuropil ( n = 8 embryos; ablation, n = 7 embryos). Maximum intensity projections from one diSPIM arm ( Supplementary Video 12 ). c , Quantification of neuropil volume for control or pioneer-ablated worms. Data are mean ± s.e.m., with individual data points shown (control, n = 8; ablation, n = 7). ** P = 0.0023 by unpaired two-tailed Student’s t -test comparing control and ablation (Extended Data Fig. 8y, z ). d , e , AVL development in control ( d ) or pioneer-ablated ( e ) embryos monitored using an AVL-specific promoter ( lim-6p::gfp ). The dashed line depicts normal AVL outgrowth (control, n = 9 embryos; ablation, n = 7 embryos). 
Deconvolved diSPIM maximum intensity projections are shown ( Supplementary Video 13 ). f , Quantification of neurite outgrowth for the indicated neurons from each stratum in control and pioneer-ablated worms. The values of n represent the number of embryos (AVL, ASH, AIB) or L1 worms (OLL, AIY) scored. * P < 0.05, ** P < 0.01, *** P < 0.001, **** P < 0.0001 by two-sided Fisher’s exact test comparing control and ablation for each neuron (see Supplementary Methods for exact P values) (Extended Data Fig. 9 ). g , Schematic of worm head; the line marks the position of the electron microscopy cross-section shown in h , i . h , Volumetric reconstruction of the C. elegans L4 neuropil. Centrally located S2 pioneer neurons, purple; neuropil, brown; pharynx, grey. The dashed line indicates the width of the neuropil. i , Segmented serial section electron microscopy image 3 , coloured as in h (section corresponds to the L4 worm z -slice 54). S2 pioneers are centrally located in the neuropil. Electron microscopy image used with permission from D. Hall. Scale bar, 2.5 μm. j , Top, analysis of dorsal midline outgrowth for neurons from each stratum. Bottom, a volumetric neuropil reconstruction with the terminal location, along the dorsal midline, of the examined neurons. The values of n represent the number of embryos scored. Data are mean ± s.e.m. **** P < 0.0001 by one-way ANOVA with Tukey’s post hoc analysis comparing pioneer neurons and each representative neuron. For statistical analysis of all pairwise comparisons, see Supplementary Methods (Extended Data Fig. 10a–j ). The colours in d – f , j indicate the strata to which the neurons are assigned. Scale bars, 10 μm ( a , b , d , e ); 5 μm ( h , j ). To understand the role of the pioneer neurons in strata formation, we analysed synchronized recordings of embryonic neurite outgrowth.
An ordered sequence of outgrowth events emerged, in which the timing of neurite arrival at the dorsal midline of the neuropil correlated with the axial proximity of the examined neurites to the centrally located pioneer neurites (Fig. 4j , Extended Data Fig. 10a–j’ , Supplementary Videos 13 – 15 ). Our findings extend observations on the hierarchical formation of the neuropil 22 , placing the ordered sequence of events within the context of the strata. Temporal correlation was specific to arrival at the dorsal midline, but not to initiation of outgrowth from the soma. Notably, the S4 neuron AWC was observed initiating outgrowth at 390–400 mpf—a similar time to the pioneering neurons (Fig. 3i , Extended Data Fig. 10k ). However, instead of entering the nerve ring with the pioneer neurons, AWC neurites paused for 20.6 min (s.e.m. ±2.8 min) near the pioneering SAAV somas before entering the nerve ring (Fig. 3i, j , Extended Data Fig. 10k–s ). This pausing point corresponds to the stalling point seen in our pioneer neuron ablation studies (Extended Data Fig. 9c, h, v, y ). Therefore, although the initial outgrowth events for some neurons occur simultaneously, neurites extend to—and pause at—specific nerve ring entry sites. The temporal sequence of neurite entries into the nerve ring continues throughout embryogenesis. For example, both the neurites of strata-crossing AIBs and the looping S1 neurons outgrow after the neuropil has formed a ring structure (at around 420 mpf) and after all the representative neurons of the four strata have reached the dorsal midline (around 460 mpf) 50 (Figs. 3 i, j, 4j , Extended Data Fig. 10f–j′, t, u′′ ).
Our observations suggest an inside-out developmental model in which the strata are assembled through the timed entry of their components: a pioneering bundle founds centrally located S2; then other S2 neurons enter, followed by peripherally located S1 (anterior) and S3 and S4 (posterior) neurons; followed by the outgrowth of neurons that link the strata, such as the S1 looping neurons or the neurons that cross strata (such as AIB). Lamination is a conserved principle of organization within brains 33 , 34 . Segregation of functional circuits into layers underpins information processing in sensory systems and higher order structures 35 . In this study we resolve these conserved features of brain organization at a single-cell level and in the context of the nerve ring neuropil, thereby linking fundamental design principles of neuropil organization with the developmental processes that underpin their assembly. Our findings provide a blueprint for the synergistic integration of structural connectomic analyses and developmental approaches to systematically understand neuropil organization and development within brains. Reporting summary Further information on research design is available in the Nature Research Reporting Summary linked to this paper. Data availability The datasets generated during and/or analysed during the study are available from the corresponding author upon request. To facilitate exploration of the placement of neurites in the C-PHATE diagrams, we have generated a 3D interactive version of the C-PHATE plots. Plots can be downloaded, and neurite condensation and position can be examined. These 3D interactive versions enable identification of any neuron within the C-PHATE plot and provide the iteration number and total neurons found within any cluster. See Supplementary Discussion 2 for instructions on how to access the data. Source data are provided with this paper. 
Code availability Electron micrograph segmentation adjacency analysis code is available in ref. 18 . Diffusion condensation analysis code 6 is available at . C-PHATE visualization code is available to download at .
Understanding how the brain works is a paramount goal of medical science. But with its billions of tightly packed, intermingled neurons, the human brain is dauntingly difficult to visualize and map, which can provide the route to therapies for long-intractable disorders. In a major advance published next week in Nature, scientists for the first time report the structure of a fundamental type of tissue organization in brains, called neuropil, as well as the developmental pathways that lead to neuropil assembly in the roundworm C. elegans. This multidisciplinary study was a collaboration between five laboratories, including scientists at the Marine Biological Laboratory (MBL), Woods Hole, which hosted much of the collaboration. "Neuropil is a tissue-level organization seen in many different types of brains, from worms to humans," says senior author and MBL Fellow Daniel Colón-Ramos of Yale School of Medicine. "When things are that conserved in nature, they are important." "But trying to understand neuropil structure and function is very challenging. It's like looking at a spaghetti bowl," Colón-Ramos says. "Hundreds of neurons are on top of each other, touching each other, making thousands of choices as they intermingle through different sections of the animal's brain. How can you describe neuropil organization in a way that's comprehensible? That is one of the contributions of this paper." The authors focused on the neuropil in the C. elegans nerve ring, a tangled bundle of 181 neurons that serves as the worm's central processing unit. Through an innovative melding of network analysis and imaging strategies, they revealed that the nerve ring is organized into four layers, or strata. These strata, they showed, contain distinct domains for processing sensory information and motor behaviors. They were able to map the worm's sensory organs and muscle quadrants onto the relevant strata. 
The team also discovered unique neurons that integrate information across strata and build a type of "cage" around the layers. Finally, they showed how the layered structure of the neuropil emerges in the developing worm embryo, using high-resolution light-sheet microscopy developed by MBL Fellow Hari Shroff of the National Institute of Biomedical Imaging and Bioengineering, and MBL Investigator Abhishek Kumar. "This is a paradigm shift where we combined two fields—computational biology and developmental biology—that don't often go together," says first author Mark Moyle, associate research scientist in neuroscience at Yale School of Medicine. "We showed that by using computational approaches, we could understand the neuropil structure, and we could then use that knowledge to identify the developmental processes leading to the correct assembly of that structure." This approach can serve as a blueprint for understanding neuropil organization in other animal brains, the authors state. Volumetric reconstruction of the L4 C. elegans neuropil (from EM serial sections) with neurons from the four strata highlighted (S1-Red, S2-Purple, S3-Blue, S4-Green). Credit: Mark Moyle et al., Nature, 2021. From Buildings to Boroughs to New York City C. elegans has the best understood nervous system of all animals. More than 30 years ago, John White, Sydney Brenner, and colleagues published the worm's "connectome"—a wiring diagram of its 302 neurons and the ~7,000 synaptic connections between them. Since that pioneering study, nearly every neuron in C. elegans has been characterized: its shape, functional category, the neural circuits it participates in, and its developmental cell lineage. What was missing, though, was a picture of how these cells and circuits integrate in space and over time. Colón-Ramos and team analyzed published data on all the membrane contacts between the 181 neurons in the nerve ring. 
They then applied novel network analyses to group cells into "neighborhoods" based on their contact profiles—similar in principle to algorithms that Facebook uses to suggest friends based on people's common contacts. This revealed the neuropil's layered structure and enabled the team to understand cell-cell interactions in the context of functional circuits, and functional circuits in the context of higher-order neuropil structure (See video 1). "All of a sudden, when you see the architecture, you realize that all this knowledge that was out there about the animal's behaviors has a home in the structure of the brain," Colón-Ramos says. "By analogy, rather than just having knowledge of the East Side of New York and the West Side, Brooklyn and Queens, suddenly you see how the city fits together and you understand the relationships between the neighborhoods." "So now we could see, 'OK, this is why these behaviors are reflex-like, because they are direct circuits that go into the muscles. And this is how they integrate with other parts of the motor program.' Having the structure allows you to generate new models regarding how information is being processed and parceled out to lead to behaviors," Colón-Ramos says. Time-lapse of the outgrowth dynamics of the C. elegans nerve ring followed by a 3D rotation of the last timepoint to highlight the neuropil, which is the bright ring structure in the anterior part of the embryo (top). Embryo twitches during later development. Neuropil intensity increases over time, probably due to an increase in the number of neurons entering the neuropil. Images are deconvolved diSPIM maximum intensity projections. Credit: Mark Moyle et al., Nature, 2021 Reconstructing the Birth and Development of the Nerve Ring The brain is a product of development, starting with one embryonic cell division and ending with a complete organ. "An order emerges through time. So our next question was, how can you instruct the formation of a layered structure?
How are all these decisions simultaneously occurring in hundreds of cells, but resulting in organized layers? How are the decisions coordinated through time and space?" Colón-Ramos says. "Layered structures are a fundamental unit of brain organization—the retina is a layer, and the cortex is a layer. If we could understand it for the worm, it would allow us to create models that might help us understand the development of layers in other vertebrate organs, like the eye," Colón-Ramos says. This part of the research began in 2014 when Colón-Ramos and Moyle began collaborating with microscope developers Shroff and Kumar at the MBL. "We started by building a microscope (the diSPIM) that let us look at the embryo with better spatial and temporal resolution than the tools of the time," Shroff says. They then identified every cell in the C. elegans embryo using lineaging approaches developed by co-author Zhirong Bao of Sloan Kettering Institute (these findings are catalogued at WormGUIDES.org). "This was a painful process, but very important to do," Shroff says. After years of sharing a lab at the MBL, numerous adjustments to the diSPIM system, integrations with other critically important technology, and plenty of frustration, the collaborators succeeded in resolving the developmental sequence of the C. elegans neuropil and revealing principles that guide its stratified organization (see video 2). "This would have been impossible without the long-term, gentle imaging of the diSPIM," Colón-Ramos says. "In developing the technology, many changes seemed incremental but in fact were very enabling, allowing us to do something we couldn't do before. Often the changes we needed fell between two disciplines with different vocabularies, and it required prolonged, focused, exhaustive conversations to identify them. That is what our collaboration at the MBL enabled."
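The neighborhood-grouping idea described in the article, clustering neurons by the similarity of their membrane-contact profiles, can be illustrated with a toy sketch. The contact matrix, similarity threshold, and greedy linking routine below are all illustrative stand-ins; the study's actual pipeline applied diffusion condensation to the full electron-microscopy contact data of the 181 nerve-ring neurons.

```python
import numpy as np

# Toy contact matrix: contacts[i, j] = 1 if neuron i touches neuron j.
# Purely illustrative values, not data from the study.
contacts = np.array([
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 1, 0, 0, 0, 0],
    [0, 0, 0, 0, 1, 1],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 0],
], dtype=float)

# Cosine similarity of contact profiles: neurons that touch the same
# partners score highly, the "friends in common" idea from the article.
unit = contacts / np.linalg.norm(contacts, axis=1, keepdims=True)
sim = unit @ unit.T

def neighborhoods(sim, threshold=0.4):
    """Greedily link neurons whose contact profiles are similar (union-find)."""
    n = len(sim)
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i
    for i in range(n):
        for j in range(i + 1, n):
            if sim[i, j] >= threshold:
                parent[find(i)] = find(j)
    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())

print(neighborhoods(sim))   # two neighborhoods: [0, 1, 2] and [3, 4, 5]
```

With the toy matrix above, the two blocks of mutually touching neurons fall into two separate neighborhoods, mirroring how shared contact partners define the strata in the paper.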
10.1038/s41586-020-03169-5
Physics
Devitrification demystified: Scientists show how glass crystallizes in real-time
Divya Ganapathi et al. Structure determines where crystallization occurs in a soft colloidal glass, Nature Physics (2020). DOI: 10.1038/s41567-020-1016-4 Journal information: Nature Physics
http://dx.doi.org/10.1038/s41567-020-1016-4
https://phys.org/news/2020-09-devitrification-demystified.html
Abstract Glass is inherently unstable to crystallization. However, how this transformation occurs while the dynamics in the glass stay frozen at the particle scale is poorly understood. Here, through single-particle-resolved imaging experiments, we show that due to frozen-in density inhomogeneities, a soft colloidal glass crystallizes via two distinct pathways. In the poorly packed regions of the glass, crystallinity grew smoothly due to local particle shuffles, whereas in the well-packed regions, we observed abrupt jumps in crystallinity that were triggered by avalanches—cooperative rearrangements involving many tens of particles. Importantly, we show that softness—a structural-order parameter determined through machine-learning methods—not only predicts where crystallization initiates in a glass but is also sensitive to the crystallization pathway. Such a causal connection between the structure and stability of a glass has so far remained elusive. Devising strategies to manipulate softness may thus prove invaluable in realizing long-lived glassy states. Main Elucidating the microscopic underpinnings of devitrification—the transformation of a glass to a crystal—besides being of fundamental interest in many branches of science 1 , 2 , 3 , 4 , 5 , is essential for the formulation of stable glasses that find numerous applications in industry 6 , 7 . Unlike the more conventional process of crystallization from a supercooled liquid 8 , 9 , 10 , 11 , 12 , 13 , which requires macroscopic particle diffusion, the pathways through which a system that is structurally arrested at the particle scale crystallizes remain unclear, largely due to the slow and seemingly stochastic nature of the dynamics 14 , 15 . 
In a system of monodisperse hard spheres, simulations 14 have found that crystallization in poorly annealed glasses began soon after quench and was autocatalytic: local particle shuffles, without cage-breaking, helped transform amorphous regions into compact crystals that subsequently enhanced mobility in nearby regions, thus creating a positive feedback that aided further growth. Conversely, simulations on well-annealed/mature hard-sphere glasses have uncovered a qualitatively different behaviour 15 , 16 , 17 , 18 . Long quiescent periods were interspersed with abrupt increases in crystallinity, and these jumps were concurrent with avalanches in nearby regions, in which many tens to a few hundred particles participated in cooperative rearrangements. Avalanche particles underwent displacements about a third of their size, and these were mostly not the ones that crystallized. Since avalanches occurred even in a polydisperse system, where crystallization was well suppressed, it was concluded that avalanches mediate crystallization, and not vice versa. This crystallization pathway has now been found in simulations of glasses with Lennard-Jones 17 and soft 18 particle interactions as well. Some of these studies also found that devitrification is linked to the underlying structure. Not only does crystallization preferentially occur in regions that already possess a form of partial order called medium-range crystalline order (MRCO) 15 , 17 , avalanches themselves were found to be spatially correlated with ‘soft spots’: regions where rearrangements are likely to occur in a glass 19 . In fact, simulations have found avalanche initiator particles with lower values of local density and bond-orientational order than the rest of the system 17 . A correlation between structural and dynamical heterogeneities and crystal nucleation has indeed been found in scattering experiments on deeply supercooled colloidal hard-sphere liquids 20 . 
However, in these studies, no crystallization was seen in the glassy regime. Never has devitrification been directly observed in experiments at the single-particle level. Direct imaging of this process is not possible in atomic and molecular systems. While micrometre colloidal hard spheres have proven invaluable as experimental models in probing many condensed-matter phenomena 12 , 13 , in the context of devitrification, this is not the system of choice. Avalanches are thought to help the system navigate from metabasin to metabasin 21 , and because these transitions are rare events, capturing them already requires long and continuous imaging experiments. In hard spheres, where particles cannot overlap, numerical studies have found that avalanches are even rarer than in soft spheres where some small degree of overlap is possible 18 . The crystallization mechanism, nevertheless, was identical in both of these systems. Guided by these observations, here, we investigate the devitrification of colloidal glasses made of micrometre monodisperse soft particles. To capture the elusive avalanche events, we performed particle-resolved confocal microscope imaging of large systems for long times. In our two-dimensional (2D) imaging experiments, a horizontal slice in the bulk of the three-dimensional (3D) glass sample contained at least N ≈ 10 4 particles (depending on the particle volume fraction ϕ ), while in our 3D experiments, we imaged in excess of N ≈ 6 × 10 4 particles (see Materials and methods). Our experiments typically spanned 5 × 10 5 τ B to 2 × 10 6 τ B , where \({\tau }_{{\mathrm{B}}}=\frac{{\sigma }^{3}\uppi \eta }{8{k}_{{\mathrm{B}}}T}\) is the Brownian time of a freely diffusing particle. Here, η is the solvent viscosity, k B is the Boltzmann constant, T is the temperature and σ is the diameter of the particle. 
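As a rough illustration of the timescales involved, the Brownian time defined above can be evaluated for plausible parameter values. The numbers below (a 1 μm particle in a water-like solvent at room temperature) are assumptions for the sake of the example, not values reported in the study.

```python
import math

def brownian_time(sigma, eta, T):
    """tau_B = pi * eta * sigma**3 / (8 * kB * T): the time taken by a
    freely diffusing particle to move a distance comparable to its size."""
    kB = 1.380649e-23  # Boltzmann constant, J/K
    return math.pi * eta * sigma**3 / (8 * kB * T)

# Assumed illustrative values: sigma = 1 um, eta = 1 mPa s (water), T = 298 K.
tau_B = brownian_time(sigma=1e-6, eta=1e-3, T=298.0)
print(f"tau_B ~ {tau_B:.3f} s")                    # about 0.1 s
print(f"5e5 tau_B ~ {5e5 * tau_B / 3600:.0f} h")   # experiment span in hours
```

For these assumed values, a run of 5 × 10 5 τ B to 2 × 10 6 τ B corresponds to roughly half a day to a couple of days of continuous imaging, consistent with the long experiments described in the text.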
Even though soft inter-particle interactions probably hasten devitrification 18 , both the system size studied and the experiment duration exceeded those of simulations, even those on hard spheres 15 , 17 . Since there can be appreciable particle overlap in soft spheres, the glass transition volume fraction in these systems is larger than in colloidal hard spheres. Recent experiments and simulations on soft colloids have identified a critical volume fraction, ϕ o , below which the structural relaxation time, τ α , grows in a manner identical to that in liquids of hard spheres 22 , 23 . Where these two systems differ is for ϕ ≥ ϕ o . In this regime, τ α is infinite for the hard-sphere case, while for soft spheres, it remains finite and grows weakly with ϕ due to intermittent internal stress relaxation events 23 . While there is as yet no consensus on whether ϕ o corresponds to the glass or the jamming transition of soft spheres 22 , the operational glass transition, defined in ref. 23 as the particle volume fraction at which \(\frac{{\tau }_{\alpha }}{{\tau }_{{\mathrm{B}}}}>1{0}^{5}\) , occurs at a density ϕ OpGl ⪅ ϕ o . This definition for the glass transition, which is similar in spirit to that used in atomic/molecular systems, sidesteps issues arising from defining the glass transition by subscribing to specific theories such as mode-coupling theory (MCT). In fact, recent experiments on colloidal hard spheres have shown that τ α remains finite even beyond the predicted MCT singularity 24 . To compare with previous numerical studies on devitrification, we first identified the regime of densities where our system is expected to show hard-sphere-like behaviour. Starting from a highly compressed amorphous state ( ϕ ≈ 0.91), we measured the radial distribution function, g ( r ), upon systematic dilution of the sample (Fig. 1a ).
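A minimal sketch of how g ( r ) can be computed from imaged particle coordinates is given below, here for a 2D slice with periodic minimum-image distances. The box size, bin width, and ideal-gas test configuration are illustrative choices, not the study's parameters.

```python
import numpy as np

def radial_distribution(points, box, dr=0.1, r_max=5.0):
    """g(r) for particles in a periodic 2D box (positions in units of sigma)."""
    n = len(points)
    rho = n / (box[0] * box[1])                    # number density
    edges = np.arange(0.0, r_max + dr, dr)
    counts = np.zeros(len(edges) - 1)
    for i in range(n):
        d = points - points[i]
        d -= box * np.round(d / box)               # minimum-image convention
        r = np.hypot(d[:, 0], d[:, 1])
        counts += np.histogram(r[r > 0], bins=edges)[0]
    shell = np.pi * (edges[1:] ** 2 - edges[:-1] ** 2)   # 2D shell areas
    g = counts / (n * rho * shell)                 # normalize to ideal gas
    return 0.5 * (edges[1:] + edges[:-1]), g

# Sanity check on an ideal gas: g(r) should fluctuate around 1.
rng = np.random.default_rng(1)
pts = rng.uniform(0.0, 20.0, size=(1500, 2))
r, g = radial_distribution(pts, np.array([20.0, 20.0]))
print(g[(r > 2) & (r < 4)].mean())   # close to 1
```

The height of the first peak, g 1 , is then simply the maximum of g near the nearest-neighbour distance; tracking it against ϕ locates the maximum used in the text to identify ϕ o .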
Previous studies have found that the competing contributions from entropy and energy result in a maximum in the first peak of g ( r ), g 1 , at ϕ = ϕ o (refs. 22 , 23 ). For our system, g 1 was observed to be a maximum at ϕ ≈ 0.82, which we identified with ϕ o (inset to Fig. 1a ). Samples with ϕ > ϕ o did not show any signs of crystallization over 5 × 10 6 τ B (15 days), and we turned our attention to the ϕ ≤ ϕ o regime (grey shaded region in the inset to Fig. 1a ). Fig. 1: Crystallization of a soft colloidal glass. a , Pair correlation function at various particle volume fractions. Inset shows the height of the first peak of g ( r ) as a function of ϕ . The maximum in g 1 is at ϕ o = 0.82, and the dashed line represents the upper bound for ϕ OpGl . Crystallites appeared for particle volume fractions only in the grey shaded region over the duration of the experiment. b , MSD scaled by σ 2 , shown in black, and self-intermediate scattering function ( F s ( q , t )) evaluated at the first peak of g ( r ), shown in violet, for ϕ = 0.74 (squares) and ϕ = 0.82 (circles). The dashed line represents 0.14 σ displacement. c , X ( t ) for ϕ = 0.74 from 3D imaging experiments (green line) and for ϕ = 0.82 from 2D imaging experiments (orange line). For ϕ = 0.82, X ( t ) is for a small area within our field of view where a new crystallite emerged during the experiment. The coloured lines are averages of X ( t ) over 300 s to smoothen out fluctuations in bond order due to cage rattling. The grey lines are instantaneous values of X ( t ). d – f , Bond-orientational order Q 6 of particles at the start ( d ), during ( e ) and end ( f ) of the experiment for the ϕ = 0.74 sample. Particles in purple are crystalline ( Q 6 > 0.25), and green ones are amorphous ( Q 6 ≤ 0.25). 
The glassy behaviour of the ϕ = 0.74 and ϕ = 0.82 samples is apparent in the single-particle mean-squared displacement (MSD) and the self-intermediate scattering function, with both exhibiting a plateau over many decades due to strong particle caging (Fig. 1b ). Both of these quantities were calculated over an initial time t ini = 4 × 10 4 s of the experiment, over which g ( r ) remained unchanged (Extended Data Fig. 1 ), and we did not observe formation of any crystal nuclei. Further, both of these samples are indeed in the glassy regime as \(\frac{{\tau }_{\alpha }}{{\tau }_{{\mathrm{B}}}}>1{0}^{5}\) . Since the ϕ = 0.63 sample is only deeply supercooled ( \(\frac{{\tau }_{\alpha }}{{\tau }_{{\mathrm{B}}}}\approx 1{0}^{4}\) , see Supplementary Fig. 2 ), the upper bound for ϕ OpGl is 0.74 (dashed line in the inset to Fig. 1a ). Even though the average particle displacements do not exceed 0.14 σ (dashed line in Fig. 1b ), which is the cage size of the particle, we found evidence of crystal nucleation and growth in both of the samples (see Supplementary Video 1 ). We quantified the evolution of crystallinity in the samples through the solid particle fraction X ( t ) as a function of time t (see Materials and methods for definitions of solid/crystalline particles). In Fig. 1c , we show X ( t ) for the ϕ = 0.74 and ϕ = 0.82 samples. For ϕ = 0.74 (green line), X ( t ) was small and grew very slowly over a relatively long span of nearly 4 × 10 5 τ B from the start of the experiment and then transitioned to a rather steep increase. Snapshots of the sample at the beginning and end of the experiment, with crystalline particles indicated in purple, are shown in Fig. 1d,f , respectively. As crystalline particles are better packed than amorphous ones, the volume that is freed when a substantial fraction of the sample has crystallized aids in further particle reorganization, which substantially speeds up the devitrification process at later times 14 , 15 .
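The two caging diagnostics used above, the MSD and the self-intermediate scattering function, can be sketched from particle trajectories as follows. These are simple time-origin-averaged estimators; restricting F s to a single wavevector direction is an illustrative simplification, and the caged-rattling demo uses assumed parameters.

```python
import numpy as np

def msd(traj):
    """Time-origin-averaged mean-squared displacement.
    traj has shape (frames, particles, dims)."""
    T = traj.shape[0]
    return np.array([np.mean(np.sum((traj[dt:] - traj[:-dt]) ** 2, axis=-1))
                     for dt in range(1, T)])

def self_isf(traj, q):
    """Self-intermediate scattering function F_s(q, t), estimated for a
    wavevector of magnitude q along x (illustrative simplification)."""
    T = traj.shape[0]
    return np.array([np.mean(np.cos(q * (traj[dt:, :, 0] - traj[:-dt, :, 0])))
                     for dt in range(1, T)])

# Caged particles rattling around fixed sites produce the MSD plateau
# described in the text (here with an assumed 0.05-sigma rattle amplitude).
rng = np.random.default_rng(0)
sites = rng.uniform(0.0, 10.0, size=(1, 50, 2))
traj = sites + 0.05 * rng.normal(size=(200, 50, 2))
print(msd(traj)[-1])   # plateau near 4 * 0.05**2 = 0.01
```

In the experiment, q is taken at the first peak of g ( r ), and the plateau height of the MSD reflects the cage size of roughly 0.14 σ quoted in the text.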
This trend in X ( t ) parallels the behaviour of deeply supercooled hard-sphere liquids in the ϕ > ϕ MCT regime, where ϕ MCT is the mode-coupling glass-transition volume fraction 4 , 14 . The lack of an energy barrier towards crystallization, as is expected in the deeply supercooled regime, is evident from the crystallite shapes, which appear to be more ramified than compact (Fig. 1e and Extended Data Fig. 2 ) 4 , 14 , 25 . At ϕ = 0.82, crystallization was exceedingly slow and only a few small crystallites grew over the experiment duration (orange line in Fig. 1c and Supplementary Fig. 3 ), so we turned our attention back to the ϕ = 0.74 sample. The slow but steady growth in X ( t ) immediately after quench at ϕ = 0.74 is consistent with numerical findings for the crystallization of a freshly prepared/poorly annealed glass 14 , but not with a ‘mature’ one where it is avalanche mediated 15 , 16 , 17 , 18 . This is not entirely surprising as particles in our laboratory-prepared glass are not as efficiently packed as those in mature glasses simulated using a protocol that has no obvious parallel in experiment 15 , 16 , 17 , 18 . However, since avalanches are spatially localized events, typically involving a few hundred particles at most, it is also possible that the change in crystallinity they produced is being masked when X ( t ) is calculated for the entire system. To determine whether this was the case, we capitalized on our real-space imaging approach and calculated X ( t ) only for those spatial regions where we had observed individual crystallites to nucleate and grow. By doing so, we identified two distinct crystallization pathways (Fig. 2a ); trajectories representative of each of these are shown in Fig. 2b,c and Extended Data Fig. 3 . Both occurred with roughly equal probability, and importantly, different regions of the sample predominantly followed one of these pathways. For the region of interest (ROI) shown in Fig.
2d , X ( t ) ROI shows a step-like growth (Fig. 2b ), while for the ROI shown in Fig. 2e , the increase in X ( t ) ROI is smoother (Fig. 2c ). To distinguish between these pathways, we first normalized each X ( t ) profile by its maximum value, as the extent to which X ( t ) grew for each of the 12 ROIs over the experiment duration was different. We then quantified the normalized X ( t ) profile through two measures. In the first measure, for each ROI and at each time instant, we calculated the difference d between the normalized X ( t ) profile and a straight line connecting its end points. We then defined R = ∑ d 2 , where the summation runs over the entire trajectory. For step-like growth (Fig. 2b ), R is naturally larger than the smooth growth case (Fig. 2c ) where the deviation of the normalized X ( t ) from the straight line is small. In the second measure, for each ROI, we calculated the local slope of the normalized X ( t ) over a time interval d t = 2 × 10 4 s and then determined the maximum value of the slope, m max , for a given trajectory (Extended Data Fig. 4 ). In Fig. 2a , we plot m max versus R for each of the 12 ROIs identified in our field of view. We clearly see two distinct groups, each having six ROIs, with the smooth growth trajectories having lower values of both m max and R (open circles) in comparison with step-like growth ones (filled circles). The results were not sensitive to the cut-offs used to define bond order parameters (Supplementary Fig. 4 ). Importantly, we observed that the nature of cooperative particle displacements associated with these trajectories is also vastly different. Panels 1 and 3 in Fig. 2d show the 2D bond order parameter ψ 6 and the 2D magnitude of displacement for individual particles, Δ x , scaled by particle diameter σ , \(\frac{\Delta x}{\sigma }\) , before and after an abrupt jump in X ( t ) ROI , respectively (regions 1 and 3 in Fig. 2b ). Particle colours represent the magnitude of these quantities. 
Panel 2 shows \(\frac{\Delta x}{\sigma }\) during the jump (brown shaded region 2 in Fig. 2b ). Here, we see a substantial fraction of particles in the ROI (~150) undergoing cooperative displacements of magnitude \(\frac{\Delta x}{\sigma }>0.3\) , reminiscent of avalanches 15 , 17 , following which we see both the growth of an existing crystallite and the emergence of new ones (see Supplementary Video 2 ). Figure 2e , the equivalent of Fig. 2d for the three regions labelled in Fig. 2c , shows qualitatively different behaviour. The growth in X ( t ) is accompanied by local particle shuffles 14 , and cooperative displacements are more string-like (panel ii of Fig. 2e ). Further, these string-like displacements almost never occur in the crystal interior and are restricted to the bulk and the crystal/glass interface, with the latter aiding layer-by-layer crystal growth (see Supplementary Video 3 ). Fig. 2: Crystal growth pathways in a colloidal glass. a , m max versus R for each of the 12 crystallization trajectories X ( t ) for the ROIs within the field of view. The size of the ROI is chosen to be proportional to the final size of the crystal. Crystallization through avalanche-mediated growth (filled circles) and smooth growth (open circles) are spatially clustered, indicating the existence of two devitrification pathways. b , c , Time evolution of X ( t ) averaged over 6,000 s of a representative section of the field of view, which exhibits a step-like/avalanche growth ( b ) and smooth growth ( c ). Visualization of particle displacements in brown shaded region (2 in b ) and blue shaded region (ii in c ) depict the qualitative difference between collective particle motions associated with the two different pathways. d , e , ψ 6 and displacement maps over the windows labelled 1, 2 and 3 in b ( d ) and i, ii and iii in c ( e ). The displacement maps were obtained by tracking particle displacements over 6,000 s. 
Particle colours represent the magnitudes of ψ 6 and the scaled displacement. f , g , The probability distribution of scaled displacements averaged over all ROIs, where we observed avalanche-mediated ( f ) and smooth ( g ) crystal growths; \(P(\frac{\Delta x}{\sigma })\) for all particles is shown by black squares, while open and filled circles show particles that underwent large positive (Δ ψ 6 > 0.25) and negative (Δ ψ 6 < −0.25) changes in their bond order values, respectively. The vertical dashed lines indicate the avalanche threshold, Δ x / σ = 0.3. h , g ( r ) averaged over all ROIs, before crystallization, where avalanche (solid red line) and smooth (dotted blue line) growth later occurred. We gained quantitative insights into these growth pathways through the probability distribution of displacements, \(P(\frac{\Delta x}{\sigma })\) . These distributions were calculated by averaging over all avalanche (Fig. 2f ) and smooth growth (Fig. 2g ) events, six each, over a time window of 2 × 10 4 s—the typical duration of an avalanche event. For both of the pathways, while \(P(\frac{\Delta x}{\sigma })\) for the overall particle displacement distribution in the ROIs decayed monotonically (black squares), the distribution for those that underwent positive (open circles) or negative (filled circles) changes in their bond order, Δ ψ 6 , is peaked at a value well below the avalanche threshold, Δ x / σ = 0.3 (vertical dashed line in Fig. 2f,g ) 17 . Further, the area under these curves yields the total number of particles participating in such events, and almost 85% of the particles that underwent large changes in bond order have displacements smaller than the avalanche threshold. This indicates that the particles that participate in either avalanches or string-like particle displacements are not the ones that primarily crystallize, which is in line with numerical results 15 , 17 .
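The two trajectory measures introduced earlier, R and m max , are straightforward to compute from a crystallinity trace X ( t ). The sketch below applies them to synthetic smooth and step-like profiles; the profiles and the slope window are illustrative, not the experimental data.

```python
import numpy as np

def classify_growth(X, dt_window):
    """Compute the two measures used to separate the pathways:
    R, the summed squared deviation of the normalized X(t) from the straight
    line joining its end points, and m_max, the largest local slope."""
    X = (X - X.min()) / (X.max() - X.min())        # normalize the profile
    t = np.linspace(0.0, 1.0, len(X))
    line = X[0] + (X[-1] - X[0]) * t               # end-point straight line
    R = np.sum((X - line) ** 2)
    m_max = ((X[dt_window:] - X[:-dt_window]) / dt_window).max()
    return R, m_max

# Synthetic profiles: smooth growth versus a single abrupt, avalanche-like jump.
X_smooth = np.linspace(0.0, 1.0, 200)
X_step = np.where(np.arange(200) < 100, 0.0, 1.0)
print(classify_growth(X_smooth, 10))   # small R, small m_max
print(classify_growth(X_step, 10))     # large R, large m_max
```

Plotting m max against R for many regions of interest then separates the two pathways into the two clusters seen in Fig. 2a.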
Thus, avalanches and string-like particle motions nudge nearby particles sufficiently and cause large changes in their bond order. Strikingly, the difference in area under the curve for +Δ ψ 6 and −Δ ψ 6 particles, as well as the difference in their peak heights, although finite for both pathways, is substantially larger when crystallization is avalanche mediated (Fig. 2f ). Thus, avalanche events often result in large and sudden increases in X ( t ). In contrast, string-like particle displacements promote crystal growth only marginally more than they disrupt it, as evidenced by the comparable areas and peak heights of +Δ ψ 6 and −Δ ψ 6 particles (Fig. 2g ); although such displacements are more frequent, X ( t ) grows only slowly. Avalanches are expected to occur in regions where particles are better packed, and we next checked whether the crystallization pathway was correlated with density inhomogeneities. Since we know both where and how devitrification occurred at later times, we determined whether the local structure of these regions before crystallization was different for the two pathways. Figure 2h shows g ( r ) averaged over all ROIs where avalanche-mediated devitrification (solid line) and smooth growth (dashed line) later occurred. While the pair correlation functions are indistinguishable at large r , g 1 , which is proportional to the number of particles in the first nearest-neighbour shell, is larger for the former than for the latter and suggests that avalanches indeed occur in regions that are more densely packed. Importantly, our findings show that avalanche-mediated crystallization 15 , 17 is more generic than originally thought and occurs in poorly annealed glasses as well (see Supplementary Video 4 ). The observed correlation between structural heterogeneities and the nature of cooperative particle displacements suggests that the region where crystalline nucleation occurs in a glass may not be entirely random, but is instead correlated with the structure.
Since two-point density correlators are too coarse to identify these regions a priori, that is, before crystallization occurs, we sought other structural measures. Using machine-learning methods, recent simulations have discovered a structural-order parameter called softness that is predictive of dynamics in supercooled liquids and glasses. Following refs. 26 , 27 , we used particle trajectory data from our 2D imaging experiments to first construct a training set (~6% of the total system size) comprising equal numbers of ‘soft’ particles—those that are about to rearrange—and ‘hard’—those that are not. For each of these particles, we quantified the local structural environment through M = 47 structure functions (see Materials and methods). Each function formed an orthogonal axis of an M -dimensional space \({{\mathbb{R}}}^{M}\) , and the local environment for each particle is thus a point in \({{\mathbb{R}}}^{M}\) . We next used the support-vector machines method to find the best hyperplane separating these two groups. The softness S i for particle i was defined as the shortest distance between its location in \({{\mathbb{R}}}^{M}\) and the hyperplane. Particles were labelled soft if S i > 0 and hard otherwise. We calculated S i for all particles in the field of view and for all frames over t ≤ t ini . The top 10% of the softest and hardest particles in each frame were identified and separately clustered if they were within a particle diameter of each other. We restricted our attention to cluster sizes N S ≥ 6. The top and bottom panels of Fig. 3a show the contour plots of the number of times, n , that soft and hard particle clusters appeared over t ini . These regions are spatially localized, and as expected, soft and hard patches show poor overlap. Figure 3a also shows crystalline particles (white spheres) at t = 2.4 × 10 5 s (experiment end) for a thin 3D volume centred around the 2D slice for which we had evaluated softness.
Strikingly, even though softness was estimated over only the first sixth of the experiment duration, crystallites emerged primarily in regions with S i > 0. To gauge whether softness or MRCO was a better predictor of where crystallization occurred, for t ≤ t ini , we first identified the top 10% of particles with a high bond order value at each time instant as MRCO particles, clustered them and considered only cluster sizes ≥6 (Extended Data Fig. 5 ). We then binned the field of view into 15 σ × 15 σ boxes and calculated the time-averaged normalized density \({\hat{\rho }}_{i}\) of soft, hard and MRCO particle clusters in each box i over t ini (Fig. 3b , Materials and methods). For crystalline particles, \({\hat{\rho }}_{i}\) was calculated at the end of the experiment. Crystallinity has a stronger spatial correlation with soft particle clusters than with MRCO ones. The spatial configurational overlap between crystallinity and various particle cluster types was found to be 0.47 (soft), 0.17 (MRCO) and 0.02 (hard). This suggests that the structural propensity to reorganize at localized regions plays a more dominant role than precrystalline ordering in devitrification. Our findings were not overly sensitive to the cut-offs used for identifying MRCO particles (Supplementary Figs. 5 – 7 ). To determine whether softness was also sensitive to the crystallization route, we first examined whether soft particle cluster sizes in ROIs corresponding to each of these pathways were different. This was not the case, and the average soft particle cluster size varied between 9 and 12 particles for all ROIs. Next, we determined whether n and the average value of softness 〈 S 〉 of these clusters (mean S i of each cluster) averaged over t ini for the two pathways were different. In Fig. 3c , we plot n versus 〈 S 〉 for the same 12 ROIs discussed in Fig. 2 . 
Both of these quantities indeed forecast the crystallization pathway—ROIs that showed the smooth crystal growth (open circles) have a larger n and 〈 S 〉 than ROIs where crystallization was due to avalanches (filled circles). Thus, string-like particle displacements that were more frequent and responsible for smooth growth are associated with a larger n and 〈 S 〉, and sporadic avalanche events that resulted in steep changes in X ( t ) are associated with a smaller n and 〈 S 〉 (Fig. 2 ). Fig. 3: Softness predicts where crystallization occurs in a glass. Machine-learning results for the ϕ = 0.74 sample from two independent experiments ( a , d ). Results from the 2D imaging experiment are shown in a . Rendered coordinates in white represent crystalline particles at the end of the experiment. A 3D stack captured immediately after the 2D imaging experiment was used to carry out the 3D bond order analysis and identify crystalline particles in the imaging plane of the 2D experiment. Pink (top panel) and blue (bottom panel) patches are soft and hard particle clusters, respectively, obtained from machine-learning methods over the first sixth of the total experiment duration. Darker and lighter shades of pink and blue represent the number of times these clusters appeared (see colour bar). The growth in X ( t ) during this period was negligible. b , \({\hat{\rho }}_i\) for various particle types. The orange line represents MRCO clusters, green and grey lines represent soft and hard particle clusters, respectively. The clusters were identified over an initial time window of 40,000 s of the experiment. The purple line represents crystalline particles at the end of the experiment. Black dashed lines drawn at \({\hat{\rho }}_{i}=1\) indicate average densities. The vertical blue dashed lines represent boxes with a high degree of crystallinity. 
c , The avalanche growth (filled symbols) and smooth growth (open symbols) pathways are spatially clustered, indicating that softness can also forecast the crystallization route. The error bars represent s.e.m. in the measurement of average softness of the clusters. d , Crystalline particles (white) and soft (left, pink) and hard (right, blue) particle clusters for a representative 2D slice from a 3D stack obtained from a 3D imaging experiment. The trained machine from the 2D imaging experiment ( a ) was directly used to identify soft and hard particles. Crystallites predominantly emerged in regions with a high softness. That there is a causal link between structure and where crystallization subsequently occurs in a glass is apparent in Fig. 3d , where we present results from an independent 3D imaging experiment for ϕ = 0.74. Here, without taking recourse to particle trajectory information, we directly used the machine-learning model built earlier (2D imaging experiment at ϕ = 0.74, Fig. 3a ) to classify particles as either soft or hard. Figure 3d shows the contour plots of densities of soft and hard particle clusters over the first sixth of the experiment duration for a representative 2D slice of the 3D volume. Crystallites (shown in purple) predominantly emerge in regions with a high value of softness (Extended Data Fig. 6 ), strengthening our claims. Our experiments provide fundamental insights into the mechanisms by which a glass devitrifies. We found that due to local density inhomogeneities, crystallization in different regions of the glass proceeded either through local particle shuffles or the avalanche-mediated route. While both of these pathways operate simultaneously in our experiments, simulations have observed one or the other pathway, depending on the numerical protocol used to prepare the glass 14 , 15 , 17 .
Discerning between these pathways, however, requires dissecting the growth of individual crystallites and is feasible only in real-space imaging experiments and not ensemble-averaged ones. Our most striking finding is how structure hidden in a glass, which can be captured by the softness parameter, decides its fate. A natural step moving forward would be to investigate whether the time evolution of the softness field also determines crystal growth fronts and hence the final shapes of the crystallites. From an industrial and technological perspective, it is plausible that present methods for making more stable glasses do so by inadvertently tuning the softness in a glass. The clear link between the structure and stability of a glass demonstrated here opens up new ways to control glass stability. Methods Experimental details Our experimental system consisted of poly( N -isopropylacrylamide) (PNIPAm) hydrogel microspheres of σ = 1.53 μm with polydispersity <5% suspended in water. The particles were fluorescently labelled with rhodamine 6G (99%, Sigma) for confocal imaging. To achieve the required quench to form a glass, the suspension of PNIPAm microspheres was concentrated by removing excess water through repeated centrifugation and by adding controlled volumes, typically 1 μl of 1 mM of NaOH solution, to facilitate swelling of particles. The suspension was then loaded in a cylindrical cell (5 mm in diameter and 5 mm in height) and sealed with oil to prevent changes in ϕ due to evaporation. The sample was stirred well to suppress the presence of precrystalline nuclei. We checked that our samples were indeed amorphous before starting our experiments by performing a bond order analysis of particles. Heterogeneous crystallization of monodisperse particles from the smooth walls of the container was suppressed by coating it with three layers of polydisperse PNIPAm particles, thus providing surface roughness at the length scale of the particle. 
All experiments were performed at 31.5 °C, which is well below the lower critical solution temperature of T LCST ≈ 35 °C of PNIPAm microgel particles in water with rhodamine 6G dye present (Supplementary Fig. 1 ). The samples were imaged using a Leica DMi8 SPII confocal microscope with 552 nm laser excitation. 2D imaging was performed on a horizontal section of the 3D sample at a distance of 25 μm from the amorphous boundary to avoid wall effects. The adaptive focus control feature of the microscope was employed to maintain the focus on the observed plane throughout the experiment. Frame rates were varied from 0.05 Hz to 0.008 Hz, depending on ϕ . The size of the field of view was 108 σ × 108 σ (4,096 pixels by 4,096 pixels) and consisted of ~11,000 particles. 3D stacks were recorded every 120 s for ϕ = 0.74 and ϕ = 0.82 over a volume of 56 σ × 56 σ × 13 σ , which contained ~60,000 particles. Depending on the dynamics of our system, the duration of our experiments was varied from 48 h to 129 h. To improve tracking, raw data were preprocessed in ImageJ to correct drifts using image registration. Particle trajectories of both 2D and 3D experiments were determined using standard MATLAB algorithms 28 . Codes developed in-house were used to perform subsequent analysis and create videos (Supplementary Section A ). For machine learning, MATLAB versions of support-vector machine-learning algorithms were used from the LIBSVM package 29 . Dynamics We calculated average MSD, \(\langle | \Delta {{\bf{r}}(t)}|^{2}\rangle\) , and the self-intermediate scattering function, F s ( q , t), over an initial time interval in which there was no noticeable change in the radial distribution function (Extended Data Fig. 
1 ) using $$\langle | \Delta {{\bf{r}}(t)} | ^{2}\rangle =\langle\frac{1}{N}\mathop{\sum }\nolimits_{i = 1}^{N} | {\bf{r}}_{i}(t+t_{0})-{\bf{r}}_{i}(t_{0}){| }^{2}\rangle$$ (1) and $${F}_{{\mathrm{s}}}({\bf{q}},t)=\langle\frac{1}{N}\mathop{\sum }\nolimits_{i = 1}^{N} {{\mathrm{e}}}^{i{\bf{q}}\cdot({\bf{r}}_{i}(t+t_{0})-{\bf{r}}_{i}(t_{0}))}\rangle$$ (2) where N is the total number of particles, r i ( t ) is the position of the i th particle at t , and q corresponds to the wave vector at the first peak of the radial distribution function. Identification of crystalline/solid-like particles in 2D and 3D through bond order parameters Calculation of 2D bond order parameter Single-particle bond-orientational order in 2D is given by $${\psi }_{6k}=| \frac{1}{{N}_{{\mathrm{b}}}(k)}\mathop{\sum }\nolimits_{j = 1}^{{N}_{{\mathrm{b}}}(k)}{{\mathrm{e}}}^{i6{\theta }_{jk}}|$$ (3) where the summation is performed over the nearest neighbours ( N b ( k )), which consists of all of the particles within a distance of 1.4 σ from the k th particle; θ j k is the angle between a fixed axis ( x axis in our case) and the bond between j th and k th particles. A particle was labelled solid-like if its ψ 6 and that of at least two of its neighbours exceeded a threshold value of 0.75. Calculation of 3D bond order parameter Steinhardt bond-orientational order 30 of the l -fold symmetry (in our case, l = 6 for hexagonal symmetry) is defined by the q l m ( i ) of particle i : $${q}_{lm}(i)=\frac{1}{{N}_{{\mathrm{b}}}(i)}\mathop{\sum }\nolimits_{j = 1}^{{N}_{{\mathrm{b}}}(i)}{Y}_{lm}(\theta ({{\bf{r}}_{\bf{ij}}}),\phi ({{\bf{r}}_{\bf{ij}}}))$$ (4) where Y l m are the spherical harmonics, and θ ( r i j ) and ϕ ( r i j ) are the polar angles cast by the line joining i th and j th particles. 
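As a concrete reference for the 2D order parameter of equation (3), here is a minimal numpy sketch (the paper's own analysis used MATLAB); the hexagon at the end is a toy sanity check, not experimental data.

```python
import numpy as np

def psi6(points, sigma=1.0, r_cut=1.4):
    """|ψ6| per particle for 2D positions (sketch of equation (3)).

    Neighbours of particle k are all particles within r_cut * sigma of it;
    θ_jk is the bond angle measured from the x axis. |ψ6| = 1 for a particle
    with a perfectly hexagonal neighbour shell.
    """
    psi = np.zeros(len(points))
    for k, pk in enumerate(points):
        d = points - pk
        r = np.hypot(d[:, 0], d[:, 1])
        nbr = (r > 0) & (r < r_cut * sigma)   # exclude the particle itself
        if nbr.any():
            theta = np.arctan2(d[nbr, 1], d[nbr, 0])
            psi[k] = np.abs(np.exp(1j * 6 * theta).mean())
    return psi

# Sanity check: a particle surrounded by a perfect hexagon has ψ6 = 1
hexagon = np.array([[np.cos(a), np.sin(a)] for a in np.arange(6) * np.pi / 3])
pts = np.vstack([[0.0, 0.0], hexagon])
print(psi6(pts)[0])   # → 1.0 up to floating point
```

The solid-like label then follows by thresholding, as described above: a particle with ψ6 > 0.75, at least two of whose neighbours also exceed 0.75, is counted as solid-like.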
The rotational invariant form of equation ( 4 ) used to describe the crystal structure is $${q}_{l}(i)=\sqrt{\frac{4\uppi }{2l+1}\mathop{\sum }\nolimits_{m = -l}^{m = l}| {q}_{lm}(i){| }^{2}}$$ (5) We followed Lechner and Dellago 30 and defined coarse-grained bond-orientational order over the first neighbour shell as $${Q}_{lm}(i)=\frac{1}{{\tilde{N}}_{{\mathrm{b}}}(i)}\mathop{\sum }\nolimits_{j = 1}^{{\tilde{N}}_{{\mathrm{b}}}(i)}{q}_{lm}(j)$$ (6) where the summation now includes the i th particle along with its neighbours, and \({\tilde{N}}_{{\mathrm{b}}}(i)={N}_{{\mathrm{b}}}(i)+1\) . The rotational invariant form, Q l , is obtained in the same manner as equation ( 5 ). A particle was labelled solid-like if its Q 6 and that of at least two of its neighbours exceeded a threshold value of 0.25 (refs. 17 , 30 ). We defined these cut-offs from the distribution of Q 6 values at the beginning and end of the experiment (Supplementary Fig. 6 ). Identification of particle rearrangements from trajectories A typical particle trajectory in a glass exhibits intermittent dynamics, as shown by rescaled displacements in Extended Data Fig. 7 . It is typically marked by long quiet periods interspersed with sudden large displacements that lead to rearrangements. Such events are identified using the ‘hop’ function 26 , 27 , which is computed from particle trajectories as $${p}_{{\mathrm{hop}},i}(t)={[{\langle {({\bf{r}}_{i}-{\langle {\bf{r}}_{i}\rangle }_{B})}^{2}\rangle }_{A}{\langle {({\bf{r}}_{i}-{\langle {\bf{r}}_{i}\rangle }_{A})}^{2}\rangle }_{B}]}^{1/2}$$ (7) where averages were calculated over the time intervals A ≡ [ t − δ t /2, t ] and B ≡ [ t , t + δ t /2]. The time interval δ t should correspond to the time required for particles to undergo displacements larger than cage size and is typically chosen to be at the end of the plateau region of the MSD to capture the particle cage-breaking events. Following ref. 
27 , we used δ t = 6,000 s for ϕ = 0.74, which lies on the plateau region of our MSD, as shown by the black squares in Fig. 1b . Hopping events manifest as well-separated peaks in p hop, i ( t ), reflective of intermittent dynamics (Extended Data Fig. 7 ). To obtain the optimal threshold value, p th , that determines whether a peak in p hop is a rearrangement, we plotted the distribution of hop events in Extended Data Fig. 8a . The histogram of p hop, i ( t ) exhibits a change in slope from a saturating profile at low values, a regime dominated by cage-rattling events, to a fast-decaying profile associated with rare hopping events. The power-law fit to the tail of the distribution in the hopping regime was observed to deviate at a transition point p hop ≈ 0.02. Following ref. 27 , we consider p hop = 0.02 as the value p th in our experiments that best separates the two regimes. We then examined the dependence of average residence times, 〈 t R 〉, as a function of different threshold values of p hop in Extended Data Fig. 8b . A rapid increase in 〈 t R 〉 was observed for the lowest values of p hop , and it remained almost constant for p hop > 0.01. The chosen p th value lies in the region where 〈 t R 〉 does not change noticeably with p hop , thus providing reassurance on our choice of the threshold. The distribution of hopping distances for p th = 0.02 is shown in Extended Data Fig. 8c . The identified hopping durations in our experiments are mostly distributed around 6,000 s, as shown in Extended Data Fig. 8d , which is also the timescale over which avalanches were visualized in our experiments (Fig. 2d and Supplementary Video 2 ). We then determined hard and soft particles by analysing their p hop profiles. In our experiments, a particle is considered soft at t if p hop / σ 2 has a peak exceeding 0.1 at the next immediate time step t + δ t . Thus, a particle is soft if it is on the verge of a rearrangement. 
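Equation (7) translates directly into code. The sketch below runs it on a toy trajectory with a single cage jump; the trajectory and window length are illustrative, not taken from the data.

```python
import numpy as np

def p_hop(traj, dt):
    """Hop indicator of equation (7) for a single particle trajectory.

    traj: (T, d) array of positions at uniform frames; dt: half-window length
    in frames (δt/2). Averages are taken over A = [t − δt/2, t] and
    B = [t, t + δt/2]; a peak in the output marks a cage-breaking event.
    """
    T = len(traj)
    out = np.zeros(T)
    for t in range(dt, T - dt):
        A, B = traj[t - dt:t + 1], traj[t:t + dt + 1]
        mA, mB = A.mean(axis=0), B.mean(axis=0)
        a = ((A - mB) ** 2).sum(axis=1).mean()   # <(r − <r>_B)^2>_A
        b = ((B - mA) ** 2).sum(axis=1).mean()   # <(r − <r>_A)^2>_B
        out[t] = np.sqrt(a * b)
    return out

# Toy trajectory: the particle sits at the origin, then hops by one diameter
traj = np.zeros((40, 2))
traj[20:, 0] = 1.0
p = p_hop(traj, dt=5)
print(int(p.argmax()))   # the peak sits at the hop, near frame 20
```

For a quiescent trajectory the output stays near zero (cage rattling), so thresholding the peaks at p_th, as described above, separates genuine rearrangements from cage noise.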
Hard particles are those that stay in the cage formed by the same set of neighbours for very long times, that is, several avalanche timescales. The quiescent period is quantified by a residence time defined as the time interval between two consecutive peaks in p hop / σ 2 (Extended Data Fig. 7 ). We distinguish genuine displacements from cage-rattling events by restricting our attention to only those peaks having p hop / σ 2 > 0.02, which is large compared with the cage size. Determination of softness using machine learning A challenge in glass physics is to connect the dynamics of the particle with its local structure. Considerable progress in this direction has been achieved recently by defining a ‘softness’ field associated with the underlying amorphous structure that is shown to be strongly correlated with particle dynamics 26 . The objective of using machine learning in our experiment is to identify whether a structural signature present in glass leads to crystal formation at later times. For this purpose, we considered an initial time period of about 11 h of the experiment duration (68 h) for ϕ = 0.74, where g ( r ) remained unchanged and showed features characteristic of the amorphous phase (Extended Data Fig. 1 ). We did not find evidence of any crystal nucleation over t ini , and the machine learning was also performed only over this duration. To obtain a softness field, we first created a training set consisting of equal numbers of hard and soft particles. These are randomly selected from all particles that satisfy the required criteria defined by p hop in t ini . 
We then characterized their local structural environment by the radial density function 26 , 27 , defined as $$G(i;\mu )={\sum }_{j\ne i}{{\mathrm{e}}}^{-{\left(\frac{{R}_{ij}-\mu }{0.1\sigma }\right)}^{2}}$$ (8) where R i j is the distance between particles and μ is a set of probing radii; G i probes the structure around the particle by looking at the density of particles at various radial distances from it. In our experiments, μ is varied from 0.4 σ to 5.0 σ in 0.1 σ intervals, resulting in a total of 47 structure functions for each particle. Hard particles are labelled y i = −1 and soft y i = 1. Thus, each particle is represented by ( G i , y i ), with G i collectively describing the local neighbourhood of the particle as a vector in 47-dimensional space, and y i indicating the class (either soft or hard) to which it belongs. A reasonable separation of these two sets of particles using machine learning indicates a close association of structure with its dynamics. For this, we employ a MATLAB version of a support-vector machine-training algorithm using the LIBSVM package 29 to identify the best hyperplane w ⋅ G − b = 0 that maximally separates hard and soft particles of the training set in the 47-dimensional space. Here, w is a normal vector and b /| w | is the offset from origin to the hyperplane. In practice, however, finding a plane that perfectly separates hard and soft particles is not possible. Nevertheless, an optimal separating hyperplane can still be obtained by minimizing $$\frac{1}{2}{{\bf{w}}}^{{\mathrm{T}}}\cdot {\bf{w}}+C\mathop{\sum }\nolimits_{i = 1}^{N}{\xi }_{i}$$ (9) subject to constraints y i ⋅ ( w T ⋅ ϕ ( G i ) + b )≥1 − ξ i and ξ i ≥ 0, where ξ i is the slack variable and the superscript ‘T’ indicates the transpose. The control parameter C is a trade-off between wrong classification of the particle and the margin size of the hyperplane (smallest distance from the particle in the training set). 
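The feature construction of equation (8) and the subsequent softness evaluation can be sketched as follows. The particle positions are synthetic, and w and b are random placeholders standing in for the LIBSVM-trained hyperplane parameters; only the μ grid (0.4σ to 5.0σ in 0.1σ steps, M = 47) comes from the text.

```python
import numpy as np

def structure_functions(points, i, sigma=1.0):
    """Radial structure functions G(i; μ) of equation (8) for particle i.

    μ runs from 0.4σ to 5.0σ in 0.1σ steps, giving M = 47 features; each
    feature is a Gaussian-weighted count of neighbours at radial distance μ.
    """
    mus = np.arange(0.4, 5.01, 0.1) * sigma
    r = np.linalg.norm(points - points[i], axis=1)
    r = r[r > 0]   # exclude particle i itself
    return np.exp(-((r[None, :] - mus[:, None]) / (0.1 * sigma)) ** 2).sum(axis=1)

rng = np.random.default_rng(1)
pts = rng.uniform(0, 10, size=(400, 2))
G = structure_functions(pts, i=0)

# Softness is the signed distance from the separating hyperplane, S = w·G − b;
# w and b here are hypothetical values in place of the trained classifier.
w, b = rng.normal(size=47), 0.0
S = w @ G - b
print(G.shape)   # → (47,)
```

In the actual pipeline, G for every training particle feeds the support-vector machine, and the learned (w, b) is then applied to all particles in the field of view.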
A larger value of C reduces incorrect classification. We obtained a training score of 72.5% for ϕ = 0.74, and varying the C value used in our experiment from one to ten altered the score only at the decimal level. The softness of each particle is given by S i = w ⋅ G i − b , which determines its distance from the dividing hyperplane. Thus, the machine learns to associate structure with rearrangements by observing the training set and predicts rearrangements for the rest of the particles by looking only at their local environments. The instantaneous softness values lie in the range [−3, 3]. Inclusion of another class of structure functions based on bond angles defined in ref. 26 neither improved the training score nor changed the softness field. The support-vector machines model for classification obtained above was then applied to 2D slices of our 3D experiment, which helped us to further strengthen our claims. Further, two-point correlation functions were not able to distinguish the top 10% hard and soft particles obtained from machine learning from the rest of the system (see Supplementary Fig. 8 ). Thus, the local structure around a particle in an amorphous solid is not apparent in two-body correlations alone, as is well known. Computing normalized densities The entire field of view is gridded into 15 σ × 15 σ boxes. We then calculated local time-averaged densities ( ρ i ) of soft particle clusters in each box (averaged over t ini ), and normalized each box density with respect to the density of the whole field of view ( ρ ). Thus, a normalized density \({\hat{\rho }}_{i}={\rho }_{i}/\rho\) greater than 1 indicates a relative excess density of soft particles. Similar \({\hat{\rho }}_{i}\) values were calculated for hard and MRCO particles. For crystalline particles, \({\hat{\rho }}_{i}\) values were averaged over 300 s at the end of the experiment to smooth out bond order parameter fluctuations due to cage rattling.
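The box-density normalization described above can be sketched as follows. Positions and soft-particle labels are synthetic; the 15σ box comes from the text (with σ set to 1), and the 90σ field of view is chosen so the boxes tile it exactly.

```python
import numpy as np

def normalized_box_density(points, flags, box, extent):
    """Normalized density ρ̂_i of one particle type on a grid of boxes.

    points: (N, 2) positions in [0, extent)^2; flags: boolean mask for the
    type (soft, hard, MRCO or crystalline). Each box count is divided by the
    mean count per box, so ρ̂_i > 1 marks a relative excess of that type.
    """
    nbins = int(extent // box)
    edges = np.linspace(0.0, nbins * box, nbins + 1)
    hist, _, _ = np.histogram2d(points[flags, 0], points[flags, 1],
                                bins=[edges, edges])
    return hist / (flags.sum() / nbins**2)

rng = np.random.default_rng(2)
pts = rng.uniform(0, 90, size=(2000, 2))
soft = rng.random(2000) < 0.1          # hypothetical soft-particle labels
rho_hat = normalized_box_density(pts, soft, box=15.0, extent=90.0)
print(rho_hat.mean())   # → 1.0 by construction
```

In the analysis above, the same map computed per frame and averaged over t_ini gives the curves in Fig. 3b.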
Calculation of spatial configurational overlap for hard, soft and MRCO particles with crystalline particles To calculate the spatial configurational overlap between various particle types, we gridded the whole field of view into 1 σ boxes and evaluated whether a given box is occupied by hard, soft or MRCO particles for each snapshot of our observation over t ini . If a box is occupied in at least 15 such snapshots, then the value of occupancy for the box by particle type is 1 and 0 otherwise. We also evaluated the occupancy of crystalline particles at the end of the experiment by similarly gridding the field of view. We then calculated the correlation between the boxes occupied by crystalline particles and the corresponding boxes for the other particle types. For the 2D slices of the 3D experiments, the results of the analysis are discussed in the main text. The spatial configurational overlap between crystallinity and various particle cluster types for the 3D imaging experiments shown in Fig. 3d was found to be 0.29 (soft), 0.16 (MRCO) and 0.07 (hard). In addition, the overlap between soft and hard particle clusters was found to be 0.08. Data availability Source data are available for this paper from the corresponding authors upon reasonable request.
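The overlap calculation above reduces to comparing two binary occupancy maps on the same 1σ grid. The text does not spell out the exact correlation measure, so the sketch below uses a Pearson correlation between flattened 0/1 grids as one plausible reading; the 108 × 108 grid matches the field of view from the Methods, but the occupancy maps themselves are invented.

```python
import numpy as np

def occupancy_overlap(occ_a, occ_b):
    """Overlap between two binary occupancy maps on the same 1σ grid.

    Computes the Pearson correlation between the flattened 0/1 grids —
    an assumption, since the text does not name the exact overlap measure.
    """
    a = occ_a.ravel().astype(float)
    b = occ_b.ravel().astype(float)
    return np.corrcoef(a, b)[0, 1]

# Hypothetical 108 × 108 grids of 1σ boxes: crystalline occupancy versus a
# partially overlapping soft-cluster occupancy map.
rng = np.random.default_rng(3)
crystal = rng.random((108, 108)) < 0.2
soft = crystal ^ (rng.random((108, 108)) < 0.3)
print(round(occupancy_overlap(crystal, soft), 2))
```

Identical maps give an overlap of 1, independent maps give values near 0, matching the ranking reported above (soft 0.47, MRCO 0.17, hard 0.02 for the 2D experiment).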
Glass is amorphous in nature—its atomic structure does not involve the repetitive arrangement seen in crystalline materials. But occasionally, it undergoes a process called devitrification, which is the transformation of a glass into a crystal—often an unwanted process in industries. The dynamics of devitrification remain poorly understood because the process can be extremely slow, spanning decades or more. Now, a team of researchers led by Rajesh Ganapathy, Associate Professor at the Jawaharlal Nehru Center for Advanced Scientific Research (JNCASR), in collaboration with Ajay Sood, DST Year of Science Chair and Professor at the Indian Institute of Science (IISc), and their Ph.D. student Divya Ganapathi (IISc) has visualized devitrification for the first time in experiments. The results of this study have been published in Nature Physics. "The trick was to work with a glass made of colloidal particles. Since each colloidal particle can be thought of as a substitute for a single atom, but being ten thousand times bigger than the atom, its dynamics can be watched in real-time with an optical microscope. Also, to hasten the process we tweaked the interaction between particles so that it is soft and rearrangements in the glass occurred frequently," says Divya Ganapathi. In order to make a glass, Divya Ganapathi and the team jammed the colloids together to reach high densities. The researchers observed different regions of the glass following two routes to crystallization: an avalanche-mediated route involving rapid rearrangements in the structure, and a smooth growth route with rearrangements happening gradually over time. To gain insights into these findings, the researchers then used machine learning methods to determine if there was some subtle structural feature hidden in the glass that a priori decides which regions would later crystallize and through what route.
Despite the glass being disordered, the machine learning model was able to identify a structural feature called "softness" that had earlier been found to decide which particles in the glass rearrange and which do not. The researchers then found that regions in the glass which had particle clusters with large "softness" values were the ones that crystallized and that "softness" was also sensitive to the crystallization route. Perhaps the most striking finding of the study was that when the authors fed their machine learning model pictures of a colloidal glass, it accurately predicted the regions that crystallized days in advance. "This paves the way for a powerful technique to identify and tune 'softness' well in advance and avoid devitrification," says Ajay Sood. Understanding devitrification is crucial in areas like the pharmaceutical industry, which strives to produce stable amorphous drugs as they dissolve faster in the body than their crystalline counterparts. Even liquid nuclear waste is vitrified as a solid in a glass matrix to safely dispose of it deep underground and prevent hazardous materials from leaking into the environment. The authors believe that this study is a significant step forward in understanding the connection between the underlying structure and stability of glass. "It is really cool that a machine learning algorithm can predict where the glass is going to crystallize and where it is going to stay glassy. This could be the initial step for designing more stable glasses like the Gorilla Glass on mobile phones, which is ubiquitous in modern technology," says Rajesh Ganapathy. The ability to manipulate structural parameters could usher in new ways to realize technologically significant long-lived glassy states.
10.1038/s41567-020-1016-4
Biology
Scientists create new genomic resource for improving tomatoes
The tomato pan-genome uncovers new genes and a rare allele regulating fruit flavor, Nature Genetics (2019). DOI: 10.1038/s41588-019-0410-2 , www.nature.com/articles/s41588-019-0410-2 Journal information: Nature Genetics
http://dx.doi.org/10.1038/s41588-019-0410-2
https://phys.org/news/2019-05-scientists-genomic-resource-tomatoes.html
Abstract Modern tomatoes have narrow genetic diversity limiting their improvement potential. We present a tomato pan-genome constructed using genome sequences of 725 phylogenetically and geographically representative accessions, revealing 4,873 genes absent from the reference genome. Presence/absence variation analyses reveal substantial gene loss and intense negative selection of genes and promoters during tomato domestication and improvement. Lost or negatively selected genes are enriched for important traits, especially disease resistance. We identify a rare allele in the TomLoxC promoter selected against during domestication. Quantitative trait locus mapping and analysis of transgenic plants reveal a role for TomLoxC in apocarotenoid production, which contributes to desirable tomato flavor. In orange-stage fruit, accessions harboring both the rare and common TomLoxC alleles (heterozygotes) have higher TomLoxC expression than those homozygous for either and are resurgent in modern tomatoes. The tomato pan-genome adds depth and completeness to the reference genome, and is useful for future biological discovery and breeding. Main Tomato is one of the most consumed vegetables worldwide with a total production of 182 million tons worth more than US$60 billion in 2017 ( ). A reference genome sequence was released 1 and has greatly facilitated scientific discoveries and molecular breeding of this important crop. Cultivated tomato ( Solanum lycopersicum L.) has experienced severe bottlenecks during its breeding history, resulting in a narrow genetic base 2 . However, modern cultivated tomatoes exhibit a wide range of phenotypic variation 3 and metabolic diversity 4 , mainly because of natural and human breeding-mediated introgressions from wild relatives 5 , in addition to spontaneous mutations that have also contributed to this seeming paradox 3 . Consequently, individual cultivars are expected to contain alleles or loci that are absent in the reference genome 6 . S. 
lycopersicum L. can be further divided into two botanical types: large-fruited tomatoes S. lycopersicum var. lycopersicum (SLL) and cherry-sized early domesticates S. lycopersicum var. cerasiforme (SLC). Following the release of the tomato reference genome, hundreds of diverse cultivated and wild tomato accessions have been resequenced, and the resulting data have been analyzed to reveal genomic changes through the history of tomato breeding. This has led to identifying specific genome regions targeted by human selection 7 , 8 , 9 , 10 . Notably, in these studies, reported genomic variation was revealed through mapping of short reads to the reference genome, an activity whose very nature ignores sequence information that is absent from the reference genome, precluding the discovery of previously unknown loci and highly divergent alleles. A pan-genome comprising all genetic elements from cultivated tomatoes and their wild progenitors is crucial for comprehensive exploration of domestication, assessment of breeding histories, optimal utilization of breeding resources and a more complete characterization of tomato gene function and potential. We constructed a tomato pan-genome using the ‘map-to-pan’ strategy 11 , based on resequencing data of 725 accessions belonging to the Lycopersicon clade, which consists of S. lycopersicum L. and its close wild relatives, Solanum pimpinellifolium (SP), and S. cheesmaniae and S. galapagense (SCG). The pan-genome captured 4,873 additional genes not in the reference genome. Comparative analyses using the constructed pan-genome revealed abundant presence/absence variations (PAVs) of functionally important genes under selection and identified a rare allele defined by promoter variation in the tomato lipoxygenase gene, TomLoxC . TomLoxC is known to influence fruit flavor by catalyzing the synthesis of lipid-derived C5 and C6 volatiles. Further characterization reveals a role of TomLoxC in apocarotenoid production. 
The rare allele of TomLoxC may have undergone negative selection in the early domesticates, followed by more recent reintroduction. The PAV dynamics presented here provide a case model of the profound impact of human selection on the gene repertoire of an important modern crop, in addition to a more complete picture of the genome potential of tomato that will guide breeding for targeted traits.

Results

Pan-genome of cultivated tomato and close wild relatives

Genome sequences were collected/generated for a total of 725 tomato accessions in the Lycopersicon clade, including 372 SLL, 267 SLC, 78 SP and 8 SCG (3 S. cheesmaniae and 5 S. galapagense ) (Supplementary Tables 1 and 2 ). Among these accessions, genome sequences of 561 were available from previous reports 1 , 7 , 8 , 9 , 12 , 13 , 14 , whereas genomes of 166 accessions (of which 2 were also sequenced previously), including 121 SLC, 26 SP and 19 SLL, were sequenced in this study to obtain broader regional and global representation. Among the 725 accessions, 98 and 242 had sequence coverage of more than 20× and 10×, respectively. The genome for each accession was de novo assembled, producing a total of 306 Gb of contigs longer than 500 base pairs (bp) with an N50 value (the minimum contig length needed to cover 50% of the assembly) of 3,180 bp (Supplementary Table 2 , Supplementary Fig. 1 and Supplementary Note ). All assembled contigs were compared with the reference genome to identify previously unknown sequences. A total of 4.87 Gb of nonreference sequence with identity <90% to the reference genome was obtained (Supplementary Table 2 , Supplementary Fig. 2 and Supplementary Note ). After removing redundancies, 449,614 sequences with a total length of 351 Mb comprising the nonreference genome remained. Approximately 78.2% of the nonreference genome comprised repetitive elements, which was higher than that of the reference genome (63.5%) 1 .
A total of 4,873 protein-coding genes were predicted in the nonreference genome (Supplementary Table 3 ). The reference ‘Heinz 1706’ genome contains 35,768 protein-coding genes (version ITAG3.2), of which 272 were potential contaminations and thus were removed (Supplementary Table 4 and Supplementary Note ). The tomato pan-genome, including reference and nonreference genome sequences, had a total size of 1,179 Mb and contained 40,369 protein-coding genes. Among the nonreference genes, 2,933 could be annotated with gene ontology (GO) terms or Pfam domains. A total of 332 nonreference genes were covered by ‘Heinz 1706’ reads with a coverage fraction greater than 95%, and 170 were fully covered, suggesting that they were not assembled in the reference genome (Supplementary Table 3 ). Among them were two well-characterized genes, GAME8 ( TomatoPan006500 ), which encodes a CYP72 family P450 protein involved in regulation of steroidal glycoalkaloid biosynthesis 15 , and PINII ( TomatoPan007410 ), which encodes a wound-inducible proteinase inhibitor 16 . In addition, several other well-characterized genes, including Hcr9-OR2A ( TomatoPan017870 , a homolog of Cf-9 involved in Cladosporium fulvum resistance 17 ), I2C-1 ( TomatoPan019380 , a disease resistance gene 18 ) and Pto ( TomatoPan028750 , a protein kinase gene conferring disease resistance 19 ), were not covered by any ‘Heinz 1706’ reads, suggesting their absence in the reference accession. Moreover, we found that 69.6% of the reference and 22.4% of the nonreference genes were expressed at >1 reads per kilobase (kb) of exon per million mapped reads (RPKM) in fruit pericarp tissues at the orange stage (about 75% ripe) in at least 1 of 397 accessions for which RNA-sequencing (RNA-Seq) data were available 4 . Gene expression analysis indicated generally lower expression levels of nonreference genes than reference genes (Supplementary Fig. 3a ), similarly to pan-genome analysis of rice 20 . 
Given that the tomato RNA-Seq data used emanated from a single tissue at one developmental stage, these expression frequencies represent a conservative estimate with many additional nonreference genes likely expressed in other tissues.

PAVs in protein-coding genes

PAVs in genes among the wild, early domesticates and modern tomato accessions can reveal genetic changes through breeding history. High-depth sequencing data are preferable for robust PAV calling and have been deployed in several previous plant pan-genome studies examining relatively small numbers of accessions 20 , 21 , 22 , 23 , 24 , 25 , 26 . However, if sequencing data are uniformly distributed across the genome, low-depth data can still effectively cover a large proportion of the genome and provide sufficient evidence for PAV calling. Based on our analysis, we limited our investigation to a total of 586 accessions (294 SLL, 225 SLC, 60 SP and 7 SCG) for PAV calling ( Supplementary Note and Supplementary Fig. 4 ). The total number of detected genes from the 586 accessions was 40,283, accounting for 99.97% of genes in the tomato pan-genome (40,369). Similarly to Gordon et al. 24 , we categorized genes in the tomato pan-genome according to their presence frequencies: 29,938 (74.2%) core genes shared by all the 586 accessions, and 3,232 softcore, 5,912 shell and 1,287 cloud genes defined as present in more than 99%, 1–99% and less than 1% of the accessions, respectively (Fig. 1a and Supplementary Table 5 ). The core and softcore groups contained highly conserved genes, whereas the shell and cloud groups contained the so-called flexible genes. Modeling of the pan-genome size by iteratively randomly sampling accessions suggested a closed pan-genome with a finite number of both pan and core genes (Fig. 1b ).
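The frequency thresholds above amount to a simple classification rule. A minimal sketch in Python; the gene-to-count mapping and the demo values are illustrative, not the paper's data:

```python
# Sketch: classify pan-genome genes into core/softcore/shell/cloud by the
# presence frequencies described in the text (present in 100%, >99%, 1-99%,
# and <1% of the 586 accessions, respectively). Inputs are illustrative.

def classify_genes(presence_counts, n_accessions):
    """presence_counts: dict mapping gene ID -> number of accessions carrying it."""
    classes = {}
    for gene, count in presence_counts.items():
        freq = count / n_accessions
        if count == n_accessions:
            classes[gene] = "core"        # shared by every accession
        elif freq > 0.99:
            classes[gene] = "softcore"    # present in >99% but not all
        elif freq >= 0.01:
            classes[gene] = "shell"       # present in 1-99%
        else:
            classes[gene] = "cloud"       # present in <1%
    return classes

# Illustrative counts out of 586 accessions
demo = {"geneA": 586, "geneB": 583, "geneC": 300, "geneD": 3}
print(classify_genes(demo, 586))
```

The core and softcore classes together correspond to the "highly conserved" genes of the text, and shell plus cloud to the "flexible" genome.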
The most striking feature of the tomato pan-genome was its high core gene content (74.2%), as compared with those of Arabidopsis thaliana 23 (70%), Brassica napus 25 (62%), bread wheat 26 (64%), rice 11 (54%), wild soybean 22 (49%) and Brachypodium distachyon 24 (35%). Only Brassica oleracea 21 was higher (81%), although it is noteworthy that this pan-genome was based on only eight cultivated and one wild accession, and would likely shrink in core gene representation if additional accessions were sequenced.

Fig. 1: Pan-genome of tomato. a , Composition of the tomato pan-genome. b , Simulations of the increase of the pan-genome size and the decrease of core-genome size. Accessions were sampled as 10,000 random combinations of each given number of accessions. Upper and lower edges of the purple and green areas correspond to the maximum and minimum numbers of genes, respectively. Solid black lines indicate the pan- and core-genome curves fitted using points from all random combinations according to the models proposed by Tettelin et al. 41 .

The reference genome contained the majority of highly conserved genes (99.6%) but only around one-third of the flexible genes. We also observed lower expression levels of the flexible genes compared with conserved genes (Supplementary Fig. 3b ), in line with reports in A. thaliana 23 and B. distachyon 24 . Moreover, conserved reference and nonreference genes displayed similar expression levels, whereas the flexible reference genes generally had higher expression levels than flexible nonreference genes (Supplementary Fig. 3c ). Within the flexible genome, the occurrence of reference and nonreference genes displayed distinct distribution patterns (Supplementary Fig. 5 ): most of the former were sporadically absent in a small number of accessions, whereas the majority of the latter could be found in only a few accessions.
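The resampling procedure behind the pan- and core-genome curves (Fig. 1b) can be sketched as follows; the toy per-accession gene sets stand in for real PAV calls, and the sampling scheme is a simplified assumption:

```python
# Sketch: estimate pan- and core-genome size curves by repeatedly sampling
# random subsets of accessions, tracking the union (pan) and intersection
# (core) of their gene sets, as in the Fig. 1b simulation. Toy data only.
import random

def pan_core_curves(presence_sets, n_samples=1000, seed=0):
    """presence_sets: one set of gene IDs per accession."""
    rng = random.Random(seed)
    n = len(presence_sets)
    curves = []
    for k in range(1, n + 1):
        pan_max, core_min = 0, None
        for _ in range(n_samples):
            subset = rng.sample(presence_sets, k)
            pan_max = max(pan_max, len(set().union(*subset)))
            core = len(set.intersection(*subset))
            core_min = core if core_min is None else min(core_min, core)
        # report (subset size, max pan size, min core size), matching the
        # upper/lower envelope idea of the figure
        curves.append((k, pan_max, core_min))
    return curves

# Three toy accessions sharing a two-gene "core"
acc = [{"a", "b", "c"}, {"a", "b", "d"}, {"a", "b", "e"}]
print(pan_core_curves(acc))
```

A closed pan-genome corresponds to the pan curve plateauing as k grows, as reported for tomato.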
The largest groups of genes in the flexible genome included those involved in the oxidation-reduction process, regulation of transcription and defense response (Supplementary Fig. 6a ). Compared with the entire pan-genome, genes in the flexible genome were significantly enriched with those involved in biological processes, such as defense response, photosynthesis and biosynthetic processes (Supplementary Fig. 6b ). It thus could be anticipated that divergence within the flexible genes among different tomato accessions would be related to corresponding phenotypic and metabolic variations.

Selection of gene PAVs during tomato breeding

Genomes of wild accessions (SP and SCG) encoded significantly more genes than SLC, whereas SLC contained significantly more genes than SLL (Fig. 2a ), suggesting a general trend of gene loss during tomato domestication and subsequent improvement. Furthermore, more genes were lost during domestication than improvement. Phylogenetic and principal component analyses using the PAVs suggested that wild accessions clearly separated from domesticated accessions with only a few exceptions, and the two domesticated groups (SLC and SLL) separated but with clear overlaps (Fig. 2b,c ).

Fig. 2: PAVs of genes in wild and cultivated tomatoes. a , Violin plots showing the number of detected genes in each group. Groups labeled with different letters indicate significant difference in gene contents at P < 0.01 (Tukey’s HSD test). Three lines (from the bottom to the top) in each violin plot show the location of the lower quartile, the median and the upper quartile, respectively. b , Principal component analysis based on PAVs. c , Maximum-likelihood tree and model-based clustering of the 586 accessions with different numbers of ancestral kinships ( K = 2, 3, 4 and 5) using the 10,345 identified PAVs.

Clustering of tomato accessions based on gene PAVs could be explained by geographic origin and domestication stage (Fig.
2c , Supplementary Fig. 7 and Supplementary Note ). A small SP clade (SP2), nested in SLC, including nine accessions from the coastal region of northern Ecuador, possessed significantly fewer genes than the phylogenetically separated main SP clade (SP1), implying that environmental adaptation within SP may have taken place in this region. The continuing decrease of gene content and wild ancestral proportions of SLC accessions from Ecuador and Peru to Mesoamerica suggests that tomato domestication followed this trajectory. Similar gene content and homogeneous genetic structures were found in Mexican SLC and SLL, and older cultivars found in Europe and the rest of the world, supporting the completion of tomato domestication in Mexico with minimal gene loss during subsequent improvement. Modern breeding has left a conspicuous genetic signature on contemporary tomato genomes, because modern elite inbred lines and hybrid cultivars possess significantly higher gene content than SLL heirlooms. This could be at least partially attributed to the intense introgression of disease resistance and abiotic stress tolerance alleles from wild species into modern cultivars 5 , 27 . To identify gene PAVs under selection during the history of tomato breeding, we conducted two sets of comparisons of flexible gene frequencies, between SLC and SP for ‘domestication’ (Fig. 3a ) and between SLL heirlooms and SLC for ‘improvement’ (Fig. 3b ). Ten accessions that were positioned into an unexpected species group (Fig. 2c ) were excluded from the downstream analyses. For each comparison, genes with significantly different frequencies between the two groups were identified as selected genes. We treated genes with higher frequencies in SLC than SP, or in SLL heirlooms than SLC as possible favorable genes, and those with lower frequencies as possible unfavorable genes. We note that the selection or loss of any particular gene could be random or due to respective positive or negative selection. 
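A frequency comparison of this kind can be sketched with a two-proportion z-test; the paper does not state which significance test it used, so both the test and the counts below are assumptions for illustration:

```python
# Sketch: flag candidate favorable/unfavorable genes by comparing presence
# frequencies between two groups (e.g. SP vs SLC for "domestication") with a
# two-proportion z-test. Test choice, alpha, and counts are illustrative
# assumptions, not the paper's exact procedure.
import math

def two_prop_z(present_a, n_a, present_b, n_b):
    """Return (frequency difference a-b, two-tailed p-value)."""
    p_a, p_b = present_a / n_a, present_b / n_b
    pooled = (present_a + present_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    if se == 0:
        return p_a - p_b, 1.0
    z = (p_a - p_b) / se
    return p_a - p_b, math.erfc(abs(z) / math.sqrt(2))

def classify_selection(present_wild, n_wild, present_dom, n_dom, alpha=0.01):
    diff, p_val = two_prop_z(present_dom, n_dom, present_wild, n_wild)
    if p_val >= alpha:
        return "not significant"
    # higher frequency in the later group = possibly favorable, lower = unfavorable
    return "favorable" if diff > 0 else "unfavorable"

# Gene present in all 60 SP but only 30 of 225 SLC accessions (illustrative)
print(classify_selection(60, 60, 30, 225))
```

With the group sizes used in the paper (60 SP, 225 SLC, 294 SLL), such a gene would be flagged as a candidate unfavorable gene during domestication.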
In total, we identified 120 favorable and 1,213 unfavorable genes during domestication, and 12 favorable and 665 unfavorable genes during improvement (Supplementary Table 5 ). These results suggest that more genes were selected against than selected for during both domestication and improvement of tomato. For genes favorable or unfavorable in one stage, most (94.9%) showed the same trend in the other stage (Fig. 3c,d ), suggesting the possibility of common and continued selection preferences from domestication to improvement.

Fig. 3: Gene selection preference during tomato domestication and improvement. a , b , Scatter plots showing gene occurrence frequencies in SP and SLC ( a ) and in SLL heirlooms and SLC ( b ). c , d , Occurrence frequency patterns of putative selected genes during domestication ( c ) and improvement ( d ). e – g , Enriched GO terms in unfavorable genes during domestication ( e ) and improvement ( f ), and favorable genes during domestication ( g ).

Enrichment analysis indicated that defense response was the most enriched group of unfavorable genes during both domestication and improvement, and especially for genes related to cell wall thickening (Fig. 3e,f ), which influences abiotic and biotic stress responses through fortification of the physical and mechanical strength of the cell wall. Cell wall modifications also can contribute to fruit firmness and flavor 28 , 29 . Aging and plant organ senescence were additional enriched classes of unfavorable genes, possibly reflecting selection for increased storability and shelf-life. Of the 120 favorable genes selected during domestication, 21 were related to oxidation-reduction processes (Fig. 3g ). The unfavorable and favorable genes selected during domestication also showed distinct molecular functions, with the former enriched for ADP binding and the latter for cofactor, coenzyme and flavin adenine dinucleotide binding (Fig. 3e–g ).
No significantly enriched gene families were found in favorable genes during improvement. It is worth noting that among the unfavorable genes, seven were not full length (Supplementary Table 6 ). These included TomatoPan028690 , which corresponded to the truncated part of a fruit weight gene Cell Size Regulator ( CSR ) as previously reported 30 . TomatoPan028690 was detected in all SP, 88.6% of SLC and 14.4% of SLL heirlooms, supporting that the deletion allele arose during domestication and has been largely fixed in cultivated tomatoes. Another nonreference gene, TomatoPan005770 , corresponded to the 5′ part of a full-length gene encoding a UDP-glycosyltransferase, and the reference gene Solyc05g006140 corresponded to the 3′ portion (Supplementary Table 6 and Supplementary Fig. 8 ). UDP-glycosyltransferases have been reported to catalyze the glycosylation of plant secondary metabolites and play an important role in plant defense responses 31 . TomatoPan005770 has experienced strong negative selection during both domestication and improvement (present in all SP, 13.2% of SLC and 1.4% of SLL heirlooms), consistent with the loss of disease resistance in SLL heirlooms. Notably, for three of the seven genes, both truncated and full-length transcripts were expressed in orange-stage fruit (Supplementary Table 6 ), implying that these truncated genes might be functional, such as the gain-of-function truncation of CSR as reported in Mu et al. 30 .

Selection of promoter PAVs during tomato breeding

A total of 90,929 nonreference contigs could be localized to defined regions (with both ends aligned) or linked sites (one end aligned) on the ‘Heinz 1706’ genome (Supplementary Table 7 ). The majority of these sequences were found in intergenic regions, whereas only 8.7% (7,912) overlapped with reference genes, much lower than the genic content of the reference genome (18.0%), implying a functional constraint against these structural variations.
There were 3,741 nonreference sequences localized in putative promoter regions (<1 kb to gene start positions) of 2,823 reference genes. To identify promoter sequences possibly under selection during tomato domestication and improvement, we checked PAV patterns of these promoters, as well as those in the reference genome (Supplementary Fig. 9a,b ). A total of 856 and 388 sequences were under selection during domestication and improvement, respectively (Supplementary Table 8 ). Similar to the selection pattern of protein-coding genes, domestication exerted greater influence on the promoter sequences than did improvement. Among these promoter sequences, 717 (83.8%) and 385 (99.2%) were unfavorable during domestication and improvement, respectively. A conserved selection preference from domestication to improvement was also observed for most unfavorable promoters, with 89.9% of them displaying a similar trend in frequency changes from SP to SLC and from SLC to SLL (Supplementary Fig. 9c,d ). For the 980 promoter sequences that were under selection in at least one of the two stages, we checked the expression of their downstream genes in the 397 accessions for which RNA-Seq data were available for orange-stage fruit 4 . Of these promoters, 240 had downstream genes with significantly different expression (adjusted P value < 0.01, two-tailed Student’s t -test) associated with their presence and absence (Supplementary Table 8 ), suggesting that human selection influenced fruit quality or additional phenotypes in some instances by targeting regulatory sequences.

A rare promoter allele that modifies fruit flavor

Aroma volatiles have long been known to provide some of the unique flavor components of tomato fruit 32 , 33 . Recent studies revealed the importance of specific volatiles to the overall liking of tomato fruit, as well as for aroma intensity and specific flavor characteristics 9 , 34 .
In particular, short-chain alcohols and aldehydes derived from fatty acids, amino acids and carotenoids play crucial roles in determining consumer acceptance of tomato fruit 9 , 34 . Many of the favorable alleles at multiple loci have been lost in recent years as a result of breeding emphasizing production over quality traits 9 . Our pan-genome analysis identified an ~4-kb substitution in the promoter region of TomLoxC ( Solyc01g006540 ) ( Supplementary Note , Supplementary Table 9 and Supplementary Fig. 10 ), which encodes a 13-lipoxygenase previously shown to be essential for C5 and C6 green-leaf volatile production in tomato fruit 35 , 36 . The two identified alleles were 149 bp upstream of the transcriptional start site: a 4,724-bp allele present in the reference ‘Heinz 1706’ genome (reference allele) and a 4,151-bp nonreference allele captured in our pan-genome. The nonreference allele was present in 91.2% of SP, 15.1% of SLC, and 2.2% of SLL heirlooms, indicating strong negative selection during both domestication and improvement. Further analysis indicated that only six accessions (two SP and four SLC) contain the homozygous nonreference allele, whereas 95 (50 SP, 29 SLC, 5 heirloom SLL, 10 modern SLL and 1 SCG) contain both alleles and the remaining 473 possess the homozygous reference allele (Fig. 4a and Supplementary Table 9 ). The frequency of the nonreference allele was highest in SP (47.4%) and declined dramatically in SLC (8.4%) and SLL heirlooms (1.1%), but interestingly recovered in modern SLL cultivars (7.2%), most likely because of recent introgressions from wild into cultivated tomatoes. Gene expression analysis based on RNA-Seq data from orange-stage fruit revealed that accessions containing both alleles displayed significantly higher expression levels of TomLoxC than those homozygous for either the reference or nonreference allele (Fig. 4b and Supplementary Table 10 ).

Fig. 4: Variation of TomLoxC expression under different promoter alleles.
a , Proportion of accessions within each group that have different TomLoxC promoter alleles. The numbers of accessions used for SP, SLC, heirloom and modern SLL are 57, 219, 222 and 69, respectively. b , Expression levels of TomLoxC in orange-stage fruit of accessions with different promoter alleles. RNA-Seq data for four accessions with the homozygous nonreference allele were generated in this study, and those for the remaining accessions were obtained from Zhu et al. 4 . The numbers of accessions with homozygous reference, nonreference and heterozygous TomLoxC promoter alleles are 295, 5 and 43, respectively. Groups labeled with different letters indicate significant difference in gene contents at P < 0.01 (Tukey’s HSD test). c , Expression of TomLoxC during fruit development of LA2093 and NC EBR-1. n = 3 independent experiments for LA2093 and 4 for NC EBR-1. Br, breaker; Br + 3d, breaker plus 3 d; heterozygous, containing both alleles; MG, mature green; nonreference, homozygous nonreference allele; reference, homozygous reference allele; ripe, red ripe fruit. Asterisks (**) indicate significant difference (two-tailed Student’s t -test; α < 0.01) of TomLoxC expression between LA2093 and NC EBR-1. For each boxplot, the lower and upper bounds of the box indicate the first and third quartiles, respectively, and the center line indicates the median.

Given the association of TomLoxC with fruit flavor, we performed quantitative trait locus (QTL) mapping for 65 volatiles, including those derived from nutritionally important molecules such as carotenoids, essential fatty acids and amino acids, using a recombinant inbred line (RIL) population (Supplementary Table 11 ). The RIL population was derived from a cross between LA2093, an SP accession containing the homozygous nonreference TomLoxC promoter allele, and NC EBR-1, an advanced breeding line harboring the homozygous reference allele 37 .
LA2093 and NC EBR-1 displayed contrasting expression patterns of TomLoxC during fruit development (Fig. 4c ). We identified 116 QTLs for 56 volatiles across the 12 chromosomes ( Supplementary Note , Supplementary Figs. 11 and 12 and Supplementary Tables 12 – 17 ). Interestingly, 28 volatiles, including 19 fatty-acid-derived volatiles and 9 apocarotenoids, shared a QTL at the same location on chromosome 1 spanning a 153-kb interval (Fig. 5a ) containing 19 genes including TomLoxC , which had the highest expression levels in RILs and largest expression difference between the two parents (Supplementary Table 18 ). The NC EBR-1 allele was associated with high levels of all 28 volatiles in concert with elevated expression of TomLoxC (Supplementary Table 12 ). These results strongly suggest that TomLoxC is the candidate gene underlying this QTL and might additionally play a role in apocarotenoid biosynthesis.

Fig. 5: Involvement of TomLoxC in apocarotenoid biosynthesis. a , QTL interval for apocarotenoids and fatty-acid-derived volatiles on chromosome 1. b , Expression levels of TomLoxC and SlCCD1B in ripe fruits of TomLoxC-AS ( TomLoxC antisense) and M82 plants. n = 3 independent experiments for M82 and 4 for TomLoxC-AS. c , Relative levels of apocarotenoids in ripe fruits of TomLoxC-AS and M82 plants. n = 3 independent experiments for M82 and 4 for TomLoxC-AS. d , e , Relative levels of apocarotenoids in Arabidopsis leaves of AtLOX2 mutants and the corresponding controls. n = 6 independent experiments for CS3748, CS3749 and col-0, and 11 for lox2-1 . Volatiles accumulated in significantly different levels (two-tailed Student’s t -test) in target plants compared with the controls are marked with asterisks (* α < 0.05 or ** α < 0.01). Apocarotenoids with QTL at the TomLoxC position are in red text and those without QTL are in black text.
For each boxplot, the lower and upper bounds of the box indicate the first and third quartiles, respectively, and the center line indicates the median.

To verify the involvement of TomLoxC in apocarotenoid biosynthesis, we determined levels of 11 apocarotenoids and fatty-acid-derived volatiles in ripe fruits of transgenic tomatoes in which TomLoxC expression was repressed 36 , and the expression of a previously known apocarotenoid biosynthesis gene, SlCCD1B ( Solyc01g087260 ), remained unchanged (Fig. 5b ). As expected, the majority of fatty-acid-derived volatiles showed significantly reduced levels in transgenic fruits (Supplementary Table 19 ). The levels of the nine apocarotenoids having a QTL at the TomLoxC position were also significantly reduced in transgenic fruits, whereas the levels of two other apocarotenoids without a QTL at this region, as well as their corresponding carotenoid substrates, were not affected (Fig. 5c and Supplementary Table 19 ). We further investigated apocarotenoid levels in two Arabidopsis mutants of the AtLOX2 gene, the closest homolog of TomLoxC . Both mutants showed significantly reduced levels of specific apocarotenoids (Fig. 5d,e ), further supporting the contribution of 13-lipoxygenases (for example, TomLoxC and AtLOX2) to apocarotenoid biosynthesis. Even though the involvement of LOX enzymes in volatile and nonvolatile apocarotenoid production was demonstrated in vitro in a co-oxidation mechanism coupled to fatty acid catabolism 38 ( Supplementary Note ), it is demonstrated here to be active in vivo. Furthermore, transgenic tomato fruits with decreased expression of SlHPL 35 , which follows LOX in C6 volatile biosynthesis, accumulated higher levels of C5 volatiles and cyclic apocarotenoids ( Supplementary Note and Supplementary Figs. 13 and 14 ). Because the C5, not the C6, pathway has been proposed to additionally involve a LOX activity, this further supports the co-oxidation hypothesis.
Finally, transgenic tomato with reduced SlCCD1B expression showed only up to 60% reduction in apocarotenoid levels 36 . The existence of a non-carotenoid cleavage dioxygenase pathway to apocarotenoids might explain the residual accumulation of these compounds ( Supplementary Note ).

Discussion

We have constructed a pan-genome of cultivated tomato and its close relatives, which includes a 351-Mb sequence and 4,873 protein-coding genes not captured by the reference genome. The observation that 25.8% of genes in the pan-genome exhibit varying degrees of PAVs highlights the diverse genetic makeup of tomato with potential utility for future improvement. It is well known that cultivated tomatoes contain a narrow genetic base compared with their wild progenitors, although the specific lineages of SP contributing to domestication remain unknown. Here we show that at least part of this genetic diversity reduction could be attributed to substantial gene losses during domestication and improvement. Our PAV analysis suggests the loss of ~200 genes within SP took place in northern Ecuador, with gene losses continuing through subsequent domestication of SLC in South America and on to Mesoamerica. These findings point to northern Ecuador as a region for assessment of further accessions that may encompass additional genetic diversity useful for tomato breeding and in identifying more precisely the origins of domesticated tomatoes. Examination of the pan-genome further revealed that substantial gene content recovery has been achieved in modern commercial cultivars possibly because of intense introgression from diverse wild donors. Comparative analyses of the tomato pan-genome revealed extensive domestication- and improvement-associated loci and genes, with an evident bias toward those involved in defense response.
It is unclear why these genes may have been disproportionally lost, although we speculate it could reflect a fitness cost of nonutilized defense genes (negative selection) or random loss caused by the absence of any positive selection force for their retention. Furthermore, it seems that selection against promoter regions that affect downstream gene expression had also shaped tomato domestication and improvement of genetic outcomes. Modern tomato breeding has primarily focused on yield, shelf-life and resistance to biotic and abiotic stresses 39 , often ignoring organoleptic/aroma quality traits that are difficult to select, resulting in decline of flavor-associated volatiles 9 . Because the reference genome is a modern processing tomato cultivar, at least some flavor-associated alleles may be absent in this accession. A nonreference allele of the TomLoxC promoter captured in the pan-genome represents a rare allele in cultivated tomatoes that reflects strong negative selection during domestication. Heterozygous TomLoxC promoter genotypes have the strongest expression in orange-stage fruit. Interestingly, the TomLoxC rare allele experienced a recovery in modern elite breeding lines (7.25% versus 1.13% in SLL heirlooms, all heterozygotes), consistent with its selection during modern breeding, possibly the consequence of selecting lines with superior stress tolerance in agricultural settings. In addition, QTL mapping pointed to TomLoxC as the cause of changed levels of flavor-associated lipid- and carotenoid-derived volatiles. Analysis of transgenic tomato fruit reduced in TomLoxC expression revealed a previously unknown alternative apocarotenoid production route, likely to be nonenzymatic, in addition to that initiated by carotenoid cleavage dioxygenases. Apocarotenoids are positively associated with flavor and overall liking of tomato fruit 9 , and are components of the tomato fruit aroma 40 . 
Because of their very low perception threshold 33 , apocarotenoids present an attractive target for improving tomato flavor at minimal metabolic expense. The tomato pan-genome harbors useful genetic variation that has not been available to researchers and breeders relying on the ‘Heinz 1706’ reference genome alone. We demonstrate here that such variation may have important phenotypic outcomes that could contribute to crop improvement. The constructed tomato pan-genome represents a comprehensive and important resource to facilitate mining of natural variation for future functional studies and molecular breeding.

Methods

Genome sequences of tomatoes in the Lycopersicon clade

Genome sequencing data of 561 tomato accessions in the Lycopersicon clade published previously 1 , 7 , 8 , 9 , 12 , 13 , 14 , including species SLL, SLC, SP and SCG, were downloaded from the National Center for Biotechnology Information Sequence Read Archive database (Supplementary Table 1 ). Genome sequences of a total of 166 additional accessions were generated here, with two shared among the previously sequenced 561 accessions. Genomic DNA was extracted from a single seedling from each of these 166 accessions using Qiagen’s DNeasy 96 Plant Kit. Paired-end libraries with insert sizes of ~500 bp were constructed using the NEBNext Ultra DNA Library Prep kit (Illumina Inc.) according to the manufacturer’s instructions and sequenced on an Illumina NextSeq platform using the paired-end 2 × 150 bp mode. For quality evaluation, we also generated Illumina genome data of 45× coverage for the reference cultivar ‘Heinz 1706’.

Pan-genome construction

Raw Illumina reads were processed to consolidate duplicated read pairs into unique read pairs. The resulting reads were then processed to trim adapters and low-quality sequences using Trimmomatic 42 with parameters ‘SLIDINGWINDOW:4:20 MINLEN:50’.
The final high-quality cleaned Illumina reads from each sample were de novo assembled using Megahit 43 with default parameters. The assembled contigs with lengths >500 bp were kept and then aligned to the tomato reference genomes, including the nuclear genome 1 (version SL3.0), chloroplast genome 44 (GenBank accession no.: NC_007898.3 ) and mitochondrial genome (SOLYC_MT_v1.50), using the nucmer tool in the Mummer package 45 . A reliable alignment was defined as a continuous alignment longer than 300 bp with sequence identity higher than 90%. Contigs with no reliable alignments were kept as unaligned contigs. For contigs containing reliable alignments, if they also contained continuous unaligned regions longer than 500 bp, the unaligned regions were extracted as unaligned sequences. The unaligned contigs and unaligned sequences (>500 bp) were then searched against the GenBank nucleotide database using blastn 46 . Sequences with best hits from outside the green plants, or covered by known plant mitochondrial or chloroplast genomes, were possible contaminations and removed. The cleaned nonreference sequences from all accessions were combined. The redundant sequences were consolidated into unique contigs using CD-HIT 47 . To further remove redundancies, we performed all-versus-all alignments with nucmer and blastn, respectively. The resulting nonredundant sequences were subsequently aligned against the reference genome using blastn to ensure no sequences were redundant with the reference genome. In all of the above filtering steps, the sequence identity threshold was set to 90%. The final nonredundant nonreference sequences and the reference tomato genome 1 (version SL3.0) were merged as the pan-genome.
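The alignment-filtering rule described above (a reliable alignment is >300 bp at >90% identity; unaligned stretches >500 bp are extracted) can be restated as a small function. This is a simplified sketch; real nucmer output would require parsing delta/coords files and handling overlapping alignments more carefully:

```python
# Sketch of the nonreference-sequence extraction rule: given alignment
# intervals of a contig against the reference, return the regions that would
# be kept as nonreference sequence. Thresholds follow the text; interval
# handling is simplified relative to the actual nucmer-based pipeline.

def unaligned_regions(contig_len, alignments, min_aln=300, min_gap=500):
    """alignments: list of (start, end, identity) on the contig, 0-based half-open."""
    reliable = [(s, e) for s, e, ident in alignments
                if e - s > min_aln and ident > 0.90]
    if not reliable:
        return [(0, contig_len)]  # whole contig kept as an unaligned contig
    # merge reliable intervals, then report gaps longer than min_gap
    reliable.sort()
    merged = [list(reliable[0])]
    for s, e in reliable[1:]:
        if s <= merged[-1][1]:
            merged[-1][1] = max(merged[-1][1], e)
        else:
            merged.append([s, e])
    gaps, prev_end = [], 0
    for s, e in merged:
        if s - prev_end > min_gap:
            gaps.append((prev_end, s))
        prev_end = e
    if contig_len - prev_end > min_gap:
        gaps.append((prev_end, contig_len))
    return gaps

# 5-kb contig with two reliable alignments; the 2.4-kb middle gap is extracted
print(unaligned_regions(5000, [(0, 2000, 0.95), (4400, 4800, 0.92)]))  # -> [(2000, 4400)]
```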
The assembled contigs from the newly sequenced reads of the ‘Heinz 1706’ cultivar were aligned against the ‘Heinz 1706’ reference genome 1 , using the nucmer tool 45 , and sequences from the one-to-one alignment blocks were extracted and aligned with MUSCLE 48 , to validate the quality of the de novo assemblies. Putative assembly errors were identified based on sequence variants between the assembled contigs and the reference genome. Annotation of the tomato pan-genome A custom repeat library was constructed by screening the pan-genome using MITE-Hunter 49 and RepeatModeler ( ), and used to screen the nonreference genome to identify repeat sequences using RepeatMasker ( ). Protein-coding genes were predicted from the repeat-masked nonreference genome using MAKER2 (ref. 50 ). Ab initio gene prediction was performed using Augustus 51 and SNAP 52 . The ‘tomato’ model was selected for Augustus prediction, and SNAP was trained for two rounds based on RNA-Seq evidence according to MAKER2 instruction. RNA-Seq data of fruit pericarp tissues at the orange stage of 397 accessions reported in Zhu et al. 4 were used as transcript evidence. The raw RNA-Seq reads were processed to trim adapter and low-quality sequences using Trimmomatic 42 . Potential ribosomal RNA (rRNA) reads were filtered using SortMeRNA 53 . The final cleaned RNA-Seq reads were then mapped to the pan-genome using Hisat2 (ref. 54 ), and the resulting alignments were used to construct gene models using StringTie 55 . Furthermore, reads mapped to the nonreference genome were extracted and then de novo assembled for each individual accession using Trinity 56 . The assembled transcripts from all accessions were combined, and the redundant sequences were removed using CD-HIT 47 . The resulting nonredundant sequences were aligned to the nonreference genome using Spaln 57 . 
In addition, protein sequences of Arabidopsis, rice and all asterid species were downloaded from RefSeq and aligned to the nonreference genome using Spaln 57 . Finally, gene predictions based on ab initio approaches, and transcript and protein evidence were integrated using the MAKER2 pipeline 50 . A set of high-confidence gene models supported by transcript and/or protein evidence were generated by MAKER2. The remaining ab initio predicted gene models were checked against the InterPro domain database using InterProScan 58 . Gene models containing InterPro domains were recovered and added to the final predicted gene set. Predicted genes with deduced protein length shorter than 50 amino acids, or overlapping with repeat sequences for more than 50% of their transcript length were removed. Genes were functionally annotated by comparing their protein sequences against the GenBank nonredundant database and InterPro domain database. GO annotation and enrichment analysis were performed using the Blast2GO suite 59 . PAV analysis Genome reads from each accession were aligned to the pan-genome using BWA-MEM 60 with default parameters. The presence or absence of each gene in each accession was determined using SGSGeneLoss 61 . In brief, for a given gene in a given accession, if less than 20% of its exon regions were covered by at least two reads (minCov = 2, lostCutoff = 0.2), this gene was treated as absent in that accession, otherwise it was considered present. A maximum-likelihood phylogenetic tree was constructed based on the binary PAV data with 1,000 bootstraps using IQ-TREE 62 . Population structure based on the same PAV data was investigated using STRUCTURE 63 . Fifty independent runs for each K from 1 to 10 were performed with an admixture model at 50,000 Markov chain Monte Carlo (MCMC) iterations and a 10,000 burn-in period. The best K value was determined by the ‘Evanno’ method implemented in STRUCTURE HARVESTER 64 . 
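The per-gene presence/absence call described above (the SGSGeneLoss criterion with minCov = 2 and lostCutoff = 0.2) reduces to a simple coverage-fraction test. A minimal sketch, assuming a hypothetical per-base depth vector over a gene's exonic positions:

```python
# SGSGeneLoss-style call: a gene is scored absent in an accession when
# fewer than 20% of its exonic bases are covered by at least min_cov
# reads (minCov = 2, lostCutoff = 0.2); otherwise it is present.

def gene_present(per_base_depth, min_cov=2, lost_cutoff=0.2):
    covered = sum(1 for d in per_base_depth if d >= min_cov)
    return covered / len(per_base_depth) >= lost_cutoff

print(gene_present([0, 0, 0, 0, 3]))  # exactly 20% covered -> present
print(gene_present([0, 0, 0, 0, 1]))  # 0% covered -> absent
```

Running this over every gene and accession yields the binary PAV matrix used for the phylogenetic, STRUCTURE and PCA analyses.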
Principal component analysis using the PAV data was performed with TASSEL5 (ref. 65 ). To identify genes under selection during domestication or improvement, their presence frequencies in each of the three groups (SLL heirlooms, SLC and SP) were derived. The significance of the difference of the presence frequencies for each gene between the two compared groups (SP versus SLC for domestication and SLC versus SLL for improvement) was determined using Fisher’s exact test. The resulting raw P values of all genes in each of the two comparisons were then corrected via false discovery rate (FDR). Genes with significantly different frequencies (FDR < 0.001 and fold change >2) were identified as those under selection. GO enrichment analysis was performed for the favorable or unfavorable gene sets using the FatiGO package integrated in the Blast2GO suite 59 with a cutoff of FDR < 0.05. Anchoring of nonreference sequences and selection of promoter sequences For the nonreference sequences, if the ends of their source contigs had reliable and unique alignments to the reference genome (described earlier in this article), their genome positions could be assigned based on these alignments. For the remaining nonreference sequences, if they contained uniquely mapped hanging read pairs, that is, one read of the read pairs was uniquely mapped to the reference genome, their genomic positions on the reference genome could be deduced based on the alignments of these hanging read pairs. Because both of the earlier strategies were based on unique alignments, they might fail to localize sequences with extensive repeats on their ends. PAV patterns of promoters (<1 kb to gene start positions) in both reference and nonreference sequences were derived. For promoters in the nonreference sequences, only those connected to the downstream genes supported by three or more hanging read pairs were included in the analysis.
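The selection scan described above (a per-gene two-sided Fisher's exact test on presence counts between two groups, followed by FDR correction) can be sketched in pure Python. This is an illustrative re-implementation under stated assumptions, not the statistical package actually used; the example counts are invented:

```python
# Two-sided Fisher's exact test for a 2x2 table [[a, b], [c, d]] plus
# Benjamini-Hochberg FDR adjustment, both in pure stdlib Python.
from math import comb

def fisher_exact_2x2(a, b, c, d):
    """Two-sided p-value for the table [[a, b], [c, d]]."""
    r1, r2, c1, n = a + b, c + d, a + c, a + b + c + d
    denom = comb(n, c1)
    def prob(k):  # hypergeometric probability of top-left cell = k
        return comb(r1, k) * comb(r2, c1 - k) / denom
    lo, hi = max(0, c1 - r2), min(r1, c1)
    probs = [prob(k) for k in range(lo, hi + 1)]
    p_obs = prob(a)
    return sum(q for q in probs if q <= p_obs * (1 + 1e-9))

def benjamini_hochberg(pvals):
    """Return BH-adjusted p-values (FDR q-values)."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted, running_min = [0.0] * m, 1.0
    for rank, i in reversed(list(enumerate(order, start=1))):
        running_min = min(running_min, pvals[i] * m / rank)
        adjusted[i] = running_min
    return adjusted

# e.g. a gene present in 8/10 accessions of one group but 1/6 of another
print(fisher_exact_2x2(8, 2, 1, 5))
```

Genes passing both the FDR cutoff (<0.001) and the fold-change cutoff (>2) on presence frequency would then be flagged as under selection.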
A promoter sequence in a given accession was considered ‘present’ if at least 50% of its length was covered by two or more reads, whereas a promoter sequence was considered ‘absent’ if no more than 20% of its length was covered. For each promoter sequence, accessions not assigned with presence or absence were excluded from subsequent analyses. Based on their PAV patterns, the promoter sequences were analyzed to identify those under selection during domestication and improvement, using the same method as for protein-coding genes. RNA sequencing, SNP calling and expression analysis A total of 146 F 10 RILs and their two parents, S. lycopersicum breeding line NC EBR-1 and SP accession LA2093, were grown in triplicate in an open field in Live Oak, Florida. From each plant, at least four fruits were harvested at the red ripe stage, and pericarp tissues were flash-frozen in liquid N 2 and then pooled. Total RNA was extracted using the QIAGEN RNeasy Plant Mini Kit following the manufacturer’s instructions (QIAGEN). RNA quality was evaluated via agarose gel electrophoresis, and the quantity was determined on a NanoDrop (Thermo Fisher Scientific). Strand-specific RNA-Seq libraries were constructed from the total RNA using the protocol described in Zhong et al. 66 , and sequenced on an Illumina HiSeq2500 platform with single-end 100-bp read length. At least three independent biological replicates were prepared for each sample. In addition to LA2093, RNA-Seq data were also generated from orange-ripe fruits of four additional accessions (BGV006231, BGV006859, BGV006904 and BGV006906) with the homozygous nonreference allele of the TomLoxC promoter (Supplementary Table 10 ). Raw RNA-Seq reads were processed to remove adapters, low-quality sequences and poly-A/T tails using Trimmomatic 42 . Trimmed reads longer than 40 bp were kept and aligned to the SILVA rRNA database ( ) to filter out rRNA reads.
The resulting high-quality cleaned reads were aligned to the reference ‘Heinz 1706’ genome (version SL3.0) using HISAT2 (ref. 54 ) allowing two mismatches. Following alignments, raw counts for each gene were derived and normalized to RPKM. To identify SNPs across the RILs and the two parents, we aligned the cleaned RNA-Seq reads to the reference ‘Heinz 1706’ genome using STAR 67 with the two-pass method and default parameters. Duplicated reads in each RNA-Seq library were marked using Picard ( ), and read alignments from biological replicates of the same samples were combined. SNPs were called using GATK (Genome Analysis Toolkit) 68 following the online Best Practices protocol with recommended parameters for RNA-Seq data ( ). Beyond keeping only high-quality SNPs assigned as ‘PASS’ by GATK, we further filtered the calls to retain only those with different homozygous genotypes in the two parents, missing rate <0.2 and minor allele frequency >0.05. Volatile and carotenoid analyses Volatiles were analyzed via solid-phase microextraction (SPME) coupled to gas chromatography mass spectrometry according to Tikunov et al. 69 with minor modifications. In brief, 1.5 g frozen tissue powder was incubated for 2 min at 30 °C, and 1.5 ml of 100 mM EDTA (pH 7.5) was added to each sample and then thoroughly vortexed. Subsequently, 2 ml of the resultant slurry was transferred to a 10-ml glass vial containing 2.4 g CaCl 2 , and 20 µl of 10 p.p.m. 2-octanone (Sigma-Aldrich) was added as the internal standard. Samples were sealed and stored at 4 °C for no more than 1 d before analysis. Samples were preheated to 50 °C for 5 min, and volatiles were sampled with a 1-cm divinylbenzene/Carboxen/polydimethylsiloxane SPME fiber with a 30/50 µm film thickness (Supelco) at 50 °C for 30 min with 10 s agitation every 5 min.
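The RPKM normalization mentioned above for the gene-level counts is a simple closed-form calculation. A minimal sketch (variable names are illustrative):

```python
# RPKM (reads per kilobase of transcript per million mapped reads):
# RPKM = raw_count * 1e9 / (total_mapped_reads * gene_length_bp).

def rpkm(raw_count, gene_length_bp, total_mapped_reads):
    return raw_count * 1e9 / (total_mapped_reads * gene_length_bp)

# a 2 kb gene with 400 reads in a library of 20 million mapped reads
print(rpkm(400, 2000, 20_000_000))  # -> 10.0
```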
Volatiles were analyzed by gas chromatography–time of flight (TOF)–mass spectrometry (Pegasus 4D; LECO Corp.), using a CP-Sil 8 CB (30 m × 0.25 mm × 0.25 µm) fused-silica capillary column (Agilent). The SPME fiber was introduced to the gas chromatography inlet, which was set to 250 °C in splitless injection, and 10 min was allowed for thermal desorption. Helium was used as a carrier gas at a constant flow rate of 1 ml min−1 in gas saver mode. The initial oven temperature was set to 45 °C for 5 min, then raised to 180 °C at a rate of 5 °C per minute, and then to 280 °C at 25 °C per minute and held for an additional 5 min. The TOF mass spectrometer was operated in electron ionization (EI) mode with an ionization energy of 70 eV, and the electron multiplier voltage was set to 1,700 V. Mass spectrometry data from 41 to 250 m/z were stored at an acquisition rate of 8 spectra per second. Data processing was performed using LECO ChromaTOF software. To determine retention indices, we injected a mixture of straight-chain alkanes (C6–C25) into the column under the same conditions. Calculated retention indices and mass spectra were compared with the NIST mass spectral database for compound identification. Relative quantification was done based on single ion area normalized to the internal standard. Carotenoids were extracted according to Alba et al. 70 and analyzed using super-critical fluid chromatography equipped with a diode array detector according to Gonda et al. 71 . Map construction and QTL mapping To generate a map of genomic bins composed of the genotype of every individual in the RIL population, we used SNPbinner 71 with default parameters except that emission probability was set to 0.99. QTL analysis was performed using R/qtl (ref. 72 ) with a script developed by Spindel et al. 73 . In brief, interval mapping was used for initial QTL detection, followed by multiple-QTL-model analysis in additive-only mode.
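The retention-index calculation from the alkane ladder mentioned above can be sketched as follows. Under temperature programming, the linear retention index of van den Dool and Kratz is RI = 100 × (n + (t_x − t_n) / (t_{n+1} − t_n)), where t_n and t_{n+1} are the retention times of the alkanes bracketing the analyte's retention time t_x. The ladder times below are invented for illustration:

```python
# Linear (van den Dool-Kratz) retention index from an alkane ladder.
from bisect import bisect_right

def retention_index(t_x, alkanes):
    """alkanes: sorted list of (carbon_number, retention_time_min)."""
    times = [t for _, t in alkanes]
    i = bisect_right(times, t_x) - 1
    i = max(0, min(i, len(alkanes) - 2))   # clamp to a valid bracket
    (n, t_n), (_, t_n1) = alkanes[i], alkanes[i + 1]
    return 100 * (n + (t_x - t_n) / (t_n1 - t_n))

ladder = [(8, 6.0), (9, 8.0), (10, 10.0)]  # illustrative times only
print(retention_index(7.0, ladder))        # midway C8-C9 -> 850.0
```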
Traits that were not normally distributed (as determined by the Shapiro–Wilk W test) were transformed by log10 or square root, and outliers were removed to reach normal distribution. Traits that did not reach normal distribution after transformations were analyzed using nonparametric models. Functional characterization of TomLoxC and AtLOX2 Antisense transgenic tomato plants with decreased TomLoxC expression described in Chen et al. 36 and the corresponding wild-type plants (M82) were grown in triplicate in a greenhouse in Ithaca, New York, with a 16-h light period at 20 °C (night) to 25 °C (day). The Arabidopsis lox2-1 mutant 74 carrying a point mutation causing a premature stop of AtLOX2 was obtained from Prof. Edward E. Farmer (University of Lausanne, Switzerland). Seeds of the AtLOX2 reduced-expression line (CS3748) and the corresponding control (CS3749) were obtained from the Arabidopsis Biological Resource Center. Arabidopsis plants were grown in soil with a 16-h light period at 22 °C with 60% humidity and were harvested after 6 weeks. Each sample was composed of two plants from the same genotype to achieve sufficient plant material needed for the SPME analysis. Real-time PCR Total RNA was treated with DNase (Invitrogen), and complementary DNA was synthesized using ProtoScript II reverse transcriptase (New England Biolabs). Real-time PCR was carried out on an Applied Biosystems QuantStudio 6 Flex Real-Time PCR System using the SYBR Green master mix (Life Technologies). Primer sequences used for SlCCD1B , TomLoxC and SlRPL2 ( Solyc10g006580 ; the internal control) are listed in Supplementary Table 20 . Relative expression values were determined as 2^−(ΔΔCt) (ref. 75 ). Statistical analysis The statistical tests used are described throughout the article and in the figure legends.
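The 2^−ΔΔCt relative-expression calculation above (the Livak method) can be sketched with illustrative Ct values; the numbers below are invented for the example:

```python
# Livak 2^-(ddCt) method: normalize the target gene's Ct to the
# internal-control gene within each sample, then compare sample to
# control condition.

def relative_expression(ct_target_sample, ct_ref_sample,
                        ct_target_control, ct_ref_control):
    delta_ct_sample = ct_target_sample - ct_ref_sample
    delta_ct_control = ct_target_control - ct_ref_control
    return 2 ** -(delta_ct_sample - delta_ct_control)

# target amplifies 2 cycles earlier in the sample than in the control
# (reference gene unchanged) -> fourfold higher expression
print(relative_expression(22.0, 18.0, 24.0, 18.0))  # -> 4.0
```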
Specifically, Fisher’s exact test with FDR correction for multiple comparisons was used to identify genes selected during domestication or improvement and to identify enriched GO terms. Tukey’s honest significant difference (HSD) test was used to determine the significance of differences in detected gene counts among different tomato groups, in TomLoxC expression levels among accessions with different promoter types, and in expression levels of genes belonging to different groups. The two-tailed Student’s t -test was performed to compare TomLoxC expression levels between NC EBR-1 and LA2093 at each fruit developmental stage, expression levels of TomLoxC and SlCCD1B between TomLoxC-AS and M82 fruits, relative levels of each volatile between mutants and corresponding wild-type controls, expression levels of genes between presence and absence of the promoters, and expression levels between reference and nonreference and between conserved and flexible genes. Reporting Summary Further information on research design is available in the Nature Research Reporting Summary linked to this article. Data availability Raw genome and RNA-Seq reads have been deposited into the National Center for Biotechnology Information Sequence Read Archive under accession codes SRP150040 , SRP186721 and SRP172989 , respectively. The nonreference genome sequences and annotated genes of the tomato pan-genome and SNPs called from the RIL population are available via the Dryad Digital Repository ( ). Change history 23 May 2019 In the version of the article originally published, the URL in the ‘Data availability’ section was hyperlinked incorrectly. In addition, the copyright holder was listed as ‘The Author(s)’, but the copyright line should have read ‘This is a U.S. government work and not under copyright protection in the U.S.; foreign copyright protection may apply, 2019’. The errors have been corrected in the HTML and PDF versions of the article.
Tomato breeders have traditionally emphasized traits that improve production, like larger fruits and more fruits per plant. As a result, some traits that improved other important qualities, such as flavor and disease resistance, were lost. Researchers from Boyce Thompson Institute and colleagues from partnering institutions have created a pan-genome that captures all of the genetic information of 725 cultivated and closely related wild tomatoes, establishing a resource that promises to help breeders develop more flavorful and sustainable varieties. As described in a paper published in Nature Genetics on May 13, the researchers found 4,873 new genes and identified a rare version of a gene that can make tomatoes tastier. "The pan-genome essentially provides a reservoir of additional genes not present in the reference genome," said BTI faculty member Zhangjun Fei. "Breeders can explore the pan-genome for genes of interest, and potentially select for them as they do further breeding to improve their tomatoes." The first tomato genome sequence, from a large modern variety, was published in 2012, revealing approximately 35,000 genes and facilitating crop improvement efforts. Since then, several hundred additional tomato genotypes have been sequenced. The current study is the first to mine all of these genome sequences—as well as another 166 new sequences generated by the researchers—to hunt for genes that were absent from the reference genome. "During the domestication and improvement of the tomato, people mostly focused on traits that would increase production, like fruit size and shelf-life," Fei said, "so some genes involved in other important fruit quality traits and stress tolerance were lost during this process." Indeed, the researchers found that genes involved in defense responses to different pathogens were the most common group of genes that were missing in the domesticated varieties of tomato.
"These new genes could enable plant breeders to develop elite varieties of tomatoes that have genetic resistance to diseases that we currently address by treating the plants with pesticides or other cost-intensive and environmentally unfriendly measures," added James Giovannoni, a BTI faculty member and USDA scientist. [Image caption: The new tomato pan-genome found about 5,000 previously undocumented tomato genes, including genes for flavor. Credit: Agricultural Research Service-USDA] Giovannoni and Fei are co-corresponding authors on the paper and adjunct professors in Cornell University's School of Integrative Plant Science. In addition to recovering these "lost" genes, the researchers also analyzed the pan-genome to find genes and gene mutations that are rare among the modern cultivars. This analysis identified a rare version of a gene, called TomLoxC, which contributes to a desirable tomato flavor. The rare version is present in 91.2% of wild tomatoes but only 2.2% of older domesticated tomatoes. "The rare version of TomLoxC now has a frequency of 7% in modern tomato varieties, so clearly the breeders have started selecting for it, probably as they have focused more on flavor in the recent decades," Giovannoni said. The researchers also discovered a new role for the TomLoxC gene. "TomLoxC appears, based on its sequence, to be involved in producing compounds from fats," said Giovannoni. "We found it also produces flavor compounds from carotenoids, which are the pigments that make a tomato red. So it had an additional function beyond what we expected, and an outcome that is interesting to people who enjoy eating flavorful tomatoes." Ultimately, the tomato pan-genome could benefit the economy and the consumer, according to Clifford Weil, program director of the NSF's Plant Genome Research Program, which supported the work. "How many times do you hear someone say that tomatoes from the store just don't quite measure up to heirloom varieties?" Weil asked.
"This study gets to why that might be the case and shows that better tasting tomatoes appear to be on their way back." Tomatoes are one of the most consumed vegetables in the world—although technically we eat their fruit—with 182 million tons worth more than $60 billion produced each year. Tomatoes also are the second most-consumed vegetable in the U.S., with each American consuming an average of 20.3 pounds of fresh tomatoes and 73.3 pounds of processed tomatoes each year. Researchers from the University of Florida, Cornell, the U.S. Department of Agriculture, the Pennsylvania State University, the Polytechnic University of Valencia, the University of Georgia and the Chinese Academy of Agricultural Sciences also participated in the study. This research was supported by grants from the U.S. National Science Foundation, the European Research Area Network for Coordinating Action in Plant Sciences, the USDA-ARS and the U.S.-Israel Binational Agricultural Research and Development Fund.
10.1038/s41588-019-0410-2
Computer
AI-powered tool predicts cell behaviors during disease and treatment
scGen predicts single-cell perturbation responses, Nature Methods (2019). DOI: 10.1038/s41592-019-0494-8 Journal information: Nature Methods
http://dx.doi.org/10.1038/s41592-019-0494-8
https://techxplore.com/news/2019-07-ai-powered-tool-cell-behaviors-disease.html
Abstract Accurately modeling cellular response to perturbations is a central goal of computational biology. While such modeling has been based on statistical, mechanistic and machine learning models in specific settings, no generalization of predictions to phenomena absent from training data (out-of-sample) has yet been demonstrated. Here, we present scGen ( ), a model combining variational autoencoders and latent space vector arithmetics for high-dimensional single-cell gene expression data. We show that scGen accurately models perturbation and infection response of cells across cell types, studies and species. In particular, we demonstrate that scGen learns cell-type and species-specific responses implying that it captures features that distinguish responding from non-responding genes and cells. With the upcoming availability of large-scale atlases of organs in a healthy state, we envision scGen to become a tool for experimental design through in silico screening of perturbation response in the context of disease and drug treatment. Main Single-cell transcriptomics has become an established tool for the unbiased profiling of complex and heterogeneous systems 1 , 2 . The generated data sets are typically used for explaining phenotypes through cellular composition and dynamics. Of particular interest are the dynamics of single cells in response to perturbations, be it to dose 3 , treatment 4 , 5 or the knockout of genes 6 , 7 , 8 . Although advances in single-cell differential expression analysis 9 , 10 have enabled the identification of genes associated with a perturbation, generative modeling of perturbation response takes a step further in that it enables the generation of data in silico. The ability to generate data that cover phenomena not seen during training is particularly challenging and referred to as out-of-sample prediction. 
While dynamic mechanistic models have been suggested for predicting low-dimensional quantities that characterize cellular response 11 , 12 , such as a scalar measure of proliferation, they face fundamental problems. These models cannot easily be formulated in a data-driven way and require temporal resolution of the experimental data. Due to the typically small number of time points available, parameters are often hard to identify. Resorting to linear statistical models for modeling perturbation response 6 , 8 leads to low predictive power for the complicated non-linear effects that single-cell data display. In contrast, neural network models do not face these limits. Recently, such models have been suggested for the analysis of single-cell RNA sequencing (scRNA-seq) data 13 , 14 , 15 , 16 , 17 . In particular, generative adversarial networks (GANs) have been proposed for simulating single-cell differentiation through so-called latent space interpolation 16 . While providing an interesting alternative to established pseudotemporal ordering algorithms 18 , this analysis does not demonstrate the capability of GANs for out-of-sample prediction. The use of GANs for the harder task of out-of-sample prediction is hindered by fundamental difficulties: (1) GANs are hard to train for structured high-dimensional data, leading to high-variance predictions with large errors in extrapolation, and (2) GANs do not allow for the direct mapping of a gene expression vector x on a latent space vector z , making it difficult or impossible to generate a cell with a set of desired properties. In addition, for structured data, GANs have not yet shown advantages over the simpler variational autoencoders (VAE) 19 ( Methods ). To overcome these problems, we built scGen, which is based on a VAE combined with vector arithmetics, with an architecture adapted for scRNA-seq data. 
scGen enables predictions of dose and infection response of cells for phenomena absent from training data across cell types, studies and species. In a broad benchmark, it outperforms other potential modeling approaches, such as linear methods, conditional variational autoencoders (CVAE) 20 and style transfer GANs. The benchmark of several generative neural network models should present a valuable resource for the community showing opportunities and limitations for such models when applied to scRNA-seq data. scGen is based on Tensorflow 21 and on the single-cell analysis toolkit Scanpy 22 . Results scGen accurately predicts single-cell perturbation response out-of-sample High-dimensional scRNA-seq data is typically assumed to be well parametrized by a low-dimensional manifold arising from the constraints of the underlying gene regulatory networks. Current analysis algorithms mostly focus on characterizing the manifold using graph-based techniques 23 , 24 in the space spanned by a few principal components. More recently, the manifold has been modeled using neural networks 13 , 14 , 15 , 16 , 17 . As in other application fields 25 , 26 , in the latent spaces of these models, the manifolds display astonishingly simple properties, such as approximately linear axes of variation for latent variables explaining a major part of the variability in the data. Hence, linear extrapolations of the low-dimensional manifold could in principle capture variability related to perturbation and other covariates (Supplementary Note 1 and Supplemental Fig. 1 ). Let every cell i with expression profile x i be characterized by a variable p i , which represents a discrete attribute across the whole manifold, such as perturbation, species or batch. To start with, we assume only two conditions 0 (unperturbed) and 1 (perturbed). 
Let us further consider the conditional distribution P(x_i | z_i, p_i), which assumes that each cell x_i comes from a low-dimensional representation z_i in condition p_i. We use a VAE to model P(x_i | z_i, p_i) in its dependence on z_i and vector arithmetics in the latent space of the VAE to model the dependence on p_i (Fig. 1 ). Fig. 1: scGen, a method to predict single-cell perturbation response. Given a set of observed cell types in control and stimulation, we aim to predict the perturbation response of a new cell type A (blue) by training a model that learns to generalize the response of the cells in the training set. Within scGen, the model is a variational autoencoder, and the predictions are obtained using vector arithmetics in the latent space of the autoencoder. Specifically, we project gene expression measurements into a latent space using an encoder network and obtain a vector δ that represents the difference between perturbed and unperturbed cells from the training set in latent space. Using δ , unperturbed cells of type A are linearly extrapolated in latent space. The decoder network then maps the linear latent space predictions to highly non-linear predictions in gene expression space. Equipped with this, consider a typical extrapolation problem. Assume cell type A exists in the training data only in the unperturbed ( p = 0) condition. From that, we predict the latent representation of perturbed cells ( p = 1) of cell type A using ẑ_{i,A,p=1} = z_{i,A,p=0} + δ, where z_{i,A,p=0} and ẑ_{i,A,p=1} denote the latent representations of cells with cell type A in conditions p = 0 and p = 1, respectively, and δ is the difference vector of means between cells in the training set in conditions 0 and 1 ( Methods ).
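The latent-space arithmetic just described can be sketched in plain Python. This is a toy illustration of the extrapolation step only: δ is the difference of latent means between perturbed and unperturbed training cells, and held-out unperturbed cells are shifted by δ before decoding (a trained encoder/decoder pair, omitted here, would map between gene expression space and these latent vectors):

```python
# Sketch of scGen-style latent vector arithmetic on toy 2-D latent
# codes. In the real model these vectors come from a VAE encoder and
# the shifted codes are passed through the decoder.

def mean_vec(vectors):
    return [sum(dim) / len(vectors) for dim in zip(*vectors)]

def perturbation_delta(z_perturbed, z_control):
    mu_p, mu_c = mean_vec(z_perturbed), mean_vec(z_control)
    return [p - c for p, c in zip(mu_p, mu_c)]

def predict_perturbed(z_cells, delta):
    return [[zi + di for zi, di in zip(z, delta)] for z in z_cells]

z_ctrl_train = [[0.0, 1.0], [2.0, 1.0]]   # unperturbed training cells
z_stim_train = [[1.0, 3.0], [3.0, 3.0]]   # perturbed training cells
delta = perturbation_delta(z_stim_train, z_ctrl_train)   # [1.0, 2.0]
print(predict_perturbed([[5.0, 5.0]], delta))            # [[6.0, 7.0]]
```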
From the latent space, scGen maps predicted cells to high-dimensional gene expression space using the generator network estimated while training the VAE. To demonstrate the performance of scGen, we apply it to published human peripheral blood mononuclear cells (PBMCs) stimulated with interferon (IFN-β) 3 ( Methods ). As a first test, we study the predictions for stimulated CD4-T cells that are held out during training (Fig. 2a ). Compared with the real data, the prediction of mean expression by scGen correlates well with the ground truth across all genes (Fig. 2b ), in particular, those strongly responding to IFN-β and hence most differentially expressed (labeled genes in Fig. 2b and inset ‘top 100 DEGs’). To evaluate generality, we trained six other models holding out each of the six major cell types present in the study. Figure 2c shows that our model accurately predicts all other cell types (average R² = 0.948 and R² = 0.936 for all and the top 100 differentially expressed genes (DEGs), respectively). Moreover, the distribution of the strongest regulated IFN-β response gene ISG15 as predicted by scGen not only provides a good estimate for the mean but also predicts the full distribution well (Fig. 2d , all genes in Supplementary Fig. 2a ). Fig. 2: scGen accurately predicts single-cell perturbation response out-of-sample. a , UMAP visualization 37 of the distributions of conditions, cell type and data split for the prediction of IFN-β stimulated CD4-T cells from PBMCs in Kang et al. 3 ( n = 18,868). b , Mean gene expression of 6,998 genes between scGen predicted and real stimulated CD4-T cells together with the top five upregulated DEGs (R² denotes squared Pearson correlation between ground truth and predicted values). c , Comparison of R² values for mean gene expression between real and predicted cells for the seven different cell types in the study.
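The R² used in these comparisons is the squared Pearson correlation between mean predicted and mean real expression across genes. A pure-Python sketch of the metric (the example vectors are invented):

```python
# Squared Pearson correlation between two equal-length vectors, as used
# to compare mean predicted versus mean real gene expression.
from math import sqrt

def squared_pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return (cov / (sx * sy)) ** 2

real = [0.0, 1.0, 2.0, 3.0]   # mean expression per gene, ground truth
pred = [0.1, 0.9, 2.2, 2.9]   # mean expression per gene, predicted
print(squared_pearson(real, pred))
```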
Center values show the mean of R² values estimated using n = 100 random subsamplings for each cell type and error bars depict s.d. d , Distribution of ISG15 : the top uniform response gene to IFN-β 32 between control ( n = 2,437), predicted ( n = 2,437) and real stimulated ( n = 3,127) cells of scGen when compared with other potential prediction models. Vertical axis: expression distribution for ISG15 . Horizontal axis: control, real and predicted distribution by different models. e , Similar comparison of R² values to predict unseen CD4-T stimulated cells. Center values show the mean of R² values estimated using n = 100 random subsamplings for each cell type and error bars depict s.d. f , Dot plot for comparing control, real and predicted stimulation in predictions on the seven cell types from Kang et al. 3 . scGen outperforms alternative modeling approaches Aside from scGen, we studied further natural candidates for modeling a conditional distribution that is able to capture the perturbation response. We benchmark scGen against four of these candidates, including two generative neural networks and two linear models. The first of these models is the conditional variational autoencoder (CVAE) 20 (Supplementary Note 2 and Supplementary Fig. 3a ), which has recently been adapted to preprocessing, batch-correcting and differential testing of single-cell data 13 . However, it has not been shown to be a viable approach for out-of-sample predictions, even though, formally, it readily admits the generation of samples from different conditions. The second class of models are style transfer GANs (Supplementary Note 3 and Supplementary Fig. 3b ), which are commonly used for unsupervised image-to-image translation 27 , 28 . In our implementation, such a model is directly trained for the task of transferring cells from one condition to another. The adversarial training is highly flexible and does not require an assumption of linearity in a latent space.
In contrast to other propositions for mapping biological manifolds using GANs 29 , style transfer GANs are able to handle unpaired data, a necessity for their applicability to scRNA-seq data. We also tested ordinary GANs combined with vector arithmetics similar to Ghahramani et al. 16 . However, for the fundamental problems outlined above, we were not able to produce any meaningful out-of-sample predictions using this setup. In addition to the non-linear generative models, we tested simpler linear approaches based on vector arithmetics in gene expression space and the latent space of principal component analysis (PCA). Applying the competing models to the PBMC data set, we observe that all other models fail to predict the distribution of ISG15 (all genes in Supplementary Fig. 2 ), in stark contrast to the performance by scGen (Fig. 2d ). The predictions from the CVAE and the style transfer GAN are less accurate than those of scGen, and the linear models even yield incorrect negative values (Fig. 2e , Supplementary Fig. 2 and Supplementary Note 4 ). A likely reason why the CVAE fails to provide more accurate out-of-sample predictions is that it disentangles perturbation information from its latent space representation z in the bottleneck layer. Hence, the layer does not capture non-trivial patterns linking perturbation to cell type. A likely reason why the style transfer GAN is incapable of achieving the task is its attempt to match two high-dimensional distributions with much more complex models involved than in the case of scGen, which are notoriously more difficult to train. Some of these arguments can be better understood when inspecting the latent space distribution embeddings of the generative models. As the CVAE completely strips off all perturbation-variation, its latent space embedding does not allow perturbed cells to be distinguished from unperturbed cells (Supplementary Fig. 4a ).
In contrast to CVAE representations, the scGen (VAE) latent space representation captures information on both condition and cell type (Supplementary Fig. 4c ), reflecting that non-trivial patterns across condition and cell type variability are stored in the bottleneck layer. Hyperparameters (Supplementary Note 5 ) and architectures are reported in Supplementary Tables 1 (scGen), 2 (style transfer GAN) and 3 (CVAE). scGen predicts response shared among cell types and cell-type-specific response Depending on shared or individual receptors, signaling pathways and regulatory networks, the perturbation response of a group of cells may result in expression level changes that are shared across all cell types or unique to only some. Predicting both types of responses is essential for understanding mechanisms involved in disease progression as well as adequate drug dose predictions 30 , 31 . scGen is able to capture both types of responses after stimulation by IFN-β when any of the cell types in the data is held out during training and subsequently predicted (Fig. 2f ). For this, we use previously reported marker genes 32 of three different kinds: cell-type-specific markers independent of the perturbation such as CD79A for B cells, perturbation response specific genes like ISG15 , IFI6 and IFIT1 expressed in all cell types, and genes of cell type-specific responses to the perturbation such as APOBEC3A for DC cells. Across the seven different held out perturbed cell types present in the data of Kang et al. 3 , scGen consistently makes good predictions not only of unperturbed and shared perturbation effects but also of cell type-specific ones. These findings hold not only for these few selected marker genes but also for the top 10 most cell-type-specific responding genes and the top 500 DEGs between stimulated and control cells (Supplementary Fig. 5a,b and Supplementary Note 6 ).
The linear model, by contrast, fails to capture cell-type-specific differential expression patterns (Supplementary Fig. 5c,d ). scGen robustly predicts response of intestinal epithelial cells to infection We evaluate scGen’s predictive performance for two data sets from Haber et al. 4 ( Methods ) using the same network architecture as for the data of Kang et al. 3 . These data sets consist of intestinal epithelial cells after Salmonella or Heligmosomoides polygyrus ( H. poly ) infections, respectively. scGen shows good performance for early transit-amplifying (TA.early) cells after infection with H. poly and Salmonella (Fig. 3a,b ), predicting each condition with high precision ( \({\it{R}}_{{\mathrm{all}}\,{\mathrm{genes}}}^2 = 0.98\) and \({\it{R}}_{{\mathrm{all}}\,{\mathrm{genes}}}^2 = 0.98\) , respectively). Figure 3c,d depicts similar analyses for both data sets and all occurring cell types—as before, the predicted ones are held out during training—indicating that scGen’s prediction accuracy is robust across most cell types. Again, we show that these results generalize to the top 10 most cell-type-specific responding genes out of 500 DEGs (Supplementary Fig. 6 ). Fig. 3: scGen models infection response in two data sets of intestinal epithelial cells. a , b , Prediction of early transit-amplifying (TA.early) cells from two different small intestine data sets from Haber et al. 4 infected with Salmonella ( n = 5,010) and helminth H. poly ( n = 5,951) after 2 and 10 days, respectively. The mean gene expression of 7,000 genes between infected and predicted cells for different cell types shows how scGen transforms control to predicted perturbed cells in a way that the expression of the top five upregulated and downregulated differentially expressed genes is similar to real infected cells. R 2 denotes squared Pearson correlation between ground truth and predicted values.
c , d , Comparison of R 2 values for mean gene expression between real and predicted cells for all the cell types in two different data sets illustrates that scGen performs well for all cell types in different scenarios. Center values show the mean of R 2 values estimated using n = 100 random subsampling for each cell type and error bars depict s.d. Full size image To understand when scGen starts to fail at making meaningful predictions, we trained it on the PBMC data of Kang et al. 3 , but now with more than one cell type held out. This analysis shows that predictions by scGen are robust when holding out several dissimilar cell types (Supplementary Fig. 7a,b ) but start failing when training on data that only contains information about the response of one highly dissimilar cell type (see CD4-T predictions in Supplementary Fig. 7c ). Finally, similar to what we showed for the differentiation of epidermal cells, we can not only generate fully responding cell populations but also intermediary cell states between two conditions. Here, we do this for IFN-β stimulation and Salmonella infection (Supplementary Note 7 and Supplementary Fig. 8 ). scGen enables cross-study predictions To be applicable to broad cell atlases such as the Human Cell Atlas 33 , scGen needs to be robust against batch effects and generalize across different studies. To achieve this, we consider a scenario with two studies: study A, where cells have been observed in two biological conditions, for example, control and stimulation, and study B with the same setting as study A but only in control conditions. By jointly encoding the two data sets, scGen provides a model for predicting the perturbation for study B (Fig. 4 ) by estimating the study effect as the linear perturbation in the latent space. To demonstrate this, we use as study A the PBMC data set from Kang et al. 3 and as study B another PBMC study consisting of 2,623 cells that are available only in the control condition (Zheng et al. 34 ).
After training the model on data from study A, we use the trained model to predict how the PBMCs in study B would respond to stimulation with IFN-β. Fig. 4: scGen accurately predicts single-cell perturbation across different studies. a , scGen can be used to translate the effect of a stimulation trained in study A to how stimulated cells would look in study B, given a control sample set. b , UMAP visualization of cell types for control and predicted stimulated cells ( n = 5,246) for study B (Zheng et al. 34 ) in two conditions where ISG15 , the top IFN-β response gene, is only expressed in stimulated cells. Colour scale indicates expression level of ISG15 . c , Average expression between control and stimulated F-Mono cells from study A (upper left), control from study B and stimulated cells from study A (upper right) and control from study B and predicted stimulated cells for study B (lower right). Red points denote top five DEGs for F-Mono cells after stimulation in study A. R 2 denotes squared Pearson correlation. Shaded lines depict 95% confidence interval for the regression estimate. The regression line is shown in blue. d , Comparison of R 2 values highlighted in panel c for F-Mono and all other cell types. Center values show the mean of R 2 values estimated using n = 100 random subsampling for each cell type and error bars depict s.d. Full size image As a first sanity check, we show that ISG15 is also expressed in the prediction of stimulated cells based on the Zheng et al. 34 study (Fig. 4b ). This observation holds for all other differential genes associated with the stimulation, which we show for FCGR3A+ -Monocytes (F-Mono) (Fig. 4c ): the predicted stimulated F-Mono cells correlate more strongly with the control cells in their study than with stimulated cells from study A while still expressing DEGs known from study A. Similarly, predictions for other cell types yield a higher correlation than the direct comparison with study A (Fig. 4d ).
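The latent-space arithmetic behind this cross-study prediction can be sketched with synthetic latent codes. This is a minimal illustration only: a real application would operate on scGen's encoder output, and the purely linear shift used below is an idealized best case, not real data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic latent codes: study A observed in both conditions,
# study B only in the control condition.
z_a_ctrl = rng.normal(0.0, 1.0, size=(200, 10))
z_a_stim = z_a_ctrl + 3.0  # idealized: the perturbation is a pure linear shift
z_b_ctrl = rng.normal(0.5, 1.0, size=(150, 10))

def estimate_delta(z_stim, z_ctrl):
    # delta = avg(z_condition=1) - avg(z_condition=0), as in the Methods
    # ('delta vector estimation'); class balancing is omitted here.
    return z_stim.mean(axis=0) - z_ctrl.mean(axis=0)

delta = estimate_delta(z_a_stim, z_a_ctrl)

# Predicted stimulated cells for study B: shift its control cells by delta,
# then (in the full model) decode back to gene expression space.
z_b_stim_pred = z_b_ctrl + delta
```

Because the toy perturbation is an exact shift of 3.0 in every latent dimension, the estimated delta recovers it exactly; on real latent codes the shift is only approximately linear, which is what the held-out evaluations in the text quantify.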
scGen predicts single-cell perturbation across species In addition to learning the variation between two conditions, for example, health and disease within a species, scGen can be used to predict across species. We trained a model on a scRNA-seq data set by Hagai et al. 5 consisting of bone marrow-derived mononuclear phagocytes from mouse, rat, rabbit and pig perturbed with lipopolysaccharide (LPS) for six hours. As before, we held out the LPS perturbed rat cells from the training data (Fig. 5a ). Fig. 5: scGen predicts perturbation response across different species. a , Prediction of unseen LPS perturbed rat phagocytes on control and stimulated scRNA-seq from mouse, rabbit and pig by Hagai et al. 5 ( n = 77,642). b , Mean gene expression of 6,619 one-to-one orthologs between species for predicted LPS perturbed rat cells plotted against real LPS perturbed cells, where highlighted points represent the top five DEGs after LPS stimulation in the real data. R 2 denotes squared Pearson correlation between ground truth and predicted values. c , Dot plot of top 10 DEGs after LPS stimulation in each species, with numbers indicating how many species have those responsive genes among their top 10 DEGs. Full size image In contrast to previous scenarios, two global axes of variation now exist in the latent space, associated with species and stimulation. Based on this, we have two latent difference vectors: δ LPS , which encodes the variation between control and LPS cells, and δ species , which accounts for differences between species. Next, we predict LPS rat cells using \({\it{z}}_{{{i,\mathrm{rat,LPS}}}} = \frac{1}{2}\left( {z_{i,\mathrm{mouse,LPS}} + \delta _{\mathrm{species}} + {\it{z}}_{{{i,\mathrm{rat,control}}}} + \delta _{{\mathrm{LPS}}}} \right)\) (Fig. 5b ). This equation takes an average of the two alternative ways of reaching LPS perturbed rat cells (Fig. 5a ).
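The averaging of the two latent paths in the equation above can be checked numerically. This is a toy sketch: random vectors stand in for latent codes, and the internally consistent geometry constructed below is an assumption for illustration, not something guaranteed by real data.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 8

delta_lps = rng.normal(size=d)      # control -> LPS axis of variation
delta_species = rng.normal(size=d)  # mouse -> rat axis of variation

z_mouse_lps = rng.normal(size=d)
# Construct a rat control cell consistent with both axes (toy assumption):
z_rat_ctrl = z_mouse_lps - delta_lps + delta_species

# Two alternative paths to an LPS-perturbed rat cell:
path_via_species = z_mouse_lps + delta_species  # cross species, stay in LPS
path_via_lps = z_rat_ctrl + delta_lps           # stay in rat, cross to LPS

# scGen averages the two paths, as in the equation in the text:
z_rat_lps = 0.5 * (path_via_species + path_via_lps)
```

When the two axes are exactly consistent, as constructed here, both paths coincide and the average is redundant; on real latent codes the paths differ slightly, and averaging them pools the two estimates.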
All other predictions along the major linear axes of variation also yield plausible results for stimulated rat cells (Supplementary Fig. 9 ). In addition to the species-conserved response of a few upregulated genes, such as Ccl3 and Ccl4 , cells also display species-specific responses. For example, Il1a is highly upregulated in all species except rat. Strikingly, scGen correctly identifies rat cells as non-responding for this gene: only the fraction of cells expressing Il1a increases, at a low expression level (Fig. 5c ). Based on these early demonstrations, we hope to predict cellular responses to treatment in humans based on data from untreated humans and treated animal models. Discussion By adequately encoding the original expression space in a latent space, scGen achieves simple near-to-linear mappings for highly non-linear sources of variation in the original data, which explain a large portion of the variability in the data associated with, for instance, perturbation, species or batch. This allows the use of scGen in several contexts, including prediction of perturbation responses for unseen phenomena across cell types, studies and species, and interpolation of cells between conditions. Moreover, using the cell type labels from studies, scGen is able to correct for batch effects, performing on par with state-of-the-art methods (Supplementary Note 8 and Supplementary Figs. 10–12 ). While we showed proof-of-concept for in silico predictions of cell type and species-specific cellular response, in the present work, scGen has been trained on relatively small data sets, which only reflect subsets of biological and transcriptional variability. Although we demonstrated the predictive power of scGen in these settings, a trained model cannot be expected to be predictive beyond the domain of the training data.
To gain confidence in predictions, one needs to make realistic estimates for prediction errors by holding out parts of the data with known ground truth that are representative of the task. Such a procedure arises naturally when applying scGen in an alternating iteration of experiments, retraining on new data, and in silico prediction. By design, such strategies are expected to yield highly performing models for specific systems and perturbations of interest. It is evident that such strategies could readily exploit the upcoming availability of large-scale atlases of organs in a healthy state, such as the Human Cell Atlas 33 . We have demonstrated that scGen is able to learn cell-type- and species-specific responses. To enable this, the model needs to capture features that distinguish weakly from strongly responding genes and cells. Building biological interpretations of these features, for instance, along the lines of Ghahramani et al. 16 or Way and Greene 35 , could help in understanding the differences between cells that respond to certain drugs and cells that do not respond, which is often crucial for understanding patient response to drugs 36 . Methods Variational autoencoders A variational autoencoder is a neural network consisting of an encoder and a decoder, similar to classical autoencoders. Unlike classical autoencoders, however, VAEs are able to generate new data points. The mathematics underlying VAEs also differs from that of classical autoencoders.
The difference is that the model maximizes the likelihood of each sample x i (more accurately, it maximizes the log evidence, the sum of log likelihoods of all x i ) in the training set under a generative process as formulated in equation ( 1 ): $$P\left( {x_i{\mathrm{|}}\theta } \right) = \mathop {\smallint }\nolimits P\left( {x_i{\mathrm{|}}z_i;\theta } \right)P\left( {z_i{\mathrm{|}}\theta } \right){\mathrm{d}}z_i$$ (1) where θ is a model parameter that in our model corresponds to a neural network with its learnable parameters and z i is a latent variable. The key idea of a VAE is to sample latent variables z i that are likely to produce x i and to use those to compute P ( x i | θ ) (ref. 38 ). We approximate the posterior distribution \(P\left( {z_i{\mathrm{|}}x_i,\theta } \right)\) using the variational distribution \(Q\left( {z_i{\mathrm{|}}x_i,\phi } \right)\) , which is modeled by a neural network with parameter ϕ , called the inference network (the encoder). Next, we need a distance measure between the true posterior \(P\left( {z_i{\mathrm{|}}x_i,\theta } \right)\) and the variational distribution.
To compute such a distance, we use the Kullback–Leibler (KL) divergence between \(Q\left( {z_i{\mathrm{|}}x_i,\phi } \right)\) and \(P\left( {z_i{\mathrm{|}}x_i,\theta } \right)\) , which yields: $$\begin{array}{l}{\mathrm{KL}}\left(Q\left( z_i|x_i,\phi \right)||P\left(z_i|x_i,\theta \right)\right)\\ = E_{Q\left(z_i|x_i,\phi \right)}[{\mathrm{\log}}{Q}\left( z_i|x_i,\phi\right)-{\mathrm{\log }}P\left(z_i|x_i,\theta\right)]\end{array}$$ (2) Now, we can derive both \(P\left( {x_{i}|\theta } \right)\) and P ( x i | z i , θ ) by applying Bayes’ rule to P ( z i | x i , θ ), which results in: $$\begin{array}{*{20}{l}}{\mathrm{KL}\left( {Q\left( {z_i|x_i,\phi } \right)\parallel P\left( {z_i|x_i,\theta } \right)} \right)} & = & {E_{Q\left( {z_i|x_i,\phi } \right)}\left[ {\mathrm{log}Q\left( {z_i|x_i,\phi } \right) - {\mathrm{log}}P\left( {z_i|\theta } \right)} \right.} \\ & & {\left. { - \mathrm{log}P\left( {x_i|z_i,\theta } \right)} \right] + {\mathrm{log}}P\left( {x_i|\theta } \right)}\end{array}$$ (3) Finally, by rearranging some terms and exploiting the definition of KL divergence we have: $$\begin{array}{*{20}{l}} {\mathrm{log}P\left({x_i|\theta } \right) - {\mathrm{KL}}\left( {Q\left( {z_i|x_i,\phi } \right)} \parallel P\left( {z_i|x_i,\theta } \right)\right)} \hfill \\ {\quad = E_{Q\left( {z_i|x_i,\phi } \right)}\left[ {\mathrm{log}P\left( {x_i|z_i,\theta } \right)} \right] - {\mathrm{KL}}\left[ {Q\left( {z_i|x_i,\phi } \right)\parallel P\left( {z_i|\theta } \right)} \right]} \hfill \end{array}$$ (4) On the left-hand side of equation ( 4 ), we have the log-likelihood of the data, denoted by log P ( x i | θ ), and an error term that depends on the capacity of the model. This error term vanishes if Q is expressive enough to match P ; assuming a high capacity model for \(Q\left( {z_i|x_i,\phi } \right)\) , this term will be zero 38 .
Therefore, we will directly optimize \(\mathrm{log}P\left( x_i|\theta \right)\) : $$E_{Q\left( {z{_i}|x_i,\phi } \right)}[\mathrm{log}P\left( {x_i|z_{i},\theta } \right)] - {\mathrm{KL}}[Q\left( {z{_i}|x_i,\phi } \right)||P\left( {z_i|\theta } \right)]$$ (5) Equation ( 4 ) and ( 5 ) are also known as the evidence lower bound (ELBO). To maximize the equation ( 5 ), we choose the variational distribution \(Q\left( {z_i|x_i,\phi } \right)\) to be a multivariate \({\mathrm{Gaussian}}\,Q(z_i|x_i) = N(z_i;\mu _{\phi}(x_{i}),{\mathrm{\Sigma }}_\phi (x_{i} ))\) where \(\mu _{\phi}\left( {x_i} \right)\) and \({\mathrm{\Sigma }}_\phi \left( {x_i} \right)\) are implemented with the encoder neural network and \({\mathrm{\Sigma }}_\phi \left( {x_i} \right)\) is constrained to be a diagonal matrix. The KL term in equation ( 5 ) can be computed analytically since both prior ( \(P\left( {z_i{\mathrm{|}}\theta } \right)\) ) and posterior ( Q ( z i | x i , ϕ ) are multivariate Gaussian distributions. The integration for the first term in equation ( 5 ) has no closed-form and we need Monte Carlo integration to estimate it. We can sample Q( z i | x i , ϕ ) L times and directly use stochastic gradient descent to optimize equation ( 6 ) as the loss function for every training point x i from data set D : $${{\mathrm{Loss}}}\left( {x_i} \right) = \frac{1}{L}\mathop {\sum }\limits_{l = 1}^L {\mathrm{log}}P\left( {x_i|z_{{i,l}},\theta } \right) - \alpha {\mathrm{KL}}\left[Q\left( {z_i{\mathrm{|}}x_i,\phi } \right)||P\left( {z_i{\mathrm{|}}\theta } \right)\right]$$ (6) where the hyperparameter ( α ) controls how much the KL divergence loss contributes to learning. However, the first term in equation ( 6 ) only depends on the parameters of P , without reference to the parameters of variational distribution Q . Therefore, it has no gradient with respect to ϕ to be backpropagated. To address this, the reparameterization trick 19 has been proposed. 
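A minimal numpy sketch of the two ingredients that make equation ( 6 ) trainable, assuming a Gaussian posterior with diagonal covariance: the analytically computed KL term and reparameterized sampling. A real implementation would compute these inside the encoder network of an autodiff framework; this sketch only illustrates the arithmetic.

```python
import numpy as np

rng = np.random.default_rng(2)

def reparameterize(mu, log_var):
    # z = mu + sigma * eps with eps ~ N(0, I); because the randomness is
    # isolated in eps, gradients can flow through mu and log_var.
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_to_standard_normal(mu, log_var):
    # Analytic KL( N(mu, diag(sigma^2)) || N(0, I) ), summed over latent
    # dimensions; this is the closed-form KL term of equation (6).
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var, axis=-1)
```

With `mu = 0` and `log_var = 0` the posterior equals the prior and the KL term is exactly zero; any deviation of the encoder output from the standard normal is penalized, weighted by the hyperparameter α in equation ( 6 ).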
This trick works by first sampling from ϵ ~ N (0, I) and then computing \(z_i = \mu _{\phi}\left( {x_i} \right) + {\mathrm{\Sigma }}_\phi ^{\frac{1}{2}}\left( {x_i} \right) \times {\it{\epsilon }}\) . Thus, we can use gradient-based algorithms to optimize equation ( 6 ). δ vector estimation To estimate δ , first, we extracted all cells for each condition. Next, for each cell type, we upsampled the cell type sizes to be equal to the maximum cell type size for that condition. To further remove the population size bias, we randomly downsampled the condition with the higher sample size to match the sample size of the other condition. Finally, we estimated the difference vector by calculating δ = avg( z condition=1 ) − avg( z condition=0 ), where z condition=0 and z condition=1 denote the latent representations of cells in each condition, respectively. Data sets and preprocessing Kang et al. 3 included two groups of control and stimulated PBMCs. We annotated cell types by extracting an average of the top 20 cluster genes from each of eight identified cell types in PBMCs 34 . Next, the Spearman correlation between each single cell and all eight cluster averages was calculated, and each cell was assigned to the cell type for which it had a maximum correlation. After identifying cell types, megakaryocyte cells were removed from the data set due to the high uncertainty of the assigned labels. Next, the data set was filtered for cells with a minimum of 500 expressed genes and genes that were expressed in at least five cells. Moreover, we normalized counts per cell, and the top 6,998 highly variable genes were selected. Finally, we log-transformed the data to facilitate a smoother training procedure. The final data include 18,868 cells. Count matrices are available with accession number GSE96583 . The Haber et al. 4 data set contained epithelial cell responses to pathogen infection (accession number GSE92332 ).
In this data set, the responses of intestinal epithelial cells to Salmonella and the parasitic helminth H. poly were investigated. These data included three different conditions: 1,770 Salmonella -infected cells, 2,711 cells 10 d after H. poly infection and a group of 3,240 control cells. Each set of data was normalized per cell and log-transformed, and the top 7,000 highly variable genes were selected. The PBMC data set from Zheng et al. 34 was obtained from . After filtering cells, the data were merged with filtered PBMCs from Kang et al. 3 . The megakaryocyte cells were removed from the smaller data set. Next, the data were normalized, and we selected the top 7,000 highly variable genes. The merged data set was log-transformed and cells from Kang et al. 3 ( n = 16,893) were used for training the model. The remaining 2,623 cells from Zheng et al. 34 were used for the prediction. Pancreatic data sets ( n = 14,693) were downloaded from ftp://ngs.sanger.ac.uk/production/teichmann/BBKNN/objects-pancreas.zip . Comparisons to other batch correction methods were performed as described previously 39 with 50 principal components. The data were already preprocessed and directly used for training the model. Mouse cell atlas data ( n = 114,600) were obtained from ftp://ngs.sanger.ac.uk/production/teichmann/BBKNN/MouseAtlas.zip . The data were already preprocessed and directly used for training the model. The LPS data set 5 (accession id E-MTAB-6754 ) was obtained from . The data were further filtered for cells, normalized and log-transformed. We used BiomaRt (v.84) to find ENSEMBL IDs of one-to-one orthologs between mouse and the other three species. In total 6,619 genes were selected from all species for training the model. The final data include 77,642 cells. Statistics All the differential tests to extract DEGs were performed using Scanpy’s rank_genes_groups function with Wilcoxon as the method parameter.
Error bars were computed by randomly resampling 80% of the data with replacement 100 times and recomputing Pearson R 2 for each resampled data set. The interval represents the mean of the R 2 values plus/minus the standard deviation of those 100 R 2 values. We used the mean of the 100 R 2 values for the magnitude of each bar. All the R 2 values were calculated by squaring the rvalue output of the scipy.stats.linregress function and denote squared Pearson correlation. Evaluation Silhouette width We calculated the silhouette width based on the first 50 PCs of the corrected data, or the latent space of the algorithm if it did not return corrected data. The silhouette coefficient for cell i is defined as: \(s(i) = \frac{b(i) - a(i)}{\mathrm{max}\{a(i),\,b(i)\}}\) , where a ( i ) and b ( i ) indicate the mean intra-cluster distance and the mean nearest-cluster distance for sample i , respectively. Instead of cluster labels, batch labels can be used to assess batch correction methods. We used the silhouette_score function from scikit-learn 40 to calculate the average silhouette width over all samples. Cosine similarity The cosine_similarity function from scikit-learn was used to compute cosine similarity. This function computes the similarity as the normalized dot product of X and Y defined as: \({\mathrm{cosine\_similarity}}\left( {X,Y} \right) = \frac{{\langle X,Y\rangle }}{{\left\| X \right\|\left\| Y \right\|}}\) . Reporting Summary Further information on research design is available in the Nature Research Reporting Summary linked to this article. Data availability All of the data sets analyzed in this manuscript are public and published in other papers. We have referenced them in the manuscript and they are downloadable at . Code availability The software is available at .
The code to reproduce the results of the paper is also available at .
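The error-bar computation described in the Statistics section can be sketched as follows. This is a minimal numpy version; `np.corrcoef` stands in for the `rvalue` of `scipy.stats.linregress`, which it matches for this purpose.

```python
import numpy as np

def pearson_r2(x, y):
    # Squared Pearson correlation, equivalent to squaring the rvalue
    # returned by scipy.stats.linregress.
    r = np.corrcoef(x, y)[0, 1]
    return r * r

def bootstrap_r2(x, y, n_boot=100, frac=0.8, seed=0):
    # Resample 80% of the points with replacement 100 times and recompute
    # R^2 each time; report the mean (bar height) and s.d. (error bar).
    rng = np.random.default_rng(seed)
    n = int(frac * len(x))
    scores = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(x), size=n)
        scores.append(pearson_r2(x[idx], y[idx]))
    return np.mean(scores), np.std(scores)
```

For perfectly correlated inputs every resample yields R 2 close to 1 and a near-zero standard deviation; noisier predictions widen the error bar.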
Large-scale atlases of organs in a healthy state are soon going to be available, in particular, the Human Cell Atlas. This is a significant step toward better understanding cells, tissues and organs in a healthy state, and provides a reference when diagnosing, monitoring and treating disease. However, due to the sheer number of possible combinations of treatment and disease conditions, expanding these data to characterize disease and disease treatment in traditional life science laboratories is labor-intensive and costly, and therefore not scalable. Accurately modeling cellular response to perturbations (e.g., disease, compounds, genetic interventions) is a central goal of computational biology. Although models based on statistical and mechanistic approaches exist, no machine-learning-based solution viable for unobserved high-dimensional phenomena has yet been available. scGen is the first tool that predicts cellular response out-of-sample. This means that scGen, if trained on data that capture the effect of perturbations for a given system, is able to make reliable predictions for a different system. "For the first time, we have the opportunity to use data generated in one model system such as mouse and use the data to predict disease or therapy response in human patients," said Mohammad Lotfollahi, Ph.D. student (Helmholtz Zentrum München and Technische Universität München). scGen is a generative deep learning model that leverages ideas from image, sequence and language processing, and, for the first time, applies these ideas to model the behavior of a cell in silico. The next step for the team is to improve scGen toward a fully data-driven formulation, increasing its predictive power to enable the study of combinations of perturbations.
"We can now start optimizing scGen to answer more and more complex questions about diseases," said Alex Wolf, Team Leader, and Fabian Theis, Director of the Institute of Computational Biology and Chair of Mathematical Modeling of Biological Systems at Technische Universität München.
10.1038/s41592-019-0494-8
Chemistry
3-D printed structures that 'remember' their shapes
Qi Ge et al. Multimaterial 4D Printing with Tailorable Shape Memory Polymers, Scientific Reports (2016). DOI: 10.1038/srep31110 Journal information: Scientific Reports
http://dx.doi.org/10.1038/srep31110
https://phys.org/news/2016-08-d.html
Abstract We present a new 4D printing approach that can create high resolution (up to a few microns), multimaterial shape memory polymer (SMP) architectures. The approach is based on high resolution projection microstereolithography (PμSL) and uses a family of photo-curable methacrylate based copolymer networks. We designed the constituents and compositions to exhibit desired thermomechanical behavior (including rubbery modulus, glass transition temperature and failure strain, which exceeds 300% and is larger than that of any existing printable material) to enable controlled shape memory behavior. We used a high resolution, high contrast digital micro display to ensure high resolution photo-curing of methacrylate based SMPs, which require higher exposure energy than the more common acrylate based polymers. An automated material exchange process enables the manufacture of 3D composite architectures from multiple photo-curable SMPs. In order to understand the behavior of the 3D composite microarchitectures, we carry out high fidelity computational simulations of their complex nonlinear, time-dependent behavior and study important design considerations including local deformation, shape fixity and free recovery rate. Simulations are in good agreement with experiments for a series of single and multimaterial components and can be used to facilitate the design of SMP 3D structures. Introduction Three-dimensional (3D) printing technology allows the creation of complex geometries with precisely prescribed microarchitectures that enable new functionality or improved and even optimal performance. While 3D printing has largely emphasized manufacturing with a single material, recent advances in multimaterial printing enable the creation of heterogeneous structures or composites that have myriad scientific and technological applications 1 , 2 , 3 , 4 , 5 .
Commercial printing systems with these capabilities have been used in many innovative applications, but are limited because their development has largely proceeded with the objective of creating printed components with reliable mechanical properties, and applications have generally emphasized linear elastic behavior with small deformations, where the innovation arises from the sophisticated geometry. Independently, soft active materials (SAMs), an emerging class of materials capable of exhibiting large elastic deformation in response to environmental stimuli such as heat 6 , 7 , light 8 , 9 and electricity 10 , 11 , are enabling the creation of functional active components. SAMs, including shape memory polymers (SMPs), hydrogels and dielectric elastomers, have been used to fabricate biomedical devices 7 , 12 , 13 , wearable devices 14 , 15 , artificial muscles 10 , 11 and other smart products 16 , 17 , 18 . However, applications of SAMs are limited by current manufacturing approaches, which constrain active structures and devices to simple geometries, often created with a single material, and they have yet to broadly exploit the potential of tailored microarchitectures. This picture is changing as 3D printing and SAMs are being integrated. The most notable example is the recently developed “4D printing” technology 2 , 3 in which the printed 3D structures are able to actively transform their configurations over time in response to environmental stimuli. There are two types of SAMs mainly used to realize 4D printing: hydrogels that swell when solvent molecules diffuse into the polymer network, and shape memory polymers (SMPs) that are capable of fixing temporary shapes and recovering the permanent shape upon heating.
The examples of hydrogel-based 4D printing include complex self-evolving structures actuated by multilayer joints 5 , active valves made of thermally sensitive hydrogel 19 , pattern transformation realized by heat-shrinkable polymer 20 and biomimetic 4D printing achieved by anisotropic hydrogel composites with cellulose fibrils 21 . However, the low modulus of hydrogels, ranging from ∼ kPa to ∼ 100 kPa 19 , 21 , and the slow, solvent diffusion based response rates, on the time scale of tens of minutes, hours or even days 5 , 21 , 22 , make hydrogel based 4D printing unsuitable for structural and actuation applications. Compared to hydrogels, SMPs have higher modulus, ranging from ∼ MPa to ∼ GPa 7 , 23 , and faster response rates (on the scale of seconds to minutes, depending on actuation temperature) 24 , 25 . The examples of SMP based 4D printing include printed active composites where precisely prescribed SMP fibers were used to activate the complex shape change 2 , 3 , sequential self-folding structures where SMP hinges with different response rates were deliberately placed at different positions 4 and multi-shape active composites where two SMP fibers with different glass transition temperatures were used 26 . To date, SMP based 4D printing has mainly been realized with a commercial Polyjet 3D printer (Stratasys, Objet), which creates materials with properties ranging between rigid and elastomeric by mixing two base resins. The fact that users are not allowed to freely tune the thermomechanical properties beyond the realm of available resins prevents this 4D printing technology from advancing to a wider range of applications. For example, the capability of 4D printed actuation is limited as the printed digital materials break at 10–25% strain 27 ; the printed structures cannot be used in high temperature applications as the highest glass transition temperature of the available resins is only about ∼ 70 °C 4 .
In addition, this technology is not suitable for microscale applications, as the lateral printing resolution is limited to ∼ 200 μm, inherent to the Polyjet printing method 28 . In this paper we report a new approach that enables high resolution multimaterial 4D printing by printing highly tailorable SMPs on a projection microstereolithography (PμSL) based additive manufacturing system. We synthesize photo-curable methacrylate based copolymer networks using commercially-available materials. By tuning the material constituents and compositions, the flexibility of the methacrylate based copolymer networks enables highly tailorable SMP thermomechanical properties, including rubbery modulus (from ∼ MPa to ∼ 100 MPa), glass transition temperature (from ∼ −50 °C to ∼ 180 °C) and failure strain (up to ∼ 300%). Methacrylate based SMPs with different constituents or compositions form strong interface bonds with each other and enable fabrication of 3D structures made of multiple SMPs that can exhibit new functionality resulting from their dynamic thermomechanical properties. The PμSL based additive manufacturing system, with high lateral resolution up to ∼ 1 μm, exploits a digital micro-display device as a dynamic photo mask to dynamically generate and reconfigure light patterns, which then convert liquid monomer resin into solid via photo-polymerization in a layer-by-layer fashion 29 , 30 , 31 . A high resolution, high contrast digital micro-display device ensures high resolution structures made of methacrylate based SMPs, which require higher exposure energy than the acrylate based polymers that have been frequently used for 3D printing but do not have the shape memory effect. Multimaterial manufacturing is achieved via an automatic material exchanging mechanism integrated into the PμSL additive manufacturing system.
In addition, a high-fidelity computational tool based on an understanding of the shape memory behavior is used to facilitate the design of SMP 3D structures by simulating important design considerations, including local deformation, shape fixity and free recovery rate. We believe this novel approach will translate SMP based 4D printing into a wide variety of practical applications, including biomedical devices 12 , 13 , 32 , 33 , deployable aerospace structures 34 , 35 , shopping bags 36 , 37 and shape changing photovoltaic solar cells 38 , 39 .

Results

Multimaterial additive manufacturing system

We fabricate high resolution multimaterial shape memory structures on an additive manufacturing apparatus based on projection microstereolithography (PμSL) 29 , 30 , 31 . As shown schematically in Fig. 1a , a computer-aided design (CAD) model is first sliced into a series of closely spaced horizontal two-dimensional (2D) digital images. These 2D images are then transmitted to a digital micro display, which works as a dynamic photo-mask 30 . Ultraviolet (UV) light produced by a light emitting diode (LED) array is spatially modulated with the patterns of the corresponding 2D images and illuminated onto the surface of the photo-curable polymer solution. Once the material in the exposed area is solidified to form a layer, the substrate on which the fabricated structure rests is lowered by a translational stage, followed by projection of the next image to polymerize a new layer on top of the preceding one. This process proceeds iteratively until the entire structure is fabricated. In the current setup, the projection area is about 3.2 cm × 2.4 cm, resulting in a pixel size of ∼ 30 μm × 30 μm. The lateral resolution can be improved to as fine as ∼ 1 μm if a projection lens with high optical magnification is used 29 . A step-and-repeat method can be employed to extend the printing area without compromising lateral resolution 30 .
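The slice-project-lower loop described above can be sketched as a toy simulation. The voxel model, layer thickness and helper names below are illustrative assumptions for exposition only, not the authors' actual control code (which, per the Methods, is a custom LabVIEW program driving real hardware):

```python
import numpy as np

def slice_model(voxels):
    """Slice a 3D boolean voxel model (z, y, x) into a stack of 2D masks,
    mimicking how the CAD model is cut into closely spaced 2D images."""
    return [voxels[z] for z in range(voxels.shape[0])]

def print_part(voxels, layer_um=50.0):
    """Toy PuSL loop: 'project' each 2D mask to cure one layer,
    then lower the stage by one layer thickness."""
    cured = np.zeros_like(voxels)
    stage_mm = 0.0
    for z, mask in enumerate(slice_model(voxels)):
        cured[z] = mask              # UV exposure solidifies the masked area
        stage_mm -= layer_um / 1000  # stage drops so fresh resin coats the surface
    return cured, stage_mm

# Toy 'part': a 4-layer, 8x8-pixel block
model = np.zeros((4, 8, 8), dtype=bool)
model[:, 2:6, 2:6] = True
cured, stage_mm = print_part(model)
```

With 50 μm layers, printing the 4-layer toy part lowers the stage by 0.2 mm in total, and the cured voxels reproduce the sliced model exactly.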
Multimaterial fabrication is enabled by automating polymer solution exchange during the printing process. Many efforts 40 , 41 , 42 have been made to develop multimaterial fabrication systems by adding automated polymer solution exchanging mechanisms to the “top-down” fabrication system (shown schematically in Fig. 1a ), in which the modulated UV light is projected downwards onto the polymer resin. A multimaterial fabrication system based on the “bottom-up” approach, in which the depth of the transparent polymer solution containers is independent of the part height, helps to significantly reduce material contamination and improve material use efficiency 43 , but requires precise control of the oxygen concentration to separate the printed parts from the transparent polymer solution containers without damaging them 43 , 44 .

Figure 1 Schematics of the multimaterial additive manufacturing system. ( a ) A workflow illustrating the process of fabricating a multimaterial structure based on PμSL. ( b ) The photo-curable shape memory polymer network is constructed from a mono-functional monomer, benzyl methacrylate (BMA), as the linear chain builder (LCB) and multi-functional oligomers, poly(ethylene glycol) dimethacrylate (PEGDMA), bisphenol A ethoxylate dimethacrylate (BPA) and di(ethylene glycol) dimethacrylate (DEGDMA), as crosslinkers.

We fabricate shape memory structures using photo-curable methacrylate copolymers that form polymer networks via free radical photo-polymerization 45 , 46 , 47 .
To understand the thermomechanical properties and shape memory (SM) effects of the materials and structures, we prepared polymer resins using a mono-functional monomer, benzyl methacrylate (BMA), as the linear chain builder (LCB) and difunctional oligomers, poly(ethylene glycol) dimethacrylate (PEGDMA), bisphenol A ethoxylate dimethacrylate (BPA) and di(ethylene glycol) dimethacrylate (DEGDMA), as crosslinkers that connect the linear chains to form a cross-linked network (shown in Fig. 1b ). Details about polymer resin preparation can be found in Methods. Further selections of LCBs and crosslinkers are suggested by Safranski and Gall 48 .

Experimental Characterization

The photo-curable methacrylate networks provide high tailorability of the thermomechanical properties of printed SMPs. Among these properties, the glass transition temperature ( T g ), the rubbery modulus ( E r ) and the failure strain (ε f ) are the most critical for the design of active components, as they dictate the shape recovery temperature and rate, the constrained recovery stress, and the capability for shape change and/or actuation, respectively 24 , 48 , 49 , 50 , 51 . Figure 2a–c demonstrates that these thermomechanical properties can be tailored over wide ranges, while remaining printable, by either controlling the concentration of the crosslinker or using different crosslinkers. In Fig. 2a , for instance, for the copolymer network system consisting of BMA and the crosslinker PEGDMA with molecular weight of 550 g/mol (denoted B + P550), T g starts from ∼ 65 °C, where the copolymer network consists of pure BMA, and then decreases with increasing crosslinker concentration (see Supplementary Materials S1.1 and Fig. S1a ). The T g of the copolymer networks consisting of the crosslinker PEGDMA with molecular weight of 750 g/mol (denoted B + P750) and the crosslinker BPA with molecular weight of 1700 g/mol (denoted B + BPA) follows the same trend as the B + P550 copolymer networks (Fig.
S1b,c), while the T g of the copolymer network consisting of BMA and DEGDMA (denoted B + DEG, Fig. S1d) increases with increasing crosslinker concentration. The Couchman equation 52 can be used to guide material design toward a desired T g by mixing the LCB and crosslinker at prescribed ratios; in its simplified form, ln T g = M 1 ln T g,1 + M 2 ln T g,2 , where T g,1 and T g,2 are the glass transition temperatures of the respective pure components (in Kelvin) and M 1 and M 2 are the corresponding mass fractions. In Fig. 2a , using the current LCB monomer, BMA, and the crosslinkers PEGDMA, BPA and DEGDMA allows us to adjust T g from ∼ −50 °C to ∼ 180 °C, while more flexibility can be obtained by choosing different LCB monomers and crosslinkers 48 or even by mixing more than one LCB monomer and crosslinker to prepare the polymer resin 53 .

Figure 2 Experimental characterization of methacrylate SMP networks. Highly tailorable glass transition temperature ( a ), rubbery modulus ( b ) and failure strain ( c ) are controlled by either changing the LCB/crosslinker mixing ratio or using different crosslinkers. ( d ) The temperature effect on the failure strain of the SMP consisting of 90% BMA and 10% BPA. ( e ) The normalized exposure energy required to cure a thin layer varies with the crosslinker concentration as well as the molecular weight of the crosslinker. ( f ) Investigation of the interfacial bonding of a printed composite with two components arranged in series (inset).

Figure 2b shows that the rubbery modulus E r of the copolymer networks increases with increasing crosslinker concentration (see Supplementary Materials S1.2, Fig. S2 and Table S1 ), as expected from entropic elasticity 54 , E r = (3 ρRT )/ M c ; here, R is the gas constant, T is the absolute temperature, ρ is the polymer density and M c is the average molecular weight between crosslinks. The ratio ρ/ M c is the crosslinking density of the polymer network, which is affected by the crosslinker concentration as well as the molecular weight of the crosslinker.
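The two design rules just quoted, the simplified Couchman mixing rule for T g and the entropic-elasticity estimate E r = 3ρRT/M c, can be turned into a small calculator. This is an illustrative sketch: the simplified Couchman form assumes equal heat-capacity increments for the two components, and the example densities, molecular weights and mass fractions below are hypothetical, not values from the paper.

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def couchman_tg(tg1_C, tg2_C, m1):
    """Simplified Couchman rule (equal heat-capacity increments assumed):
    ln Tg = M1*ln(Tg1) + M2*ln(Tg2), temperatures in Kelvin."""
    m2 = 1.0 - m1
    tg1, tg2 = tg1_C + 273.15, tg2_C + 273.15
    return math.exp(m1 * math.log(tg1) + m2 * math.log(tg2)) - 273.15

def rubbery_modulus_Pa(rho_kg_m3, Mc_kg_mol, T_K):
    """Entropic elasticity estimate: E_r = 3*rho*R*T / M_c."""
    return 3.0 * rho_kg_m3 * R * T_K / Mc_kg_mol

# Hypothetical blend: LCB with Tg = 65 C, crosslinker with Tg = -40 C, 80/20 by mass
tg_mix = couchman_tg(65.0, -40.0, 0.8)

# Hypothetical network: rho = 1100 kg/m^3, Mc = 5 kg/mol, evaluated at 100 C
er_MPa = rubbery_modulus_Pa(1100.0, 5.0, 373.15) / 1e6
```

Halving M c (doubling the crosslinking density) doubles the predicted rubbery modulus, consistent with the trend in Fig. 2b.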
Comparing the four network systems, the B + DEG network has the highest rubbery modulus at the same mass fraction of crosslinker, as the lowest molecular weight of DEGDMA leads to the highest crosslinking density (see Supplementary Materials S1.3 and Table S2 ) and hence the highest E r . The effect of the crosslinker on the failure strain ε f is shown in Fig. 2c ; these results were obtained from uniaxial tensile tests at temperatures 30 °C above each sample’s T g , where the sample stays in the rubbery state, to eliminate the effect of viscoelasticity (see Supplementary Materials S1.2, Fig. S2 and Table S1 ). The results in Fig. 2c suggest that, in an SMP system consisting of the same LCB and crosslinker, a lower crosslinker concentration gives higher stretchability. Figure 2c also shows that a copolymer network formed with a higher molecular weight crosslinker has higher stretchability. For example, for the copolymer systems consisting of 10% crosslinker, the stretchability at 30 °C above the sample’s T g can be increased from ∼ 45% to ∼ 100% by increasing the crosslinker molecular weight from 242.3 g/mol (DEGDMA) to 1700 g/mol (BPA) (see Supplementary Materials S1.2, Fig. S2 and Table S1 ). Figure 2d shows the temperature effect on the failure strain of an SMP sample consisting of 90% BMA and 10% BPA (see Supplementary Materials S1.4 and Fig. S3 ). The stretchability of this copolymer network increases with decreasing stretching temperature and reaches a maximum of ∼ 330% at 40 °C, a temperature close to the peak of the loss modulus, which indicates the highest energy dissipation. A more stretchable network can be achieved by further reducing the concentration of the BPA crosslinker or by replacing BPA with a crosslinker of higher molecular weight. Not only does the chemical composition affect the thermomechanical properties of the printed SMP systems, it also affects the photo-polymerization kinetics that determine the build rate during manufacturing. As shown in Fig.
2e , at a given UV light intensity, less exposure energy (time) is required to cure a layer of the same thickness as the crosslinker concentration increases (see Supplementary Materials S1.5 and Fig. S4a ). This is mainly attributed to the reaction-diffusion-controlled termination during the polymerization of the methacrylate based copolymer system 55 , 56 , 57 . With more crosslinker, the crosslinking density of the polymer increases, which limits the propagation of free radicals that would otherwise reach each other and terminate the polymerization 57 . Figure 2e also shows that the copolymer network consisting of the lower molecular weight crosslinker (P550) requires less exposure energy (time) to polymerize a layer of the same thickness. This is primarily because the low molecular weight crosslinker (P550) contains more unreacted double bonds per unit mass than the high molecular weight crosslinker (P750) does. In addition, increasing the photo initiator concentration reduces the exposure energy (time) needed to cure a layer of the same thickness (see Supplementary Materials S1.5 and Fig. S4b ). Moreover, it should be noted that, under the same conditions, methacrylate based SMPs have comparatively lower reactivity 57 than acrylate based materials such as poly(ethylene glycol) diacrylate (PEGDA) and hexanediol diacrylate, which have been frequently used to print 3D structures 29 , 30 , 31 , 44 , 58 , 59 . This comparatively slow but conversion-dependent 57 photo-polymerization kinetics means that the methacrylate based SMPs require higher (longer) exposure energy (time) to cure a layer of the same thickness than the acrylate based materials do (see Supplementary Materials S1.5 and Fig. S4c ). Therefore, a high contrast digital micro display with moderate UV light intensity is needed to avoid any unwanted curing of unintended regions (details about the digital micro display are described in Materials).
In a printed component that consists of more than one material, the interfacial bonding between the materials significantly impacts the mechanical performance of the composite. In Fig. 2f , we investigated the interfacial bonding by uniaxially stretching a composite with two components arranged in series (Component A: 50% B + 50% P550 with T g = 31 °C; Component B: 90% B + 10% BPA with T g = 56 °C) at a temperature 30 °C higher than the T g of Component B, where both components are in their rubbery state (see Supplementary Materials S1.6 and Fig. S5 ). The fact that the composite breaks within Component A, which has the lower failure strain, rather than at the interface indicates a strong interfacial bond. The comparison of uniaxial tensile tests between the composite and the two components in Fig. 2f reveals that Components A and B form a strong covalently bonded interface through which the composite transfers stress completely between the two components. Generally speaking, this strong interfacial bonding forms between methacrylate based SMPs made of different compositions and constituents.

Shape Memory Behavior

Two key attributes of SMPs are their ability to fix a temporary programmed shape ( fixity ) and to subsequently recover the original shape upon activation by a stimulus ( recovery ). Figure 3a shows a typical temperature-strain-time shape memory (SM) cycle that we used to investigate the fixity and recovery of SMP samples synthesized from different LCBs and crosslinkers, resulting in different T g s. Figure 3b presents representative strain-time curves for an SMP strip sample made of 80% BMA + 20% P750. The SMP was first stretched to a target strain e max (20%) at a constant loading rate (0.001 s −1 ) at a programming temperature T D (63 °C), and then the temperature was decreased to T L ( T L = 25 °C) at a cooling rate of 2.5 °C/min. Once T L was reached, the specimen was held for 2 minutes and then the tensile force was removed.
In the free recovery step, the temperature was increased to a recovery temperature T R (in Fig. 3b , T R = 35 °C, 40 °C, 50 °C and 60 °C, respectively) at the same rate as cooling and subsequently stabilized for another 20 min (details about the SM behavior testing are presented in Supplementary Material S2.1 and Fig. S6 ).

Figure 3 SM behavior of the (meth)acrylate based copolymer SMP network. ( a ) The SM behavior was investigated by following a typical SM cycle: at Step I, a sample is deformed by e max at a programming temperature T D ; at Step II, the temperature is decreased from T D to T L while keeping the sample deformed by e max ; at Step III, after unloading, there is a deformation bounce back Δ e ; at Step IV, free recovery is performed by heating the sample to a recovery temperature T R . ( b ) Representative SMP strain-time curves obtained by stretching an SMP sample (80% BMA and 20% P750) at 63 °C, unloading at 25 °C and heating to 63 °C, 50 °C, 40 °C and 35 °C, respectively. ( c ) Shape fixity as a function of programming temperature. ( d ) Shape recovery time ( t 0.95 ) as a function of recovery temperature.

As shown in Fig. 3a,b , we use the small strain bounce back, Δ e , of the SMP after unloading to define the shape fixity, i.e., R f = ( e max − Δ e )/ e max . Figure 3c shows that the shape fixity is a function of the programming temperature T D (details about T D are listed in Supplementary Material Table S4 ): the SMP keeps a high shape fixity (>90%) when T D is above or near the SMP’s T g , and the shape fixity starts to drop dramatically when T D is 20 °C lower than the SMP’s T g .
The observation that the shape fixity is a function of T D agrees with a previous study 24 and can be simulated by the recently developed multi-branch model, which consists of an equilibrium branch corresponding to entropic elasticity and several thermoviscoelastic nonequilibrium branches representing the multiple relaxation processes of the polymer 24 , 60 (details about the multi-branch model are presented in Supplementary Material S2.2–2.3 ). The model predictions agree well with the experiments and provide an underlying understanding of the effect of the programming temperature T D on the shape fixity R f (details about the model predictions are presented in Supplementary Material S2.4 and Figure S9 ). When T D is above or near T g , an SMP has a high shape fixity R f , as the time required to relax all the nonequilibrium stresses is shorter than the time used for loading at T D and cooling to T L . When T D is decreased to a lower temperature, the shape fixity R f decreases, as the nonequilibrium stresses do not have sufficient time to relax. For example, in Fig. 3c , the simulation indicates that for the SMP of 90% BMA + 10% BPA with T g = 56 °C, R f decreases to nearly zero when T D is 25 °C, where the polymer chain mobility is significantly reduced and the unrelaxed nonequilibrium stresses are stored as elastic energy. Figure 3d indicates that the free shape recovery is a function of recovery temperature (details on how the free recovery curves were obtained are presented in S2.1). We define the shape recovery ratio as R r = 1 − e ( t )/( e max − Δ e ). We use the recovery time t 0.95 , corresponding to a 95% shape recovery ratio, to measure the shape recovery rate at different recovery temperatures T R 24 . Within the lab-scale experiment time (an hour), the SMP samples were able to realize 95% shape recovery only at recovery temperatures T R more than 10 °C above the SMP’s T g (the measured t 0.95 values are listed in Table S5).
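The fixity and recovery metrics defined above translate directly into code. The exponential strain history in the example is synthetic, chosen only to exercise the definitions; it is not data from the paper.

```python
import math

def shape_fixity(e_max, delta_e):
    """R_f = (e_max - delta_e)/e_max: fraction of the programmed strain
    retained after the bounce back on unloading."""
    return (e_max - delta_e) / e_max

def recovery_ratio(e_t, e_max, delta_e):
    """R_r = 1 - e(t)/(e_max - delta_e): fraction of the fixed strain
    recovered by time t during free recovery."""
    return 1.0 - e_t / (e_max - delta_e)

def t95(times, strains, e_max, delta_e):
    """First time at which the recovery ratio reaches 95%, or None
    if it is never reached within the recorded history."""
    for t, e in zip(times, strains):
        if recovery_ratio(e, e_max, delta_e) >= 0.95:
            return t
    return None

# Synthetic free-recovery trace: the fixed strain decays with a 100 s time constant
e_max, delta_e = 0.20, 0.01
times = list(range(0, 601))
strains = [(e_max - delta_e) * math.exp(-t / 100.0) for t in times]
```

For this trace, R f = 0.95 and the 95% recovery time is 100 s × ln 20 ≈ 300 s, i.e., t 0.95 is set entirely by the relaxation time of the decay.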
The multi-branch model is also used to simulate free recovery at different recovery temperatures T R and to predict the recovery time t 0.95 at each T R (see Supplementary Material S2.4 ). Overall, t 0.95 increases exponentially with decreasing T R , and for different SMPs at the same T R , the one with the higher T g requires a longer recovery time.

Three dimensional printed structures with a single SMP

Figure 4 shows the ability of our additive manufacturing system to create complex 3D structures that exhibit nonlinear, large-deformation SM behavior. As shown in Fig. 4a I, a spring was fabricated using an SMP with T g = 43 °C (80% B + 20% P750). We demonstrated the SM effect of the spring by stretching it into a straight strand at 60 °C. The straight strand configuration was fixed ( Fig. 4a II) after removing the external load at 20 °C. It recovered the original spring shape ( Fig. 4a II–V) after heating back to 60 °C. The complicated nonlinear large-deformation SM behavior of the spring was investigated by following the typical SM cycle for a representative segment of the spring (see Supplementary Material S3.3 ). Figure 4b shows the force-displacement relation when the spring was stretched at 60 °C. The spring becomes extremely stiff as it approaches its fully unfolded state. The finite element (FE) simulations present the deformation contours over the course of stretching the spring. Apart from the maximum deformation at the two ends, the highest principal engineering strain in the main body of the spring was in the range of 70–100%, which is about two to three times higher than the failure strain of previously reported SMPs used for 4D printing 2 , indicating the enhanced mechanical performance that is a necessity for active structures. In “4D printing”, where the fourth dimension is “time”, a key desire is to control the actuation rate.
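The exponential growth of t 0.95 with decreasing T R can be illustrated with a one-branch caricature of the multi-branch model: a single Maxwell branch whose relaxation time is shifted with temperature. The Arrhenius shift and all parameter values below are stand-in assumptions (the paper's model uses several nonequilibrium branches with more elaborate shift factors), chosen only to reproduce the qualitative trend of seconds-scale recovery well above T g and hour-scale recovery near it.

```python
import math

def tau_s(T_C, tau_ref_s=10.0, T_ref_C=60.0, Ea_over_R=2.0e4):
    """Arrhenius-shifted relaxation time of a single Maxwell branch:
    tau(T) = tau_ref * exp((Ea/R) * (1/T - 1/T_ref)), T in Kelvin.
    Parameter values are illustrative, not fitted to the paper's data."""
    T, T_ref = T_C + 273.15, T_ref_C + 273.15
    return tau_ref_s * math.exp(Ea_over_R * (1.0 / T - 1.0 / T_ref))

def t95_s(T_C):
    """One branch gives R_r(t) = 1 - exp(-t/tau), so the 95% recovery
    time is t_0.95 = tau * ln(20)."""
    return tau_s(T_C) * math.log(20.0)

fast = t95_s(60.0)  # near the reference temperature: seconds-scale recovery
slow = t95_s(35.0)  # 25 C cooler: recovery stretches past an hour
```

Even this minimal sketch shows why the samples in Fig. 3d only reached 95% recovery within an hour when T R was well above T g.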
For the printed shape memory structures, the actuation rate can be controlled by the recovery temperature 24 , 49 . Figure 4c shows the recovery ratio of the stretched spring at different recovery temperatures. Here, the recovery ratio of the SM spring is defined as R r = 1 − d ( t )/( d max − Δ d ), where d ( t ) is the end-to-end displacement during heating, d max is the maximum displacement before unloading at 20 °C and Δ d is the bounce-back displacement after unloading. As seen in Fig. 4c , R r is highly dependent on the recovery temperature. At 60 °C, the spring fully recovered its initial shape within 3 min. The recovery rate was significantly slower at 35 °C, where only about 10% recovery took place after 20 min of holding. The SM behavior of the 3D printed spring, including free recovery at different temperatures, can be simulated by implementing the multi-branch model in the FE software ABAQUS (Simulia, Providence, RI, USA). In Fig. 4c , the FE simulation reproduced the free recovery behavior at different recovery temperatures, indicating that the multi-branch model can be used to design complex 4D printed structures that are made of SMPs and exhibit complex nonlinear large-deformation thermomechanical behaviors. Details about the FE simulation of the SM behavior of this printed spring are described in Supplementary Material S3.3 and Supplementary Movies 1a–d .

Figure 4 3D printed shape memory structures with a single material. ( a ) A 3D printed shape memory spring (I) was programmed into a temporary straight strand configuration (II) and then recovered to its original shape upon heating (III–V). ( b ) Experimental characterization and FE simulation were performed to investigate the nonlinear deformation. ( c ) Experiments and simulations of the free recovery at different temperatures. ( d ) 3D printed SM Eiffel tower. ( e ) 3D printed SM stents.

Figure 4d shows a more refined and complex 3D printed structure, an Eiffel Tower standing on a Singapore dollar.
It was also printed with the SMP made of 80% B + 20% P750. Following the SM cycle, a temporary bent shape ( Fig. 4d I) was achieved by bending the Eiffel tower at 60 °C and removing the external load after cooling to 25 °C. After heating back to 60 °C, the bent Eiffel tower gradually recovered its original straight shape ( Fig. 4d , Supplementary Movie 2 ). Figure 4e demonstrates one of the most notable applications of SMPs: the cardiovascular stent. Although there have been various efforts directed at fabrication 12 , 32 , material and structural characterization 12 , 61 , 62 and simulation 32 , 63 , 64 , 65 , stent design has been limited primarily by fabrication methods, because the traditional manufacturing approaches needed to achieve the geometric complexity and resolution necessary for stents are usually complex and consist of multiple time-consuming steps 12 , 32 . Our additive manufacturing system offers the ability to fabricate high resolution 3D shape memory structures with hardly any restriction on geometric complexity. Figure 4e I shows an array of stents printed in one batch with different geometric parameters, including the height and diameter of a stent, the number of joints, the diameter of the ligaments and the angle between ligaments. In Fig. 4e II, a 3D printed stent was programmed into a temporary shape with a smaller diameter for minimally invasive surgery. After heating, the stent recovered its original shape with the larger diameter used to expand a narrowed artery. The finite element (FE) simulation shown in Fig. 4e II gives insight into the local large deformation that occurs in the temporary shape, a deformation that renders existing additive manufacturing systems and materials infeasible. The simulations provide a guide for material selection based on the understanding of the thermomechanical properties from Fig. 2 .
Three dimensional printed structures with multiple SMPs

Figure 5 demonstrates the printing of a 3D structure with multiple SMPs: multimaterial grippers that have the potential to function as microgrippers 13 that can grab objects, or as drug delivery devices 33 , 66 that can release objects. Figure 5a I shows a number of multimaterial grippers with different designs, including different sizes and numbers of digits (compare Fig. 5a II and III), multiple materials placed at different positions ( Fig. 5a III and IV) and different gripper mechanisms enabling different functionalities (the closed grippers in Fig. 5a III for grabbing objects and the open gripper in Fig. 5a V for releasing objects). In Fig. 5b , an as-printed closed (open) gripper was opened (closed) after programming, and the functionality of grabbing (releasing) objects was triggered upon heating. Figure 5c shows time-lapsed images of a gripper grabbing an object ( Supplementary Movie 3 ).

Figure 5 3D printed multimaterial grippers. ( a ) Multimaterial grippers were fabricated with different designs. ( b ) Demonstration of the transition between the as-printed shape and the temporary shape of multimaterial grippers. ( c ) Snapshots of the process of grabbing an object.

Compared to contemporary manufacturing approaches 13 , 33 , 66 , which essentially realize the folding or unfolding deformation of a gripper by creating strain mismatches between the layers of a thin multilayer hinge, with thickness from a few microns down to a few hundred nanometers, about 1000 times smaller than the size of the entire structure 13 , 33 , 66 , our approach is simple and straightforward, enabling stiffer grippers with thick joints made of SMPs.
Additionally, the capability of multimaterial fabrication enables us to print the tips of the grippers with materials different from the SMPs constructing the joints, and to design the stiffness of the tips based on that of the object to realize safe contact. Details about the material selections for the 3D printed grippers are described in Supplementary Material S4.1 . Finally, by controlling the dynamic properties of the different SMPs, as investigated in Fig. 3d , we are able to design the time-dependent sequential shape recovery 4 , 67 of a structure fabricated with multiple SMPs. In Fig. 6 , we demonstrate sequential shape recovery by printing a multimaterial flower whose inner and outer petals have different T g s (inner petals made of 90% B + 10% BPA with T g = 56 °C and outer petals made of 80% B + 20% P750 with T g = 43 °C). We first closed all the petals at 70 °C and then decreased the temperature to 20 °C. After removal of the external constraint, the flower was fixed in a temporary bud state ( Fig. 6a ) in which both the inner and outer petals stayed closed. The sequential recovery was triggered by raising the temperature first to 50 °C, at which only the outer petals opened. The inner petals, with T g of 56 °C, opened later, after the temperature was raised to 70 °C, completing the full shape recovery of the flower to its original blooming state ( Fig. 6c ). In Fig. 6d–f , an FE simulation (details can be found in Supplementary Material S4.2 ) predicts this flower blooming process, indicating that the multi-branch model can be used to design complex 4D printed structures that are made of multiple SMPs and exhibit sequential shape recovery.

Figure 6 The sequential recovery of a multimaterial flower. The multimaterial flower in its original shape ( c ) was first programmed into the temporary bud state at 20 °C ( a ). The outer petals opened first after heating to 50 °C ( b ), and then the flower fully bloomed at 70 °C ( c ).
( d )–( f ) represent the FE simulations of the corresponding flower blooming process.

Methods

Development of the multimaterial fabrication system

To develop a high resolution multimaterial system based on PμSL, a CEL5500 LED light engine purchased from Digital Light Innovation (Austin, Texas, USA) was used as the digital micro-display; a translation stage (LTS300) with 0.1 μm minimum achievable incremental movement and 2 μm backlash, purchased from Thorlabs (Newton, New Jersey, USA), was used as the elevator; and a stepper motor purchased from SparkFun Electronics (Niwot, Colorado, USA), controlled by an Arduino UNO board, works as the shaft of the automated material exchange system. A custom LabVIEW code was developed to control all the electronic components and automate the printing process.

Material synthesis

All the chemicals, including the methacrylate based monomers and crosslinkers, photo initiator and photo absorbers, were purchased from Sigma Aldrich (St. Louis, MO, USA) and used as received. Phenylbis(2,4,6-trimethylbenzoyl) phosphine oxide works as the photo initiator, mixed into the methacrylate based polymer solution at a concentration of 5% by weight. Sudan I and Rhodamine B work as photo absorbers, fixed at concentrations of 0.05% and 1% by weight, respectively.

Printing and post-processing

The designed 3D structures were first sliced into layers with a prescribed layer thickness (most structures here were sliced at 50 μm per layer). The custom LabVIEW code, with printing parameters specifying the layer thickness, light intensity and exposure time, sends the sliced 2D images in order to the digital micro display and controls the light irradiation of the digital micro display and the translational stage motion. Once the 3D structures were printed, they were rinsed with ethanol solution to remove the extra unreacted polymer solution.
After that, the 3D structures were placed into a UV oven (UVP, Ultraviolet Crosslinkers, Upland, CA, USA) for 10 min post-curing. Additional Information How to cite this article : Ge, Q. et al. Multimaterial 4D Printing with Tailorable Shape Memory Polymers. Sci. Rep. 6 , 31110; doi: 10.1038/srep31110 (2016).
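The resin recipe in the Methods above can be wrapped in a small batch calculator. Whether the stated weight percentages are taken relative to the total resin mass or to the monomer mass alone is not specified in the text, so the convention below (percent of total) is an assumption, as is the 100 g example batch.

```python
def resin_batch_g(total_g, crosslinker_frac,
                  initiator_frac=0.05, absorber_frac=0.0005):
    """Split a resin batch into component masses (grams).
    crosslinker_frac is the crosslinker share of the monomer portion
    (e.g. 0.2 for an '80% BMA + 20% crosslinker' network); the default
    photo initiator (5 wt%) and Sudan I photo absorber (0.05 wt%)
    concentrations follow the Methods, assumed here to be fractions
    of the total resin mass."""
    monomer_g = total_g * (1.0 - initiator_frac - absorber_frac)
    return {
        "BMA": monomer_g * (1.0 - crosslinker_frac),
        "crosslinker": monomer_g * crosslinker_frac,
        "photoinitiator": total_g * initiator_frac,
        "photoabsorber": total_g * absorber_frac,
    }

# Hypothetical 100 g batch of the 80% B + 20% P750 resin
batch = resin_batch_g(100.0, 0.20)
```

Swapping `absorber_frac` to 0.01 would correspond to the Rhodamine B concentration (1 wt%) stated in the Methods, under the same percent-of-total assumption.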
Engineers from MIT and Singapore University of Technology and Design (SUTD) are using light to print three-dimensional structures that "remember" their original shapes. Even after being stretched, twisted, and bent at extreme angles, the structures—from small coils and multimaterial flowers, to an inch-tall replica of the Eiffel tower—sprang back to their original forms within seconds of being heated to a certain temperature "sweet spot." For some structures, the researchers were able to print micron-scale features as small as the diameter of a human hair—dimensions that are at least one-tenth as big as what others have been able to achieve with printable shape-memory materials. The team's results were published earlier this month in the online journal Scientific Reports. Nicholas X. Fang, associate professor of mechanical engineering at MIT, says shape-memory polymers that can predictably morph in response to temperature can be useful for a number of applications, from soft actuators that turn solar panels toward the sun, to tiny drug capsules that open upon early signs of infection. "We ultimately want to use body temperature as a trigger," Fang says. "If we can design these polymers properly, we may be able to form a drug delivery device that will only release medicine at the sign of a fever." Fang's coauthors include former MIT-SUTD research fellow Qi "Kevin" Ge, now an assistant professor at SUTD; former MIT research associate Howon Lee, now an assistant professor at Rutgers University; and others from SUTD and Georgia Institute of Technology. Ge says the process of 3-D printing shape-memory materials can also be thought of as 4-D printing, as the structures are designed to change over the fourth dimension—time. "Our method not only enables 4-D printing at the micron-scale, but also suggests recipes to print shape-memory polymers that can be stretched 10 times larger than those printed by commercial 3-D printers," Ge says. 
"This will advance 4-D printing into a wide variety of practical applications, including biomedical devices, deployable aerospace structures, and shape-changing photovoltaic solar cells." Need for speed Fang and others have been exploring the use of soft, active materials as reliable, pliable tools. These new and emerging materials, which include shape-memory polymers, can stretch and deform dramatically in response to environmental stimuli such as heat, light, and electricity—properties that researchers have been investigating for use in biomedical devices, soft robotics, wearable sensors, and artificial muscles. A shape-memory Eiffel tower was 3-D printed using projection microstereolithography. It is shown recovering from being bent, after toughening on a heated Singapore dollar coin. Credit: Qi (Kevin) Ge Shape-memory polymers are particularly intriguing: These materials can switch between two states—a harder, low-temperature, amorphous state, and a soft, high-temperature, rubbery state. The bent and stretched shapes can be "frozen" at room temperature, and when heated the materials will "remember" and snap back to their original sturdy form. To fabricate shape-memory structures, some researchers have looked to 3-D printing, as the technology allows them to custom-design structures with relatively fine detail. However, using conventional 3-D printers, researchers have only been able to design structures with details no smaller than a few millimeters. Fang says this size restriction also limits how fast the material can recover its original shape. "The reality is that, if you're able to make it to much smaller dimensions, these materials can actually respond very quickly, within seconds," Fang says. "For example, a flower can release pollen in milliseconds. It can only do that because its actuation mechanisms are at the micron scale." 
Printing with light To print shape-memory structures with even finer details, Fang and his colleagues used a 3-D printing process they have pioneered, called microstereolithography, in which they use light from a projector to print patterns on successive layers of resin. The researchers first create a model of a structure using computer-aided design (CAD) software, then divide the model into hundreds of slices, each of which they send through the projector as a bitmap—an image file format that represents each layer as an arrangement of very fine pixels. The projector then shines light in the pattern of the bitmap, onto a liquid resin, or polymer solution, etching the pattern into the resin, which then solidifies. "We're printing with light, layer by layer," Fang says. "It's almost like how dentists form replicas of teeth and fill cavities, except that we're doing it with high-resolution lenses that come from the semiconductor industry, which give us intricate parts, with dimensions comparable to the diameter of a human hair." The researchers then looked through the scientific literature to identify an ideal mix of polymers to create a shape-memory material on which to print their light patterns. They picked two polymers, one composed of long-chain polymers, or spaghetti-like strands, and the other resembling more of a stiff scaffold. When mixed together and cured, the material can be stretched and twisted dramatically without breaking. What's more, the material can bounce back to its original printed form, within a specific temperature range—in this case, between 40 and 180 degrees Celsius (104 to 356 degrees Fahrenheit). Credit: Massachusetts Institute of Technology The team printed a variety of structures, including coils, flowers, and the miniature Eiffel tower, whose full-size counterpart is known for its intricate steel and beam patterns. Fang found that the structures could be stretched to three times their original length without breaking. 
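The slicing step described above (a CAD model divided into hundreds of slices, each sent to the projector as a bitmap) can be sketched in a few lines. This is a minimal illustration, not the team's actual software: the toy pyramid generator stands in for a real CAD slicer, and all names here are invented.

```python
# Sketch of the microstereolithography slicing step: a (toy) voxelized model
# is cut into per-layer bitmaps, and each bitmap tells the projector which
# pixels of resin to cure. The pyramid generator is an illustrative stand-in
# for slicing a real CAD model.

def voxelize_pyramid(base, height):
    """Toy model: a square pyramid as a stack of shrinking square bitmaps
    (1 = cure resin at this pixel, 0 = leave liquid)."""
    layers = []
    for z in range(height):
        half = max(base // 2 - z, 1)          # the square shrinks with height
        layer = [[1 if abs(x) < half and abs(y) < half else 0
                  for x in range(-base // 2, base // 2)]
                 for y in range(-base // 2, base // 2)]
        layers.append(layer)
    return layers

def cured_pixels(bitmap):
    """Number of pixels the projector would cure in one exposure."""
    return sum(sum(row) for row in bitmap)

layers = voxelize_pyramid(base=10, height=4)
areas = [cured_pixels(b) for b in layers]      # cured area shrinks layer by layer
```

In the real process each bitmap is projected onto liquid resin through high-resolution lenses, solidifying one layer before the next is exposed.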
When they were exposed to heat within the range of 40 C to 180 C, they snapped back to their original shapes within seconds. "Because we're using our own printers that offer much smaller pixel size, we're seeing much faster response, on the order of seconds," Fang says. "If we can push to even smaller dimensions, we may also be able to push their response time, to milliseconds." "This is a very advanced 3-D printing method compared to traditional nozzle or ink-jet based printers," says Shaochen Chen, professor of nano-engineering at the University of California at San Diego, who was not involved in the research. "The method's main advantages are faster printing and better structural integrity." Soft grip To demonstrate a simple application for the shape-memory structures, Fang and his colleagues printed a small, rubbery, claw-like gripper. They attached a thin handle to the base of the gripper, then stretched the gripper's claws open. When they cranked the temperature of the surrounding air to at least 40 C, the gripper closed around whatever the engineers placed beneath it. "The grippers are a nice example of how manipulation can be done with soft materials," Fang says. "We showed that it is possible to pick up a small bolt, and also even fish eggs and soft tofu. That type of soft grip is probably very unique and beneficial." Going forward, he hopes to find combinations of polymers to make shape-memory materials that react to slightly lower temperatures, approaching the range of human body temperatures, to design soft, active, controllable drug delivery capsules. He says the material may also be printed as soft, responsive hinges to help solar panels track the sun. "Very often, excessive heat will build up on the back side of the solar cell, so you could use [shape-memory materials] as an actuation mechanism to tune the inclination angle of the solar cell," Fang says. "So we think there will probably be more applications that we can demonstrate."
10.1038/srep31110
Other
Reconstruction of trilobite ancestral range in the southern hemisphere
Fábio Augusto Carbonaro et al, Inferring ancestral range reconstruction based on trilobite records: a study-case on Metacryphaeus (Phacopida, Calmoniidae), Scientific Reports (2018). DOI: 10.1038/s41598-018-33517-5 Journal information: Scientific Reports
http://dx.doi.org/10.1038/s41598-018-33517-5
https://phys.org/news/2019-01-reconstruction-trilobite-ancestral-range-southern.html
Abstract Metacryphaeus is a calmoniid trilobite genus from the Devonian Malvinokaffric Realm, exclusive to the Gondwanan regions. It includes eleven species, which are for the first time included here in a single phylogenetic analysis. The resulting hypotheses establish relations among the Metacryphaeus species with few ambiguities, also suggesting the inclusion of both Plesiomalvinella pujravii and P. boulei within the genus, as originally considered. The results of palaeobiogeographic analyses employing the Dispersal-Extinction-Cladogenesis (DEC) model reinforce the hypothesis that Bolivia and Peru form the ancestral home of Metacryphaeus . The radiation of the genus to other Gondwanan areas took place during transgressive eustatic episodes during the Lochkovian–Pragian. The Lochkovian dispersal occurred from Bolivia and Peru to Brazil (Paraná and Parnaíba basins) and the Falklands, and Pragian dispersal occurred towards South Africa. Dispersal events from Bolivia and Peru to the Parnaíba Basin (Brazil) were identified during the Lochkovian–Pragian, suggesting the presence of marine connections between those areas earlier than previously thought. Introduction The Malvinokaffric Realm includes a plethora of trilobites, including the Calmoniidae, which is composed of several genera ( e.g ., Calmonia Clarke, 1913, Typhloniscus Salter, 1856, Plesioconvexa Lieberman, 1993, Punillaspis Baldis & Longobucco, 1977, Eldredgeia Lieberman, 1993, Clarkeaspis Lieberman, 1993, Malvinocooperella Lieberman, 1993, Wolfartaspis Cooper, 1982, Metacryphaeus Reed, 1907) reported from the Devonian rocks of Brazil, Argentina, Bolivia, Peru, Falkland Islands, and South Africa 1 , 2 , 3 , 4 , 5 , 6 , 7 , 8 , 9 , 10 . The present work focuses on the genus Metacryphaeus , which only includes Gondwanan species, namely: M. tuberculatus (Kozłowski, 1923), M. kegeli Carvalho et al ., 1997, M. meloi Carvalho et al ., 1997, M. parana (Kozłowski, 1923), M. giganteus (Ulrich, 1892), M. 
convexus (Ulrich, 1892), M. curvigena Lieberman, 1993, M. branisai Lieberman, 1993, M. caffer (Salter, 1856), M. australis (Clarke, 1913), and M. allardyceae (Clarke, 1913). During the 1990s, Lieberman 5 presented the first phylogenetic analysis of the group including Metacryphaeus , represented by M. parana , M. convexus , M. curvigena , M. branisai , M. giganteus , and M. tuberculatus , among other calmoniids (Fig. 1a ). Later, Carvalho et al . 6 (Fig. 1b ) conducted a phylogenetic study of that genus, represented by M. parana , M. australis , M. caffer , M. allardyceae , M. tuberculatus , and M. meloi . More recently, Abe & Lieberman 9 presented a palaeobiogeographical area-taxon cladogram including all Metacryphaeus species, based on the tree provided by Lieberman 5 , with the manual insertion of additional species (i.e., without carrying out a new phylogenetic analysis). Those phylogenetic studies did not include all the species of Metacryphaeus . Accordingly, this study provides a new phylogenetic analysis including all species, in order to perform a new palaeobiogeographic analysis for the distribution of the genus. Figure 1 Previous phylogenetic models including Metacryphaeus : ( a ) Lieberman 5 ; and ( b ) Carvalho et al . 6 . Full size image The genus Metacryphaeus occurs in many Gondwanan geological units of Devonian age, including those in Brazil, Bolivia, Falkland Islands, Peru, and South Africa, spanning the Pragian to the Givetian–Frasnian 3 , 4 , 5 , 6 , 11 , 12 (Figs 2 and 3 ). It has been suggested that the genus originated and diversified in small basins of the Malvinokaffric Realm in Bolivia and Peru 9 . The records in this area are from the Pragian to the Givetian, including M. giganteus , M. tuberculatus , M. parana , M. convexus , M. curvigena , and M. branisai . Figure 2 Palaeobiogeographic distribution of the genus Metacryphaeus during the Early and Middle Devonian. Areas were divided into (A–E) for the palaeobiogeographic analysis. 
Note: Plesiomalvinella boulei and P. pujravii have been reassigned here to Metacryphaeus . Full size image Figure 3 Chronostratigraphic distribution of Metacryphaeus . Note: Plesiomalvinella boulei and P. pujravii have been reassigned here to Metacryphaeus . Abbreviations: Loch., Lochkovian; Prag., Pragian; Emsi., Emsian; Eife., Eifelian; Give., Givetian; Fras., Frasnian; Fame., Famennian. Full size image Results and Discussion Phylogeny The parsimony analysis resulted in two MPTs of 132 steps (consistency index = 0.41 and retention index = 0.52; Fig. 4 ). The only topological difference between these two trees is the placement of Metacryphaeus branisai . The strict consensus is presented in Fig. 5 , along with bootstrap probabilities and Bremer decay indices for each node. Figure 4 ( a , b ) Two most parsimonious trees (132 steps long and consistency index of 0.41) calculated in the present phylogenetic analysis. Full size image Figure 5 Strict consensus of the two MPTs with bootstrap values (using 1000 replicates; below) and Bremer support (above) indicated for each node. Full size image Plesiomalvinella boulei and P. pujravii were found deeply nested within a clade of Metacryphaeus species. Accordingly, those two species are here referred to that genus, as previously proposed by Wolfart 13 . Metacryphaeus (including M. boulei and M. pujravii ) is here supported by two synapomorphies: frontal lobe projecting beyond the cephalic anterior border in dorsal view (character 4) and uniformly divergent axial furrows from SO to the cephalic margin (character 19). In contrast to Lieberman 5 , Clarkeaspis gouldi (Lieberman, 1993) and C. padillaensis (Lieberman, 1993) were grouped into a clade supported by four synapomorphies (Figs 4 and 5 ): cephalic anterior border (cranidial) extended and pointed (characters 2 and 3); pentagonal glabella (character 6); 60 to 70% ratio between the basal glabellar width and the glabellar length (character 9).
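The consistency index (CI) and retention index (RI) reported above follow the standard parsimony definitions: CI = m/s and RI = (g - s)/(g - m), where s is the observed tree length, m the minimum conceivable length, and g the maximum. A minimal sketch with made-up step counts (the paper reports only s = 132, not m or g):

```python
def consistency_index(min_steps, observed_steps):
    """CI = m / s; a value of 1.0 means the tree shows no homoplasy."""
    return min_steps / observed_steps

def retention_index(min_steps, max_steps, observed_steps):
    """RI = (g - s) / (g - m); the fraction of potential synapomorphy
    that the tree actually retains."""
    return (max_steps - observed_steps) / (max_steps - min_steps)

# Made-up step counts for illustration only:
ci = consistency_index(min_steps=50, observed_steps=100)                 # 0.5
ri = retention_index(min_steps=50, max_steps=200, observed_steps=100)    # 2/3
```

Values near 0.4-0.5, as reported here, indicate a moderate amount of homoplasy in the character data.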
Clarkeaspis is here placed closer to Metacryphaeus (as its sister group) than in Lieberman 5 . The Metacryphaeus + Clarkeaspis clade is supported by a single synapomorphy (character 9, 0 → 1) and shows low bootstrap support (Fig. 5 ). The placement of Malvinocooperella pregiganteus (Lieberman, 1993) and Wolfartaspis cornutus (Wolfart 1968) as successive outgroups of the Metacryphaeus + Clarkeaspis (Figs 4 and 5 ), also differs from the arrangement seen in Lieberman 5 . The present analysis recovered the clades formed by Metacryphaeus giganteus + M. parana (Figs 4 and 5 ) and M. boulei + M. pujravii (Figs 4 and 5 ), previously recognized by Lieberman 5 . Synapomorphies of the M. giganteus + M. parana clade are: 60 to 70% ratio between the basal glabellar width and the glabellar length (character 9), convergently acquired in Clarkeaspis ; slender genal spine (character 36); dorsoventral height of the pygidium gradually decreasing posteriorly (character 39); 0.65 to 0.80 ratio between the maximum pygidial axial width and the maximum pygidial axial length (character 42). The M. boulei + M. pujravii clade is supported by six synapomorphies which are related to the presence of two symmetrical rows of sagittal spines on the posterior part of the glabella (character 15), the presence of one or two spines on L1 and L2 (characters 17 and 18), 0.15 and 0.25 ratio between the distance of posterior margin of the eyes to the axial furrow and the maximum glabellar width (character 25), the presence of four or five spines on the thoracic axial rings (character 37), and the prosopon covered by spines (character 48). Our study also recovered new hypotheses for the relationships of Metacryphaeus , including a clade formed by M. allardyceae , M. caffer , M. australis , M. meloi , M. kegeli , and M. tuberculatus . 
This is supported by four synapomorphies related to the shape and extension of the (cranidial) cephalic anterior border (characters 2 and 3), the ratio between the sagittal length of L1 and the glabellar sagittal length (character 14), and the incision of the occipital furrow medially (character 29). The clade including M. caffer , M. australis , M. meloi , M. kegeli , and M. tuberculatus is supported by four synapomorphies (Fig. 4a ): glabella posteriorly elevated and declined anteriorly to S3 (character 8); 65 to 75° α angle (character 22); rounded pygidial terminus (character 45); no spine on the pygidial terminus (character 46). Also, the clade formed by M. tuberculatus , M. meloi , and M. kegeli is supported by four synapomorphies related to L2 and L3 that do not merge distally (character 13), 55 to 64° β angles (character 23), the connection of S2 and the axial furrow (character 24), and the lack of connection between the anterior margin of the eyes and the axial furrow (character 26) (Fig. 5 ). Two synapomorphies support the M. caffer plus M. australis clade: characters 9 (reverted to the plesiomorphic condition) and 41, which are respectively related to a ratio greater than 80% between the basal glabellar width and the glabellar length, and to 0.25 to 0.35 ratios between the maximum pygidial axial width and the maximum pygidial width. The clade that includes all Metacryphaeus except for M. convexus , M. curvigena , and M. branisai (Fig. 4a ) is supported by three synapomorphies related to a 0.15 to 0.25 ratio between the distance from the posterior margin of the eyes to the axial furrow and the maximum glabellar width (character 25), occipital furrow weakly incised medially (character 29), and 130 to 160° γ angle (character 34). Three synapomorphies support the group formed by M. giganteus , M. parana , M. allardyceae , M. australis , M. caffer , M. meloi , M. kegeli , and M. tuberculatus (Fig. 
4a ): 0.25 to 0.34 ratio between sagittal length of L1 glabellar lobe and glabellar sagittal length (character 14), 0.3 to 0.4 ratio between the maximum exsagittal eyes length and the glabellar sagittal length (character 27), 0.60 to 0.80 ratio between maximal sagittal pygidial length and maximal transverse pygidial width (character 40). The position of Metacryphaeus branisai is variable in the two MPTs (Fig. 4 ), probably because its pygidium is unknown, meaning that characters 38 to 47 could not be coded. In the phylogeny modelled by Lieberman 5 , Metacryphaeus convexus and M. curvigena are not considered sister taxa to all other Metacryphaeus . Instead, M. curvigena is considered the sister taxon to M. branisai and M. convexus the sister taxon to both (Fig. 1a ). In our analysis, the clade formed by M. convexus and M. curvigena is supported by five synapomorphies: inclination of 10–20° of S3 in relation to SO (character 12); L2 and L3 not merged distally (character 13); cephalic axial furrows deep and broad (characters 20 and 21); evident connection between S2 and the axial furrow. Likewise, the affinities of M. meloi and M. kegeli are supported by four synapomorphies. This is interesting because these species are endemic to the Parnaíba Basin (Brazil), as is their sister-taxon M. tuberculatus , the only other species of the genus known from that basin. Palaeobiogeography The Likelihood Ratio Test supports DEC M2 ( w and j set as free parameters) as the best-fit model to our data (Table 1 ). The palaeobiogeographic reconstructions differ only slightly for the two MPTs, so we focus the discussion on the first MPT. The summary of biogeographic stochastic mapping (BSM) counts (Table 2 ) shows a predominance of dispersals among range change events (33.6% of total events) and, among those, founder events (19.6%) are slightly more frequent than anagenetic dispersals (14.1%).
Vicariance was very uncommon according to our model, accounting only for 3.9% of the events (Table 2 ). Most dispersals occurred from Bolivia and Peru (A) to other areas, more frequently to the Paraná (B) and Parnaíba (E) basins (Table 3 ). Table 1 Pairwise comparison of the results of the ancestral area reconstructions of nested DEC models on tree 1. Full size table Table 2 Summary of BSM (Biogeographic Stochastic Mapping) counts based on DEC M2 model showing the mean, standard deviations (SD), and percentage of different types of biogeographic events. Full size table Table 3 Counts (and standard deviations in parentheses) of dispersal events averaged across 100 biogeographic stochastic mappings based on the biogeographic history of Metacryphaeus according to the DEC M2 model. Full size table All three models estimate a 100% probability for Bolivia and Peru (A) as the ancestral area for the Metacryphaeus clade, as well as for most of its internal clades (Fig. 6 ; Supplementary Supple 3 ). The earliest Metacryphaeus records in this area are from the early Pragian 4 , 5 , but three range changes were estimated to have occurred earlier, during the late Lochkovian (Fig. 6 ): 1- the ancestor of M. parana and M. giganteus expanded its occurrence to encompass the Paraná Basin (B), with the former species maintaining this broader distribution and the latter restricted to B (subset sympatry) - in an alternative scenario, the ancestor of this clade is present only in Bolivia and Peru (A), with M. parana expanding its range to include the Paraná Basin (B); 2- M. allardyceae dispersed to the Falklands area (D); 3- the ancestor of M. australis and M. caffer dispersed to the Paraná Basin (B). During the early Pragian, M. caffer dispersed from the Paraná Basin to South Africa (C).
It is interesting to note that those dispersal and expansion events likely occurred before the transgressive events on western Gondwana 14 , 15 , 16 , 17 dated between the late Pragian and the early Emsian (Fig. 6 ). Those areas (A, B, C, D) were eventually connected by transgressive-regressive cycles (Fig. 6 ), which promoted the faunal similarity observed among the Malvinokaffric fauna of the Early Devonian 15 , 18 . Figure 6 Ancestral area reconstructions based on the DEC M2 model on tree 1 (top), sea-level change curves from Lochkovian to Frasnian (middle) based on Haq & Schutter 55 , and Lower Devonian palaeomap of Southern Gondwana (bottom) modified from Torsvik & Cocks 56 . Arrows on the palaeomap indicate inferred Lochkovian (full arrow) and Pragian (dashed arrow) dispersal routes for Metacryphaeus taxa. Additional abbreviations: DML, Dronning Maud Land, Antarctica; EWM, Ellsworth-Whitmore Mountains, Antarctica; MT, Mexican terranes; P, Precordillera Terrane, Argentina; Pat., Patagonia. Full size image The last common ancestor of Metacryphaeus meloi , M. kegeli , and M. tuberculatus , and the node including only the latter two taxa were reconstructed with two almost equally probable ranges, either restricted to Bolivia and Peru (A) or a joint distribution (Fig. 6 ) also including the Parnaíba and Paraná basins (ABE). These different ancestral range reconstructions imply distinct processes of range changes, respectively: 1 - successive dispersals from Bolivia and Peru to the other areas (for an ancestor with a distribution restricted to A), 2 - distribution expansions inferred as founder events (for an ancestor widely distributed in ABE). Nevertheless, in all cases M. meloi and M. kegeli became restricted to the Parnaíba Basin (E), whereas M. tuberculatus maintained (or reached) a widespread distribution (ABE), even though its earliest records, dated as late Eifelian and early Givetian, do not include the Parnaíba Basin 4 , 5 , 6 , 11 , 12 .
Alternatively, but with lower statistical support, the ancestral range reconstruction hypothesized for the clades M. meloi + ( M. tuberculatus + M. kegeli ) and M. tuberculatus + M. kegeli could be AB, encompassing only their older records. This would imply expansion events towards the Parnaíba Basin (E) after the arrival of ancestors in the Paraná Basin (B). The arrival of Metacryphaeus in the Parnaíba Basin may have occurred via two alternative routes (Fig. 6 ). A northern route (surrounding the northern margin of the South American continent) would impose no continental (landmass) barriers, but there would be climatic barriers related to the warmer waters the animals would need to overcome, as the Malvinokaffric Realm marks cooler areas. Also, faunas of this age on the northern margin of South America belong to other realms, which lack Metacryphaeus . On the other hand, a route through the Amazon Basin (Fig. 6 ) would have presented no climatic or faunal barriers ( cf . 15 , 18 , 19 ). Even a continental barrier might not have been in place, as there were transgression events possibly connecting that basin to Bolivia and Peru. The lack of fossils of this age in the Amazon Basin, which could confirm such a dispersal route, is related to the depositional gap present in the upper Lochkovian and lower Emsian of the basin ( cf . 20 , 21 , 22 , 23 , 24 , 25 ). This absence of Lochkovian–lower Emsian rocks is also observed in the Parnaíba Basin 20 , 21 , 24 , which hinders palaeobiogeographical inferences related to the presence/absence of Metacryphaeus in the Lower Devonian of this basin. Other trilobite genera also have a broad Gondwanan distribution during the Devonian, e.g . the calmoniid Eldredgeia , with occurrences in Bolivia, Brazil (Amazon and Parnaíba basins), and South Africa, and the homalonotid Burmeisteria , with records in Brazil (Amazon, Parnaíba, and Paraná basins), the Falkland Islands, South Africa, and Ghana 1 , 15 , 19 , 26 .
Furthermore, the distribution of the brachiopods Tropidoleptus carinatus (Conrad, 1839) and Australocoelia palmata (Morris & Sharpe, 1846), and the crinoids Exaesiodiscus Moore & Jeffords, 1968, Laudonomphalus Moore & Jeffords, 1968, Monstrocrinus Schmidt, 1941, and Marettocrinus Le Menn 15 , 27 , 28 , 29 , 30 , 31 , 32 , 33 , 34 , also reinforces that connections between the Bolivian-Peruvian region and the Amazon, Parnaíba, and Paraná basins were recurrent by the Middle Devonian (e.g. 15 , 26 ). However, the dispersal and range expansion events highlighted in our biogeographic analyses (except that related to M. caffer dispersal from the Paraná Basin to South Africa) occurred during the late Lochkovian (Fig. 6 ). As such, our data suggest an earlier connection between all those Gondwanan regions, allowing Metacryphaeus trilobites to expand into the Paraná and Parnaíba basins via southeastern and northern/northeastern routes, respectively (Fig. 6 ). Another interesting fact is the diversification of Metacryphaeus in South America occurring earlier than its dispersal to South Africa (where it is represented by M. caffer ). This was temporally the latest dispersal of the genus, taking place during the Pragian, and a separate event from the dispersal of M. allardyceae in the same direction (to the Falkland Islands), which occurred earlier. Methods Phylogenetic analysis The phylogenetic analysis conducted here was based on the phylogeny of Lieberman 5 , with extra characters and species added to the data matrix. The added species were Metacryphaeus australis , M. caffer , M. kegeli , M. meloi , and M. allardyceae , so as to encompass all valid species of the genus. Other ingroup taxa were defined according to the phylogenetic hypothesis of Lieberman 5 consisting of Plesiomalvinella boulei , P. pujravii , Wolfartaspis cornutus , Malvinocooperella pregiganteus , Clarkeaspis gouldi , and C. padillaensis . Also, according to Lieberman 5 , Kozlowskiaspis ( K .)
superna Braniša & Vaněk, 1973 was used to root the phylogenetic trees. Among the 48 characters employed here (see Appendix 1 ), 33 were taken or modified from Lieberman 5 and 15 are new (characters 7, 9, 12, 13, 14, 22, 23, 24, 25, 26, 27, 34, 40, 41, and 42), although based on characters used in phylogenetic analyses of other trilobite groups ( e.g . 35 , 36 , 37 , 38 , 39 , 40 , 41 , 42 ). The morphological elements of the exoskeleton are shown in Fig. 7 and all morphological relations/angles used in the 15 newly proposed characters were measured as indicated in Fig. 8 . Figure 7 Schematic drawing showing the major exoskeleton elements of the dorsal surface of Metacryphaeus . Full size image Figure 8 Measurements used: mgwwfl = maximum glabellar width, excluding the frontal lobe; mfll = maximum frontal lobe length; mcl = maximum cephalic length; dpmeaf = distance of the posterior margin of the eyes to the axial furrow; mele = maximum exsagittal length of the eyes; bgtw = basal glabellar transverse width; gsl = glabellar sagittal length; gslwfl = glabellar sagittal length, excluding the frontal lobe; L1sl = sagittal length of L1 glabellar lobe; mtpaw = maximum transverse pygidial axis width; mtpw = transverse maximum pygidial width; mspl = sagittal maximum pygidial length; mpal = maximum pygidial axis length; α = angle between the axial furrow and the furrow of the cephalic posterior border; β = angle between the cephalic posterior border furrow and a line traced from the posterior margin of the axial furrow to the anterior margin of the eyes; γ = angle between a straight line traced adjacent to the lateral genae (from the contact with the cephalic posterior furrow) and a line traced from the anterior part of the genae (from the contact of the axial furrow) toward the medial-posterior part of the genae; Ω = S3 inclination in relation to SO.
Full size image Among the characters taken from Lieberman 5 , some scores were changed for some taxa based on our own interpretations. This is the case for characters 5 (changed from 0 to 1 in Malvinocooperella pregiganteus , Metacryphaeus giganteus , and Me. branisai ), 18 (changed from 0 to 1 in Me. branisai ), and 19 (changed from 0 to 1 in Me. giganteus ). Other characters from Lieberman 5 , e.g . characters 9, 10, 12, 13, 19, 23, 24, and 34, were not used here because they either have too much variation between individuals of the same species or can be easily affected by taphonomic deformation. Some characters from Lieberman 5 were split into two or more characters, as in the case of characters 2 and 3 (=character 1 of Lieberman 5 ), 20 and 21 (=character 18 of Lieberman 5 ), 30 and 31 (=character 25 of Lieberman 5 ), 35 and 36 (=character 29 of Lieberman 5 ), and 45, 46, and 47 (=character 36 of Lieberman 5 ), following a contingential approach 43 . Characters 1 to 36 are related to the cephalon, character 37 to the thorax, 38 to 47 to the pygidium, and 48 to the prosopon (Appendix 1 and 2 ). All characters are related to the dorsal surface of the exoskeleton and were treated as ordered. The data matrix was analyzed in search of the Most Parsimonious Trees (MPTs) using the software TNT version 1.1 44 . A heuristic search was conducted with 1,000 replicates, random addition of taxa (random seed 0), Tree Bisection and Reconnection (TBR) as the branch swapping algorithm, and a "hold" of 10 trees per replicate. The recovered MPTs were summarized in a strict consensus tree. Bremer 45 decay indices and bootstrap proportions 46 were calculated using scripts incorporated in TNT. The data matrix was compiled in NEXUS format using the software Mesquite version 3.03 (702) and the tree images were generated with the software FigTree version 1.4.2.
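The bootstrap proportions reported above come from resampling characters with replacement and re-running the parsimony search on each pseudoreplicate. A minimal sketch of building one pseudoreplicate matrix; the taxa and character states below are invented for illustration (in the actual analysis, TNT handles the resampling and tree searches internally across the 1000 replicates):

```python
import random

def bootstrap_matrix(matrix, rng):
    """Build one bootstrap pseudoreplicate: resample characters (columns)
    with replacement while keeping the taxa (rows) fixed."""
    n_chars = len(next(iter(matrix.values())))
    cols = [rng.randrange(n_chars) for _ in range(n_chars)]
    return {taxon: [states[c] for c in cols]
            for taxon, states in matrix.items()}

# Invented toy matrix: taxon -> ordered character states
matrix = {
    "M_parana": [0, 1, 1, 0],
    "M_caffer": [1, 1, 0, 0],
    "outgroup": [0, 0, 0, 0],
}
replicate = bootstrap_matrix(matrix, random.Random(0))
```

The bootstrap value of a clade is then the fraction of replicate trees in which that clade reappears.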
Palaeobiogeographical analysis We conducted palaeobiogeographic analyses to explore the distribution dynamics and biogeographical events that affected the distribution of Metacryphaeus through time across five areas, pre-defined based on the known occurrences of the genus: Bolivia and Peru (A); Paraná Basin, Brazil (B); South Africa (C); Falkland Islands (D); and Parnaíba Basin, Brazil (E). Bolivia and Peru were treated as a single area due to their geographical proximity, strong palaeontological association, and co-occurrence of endemic species 2 , 47 . Only fossil taxa with accurate occurrence data and taxonomic identification were included. For this reason, taxa with doubtful assignation ( cf ., aff .) were not considered in our analyses ( e.g . 48 ). Ancestral area reconstructions were conducted using the R (R Development Core Team 2013) package BioGeoBEARS 49 , which allows comparing the likelihood of our data given distinct models, choosing the one with the best fit 50 . We tested three nested models based on the LAGRANGE Dispersal-Extinction-Cladogenesis (DEC) model 51 , 52 : M0 contains the default parameters of the DEC models 49 ; M1 has the addition of the free parameter w ; and M2 has the addition of the free parameters w and j . The free parameter w is a multiplier of the dispersal matrices and when set to 1 ( e.g . in M0) the probabilities of dispersal events are based solely on the dispersal matrices and equal across all events 53 . The founder-event parameter j (included only in M2 and set to 0 in M0 and M1) allows range changes to areas distinct from that of the ancestor during a cladogenetic event 49 . We employed the Likelihood Ratio Test (LRT) to select the best model. We used time-calibrated versions of the two MPTs, dividing them into two time slices, Silurian to Lower Devonian (430–395 Ma) and Middle to Upper Devonian (395–382 Ma).
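The Likelihood Ratio Test for nested models compares the statistic 2(lnL_complex - lnL_simple) against a chi-square distribution whose degrees of freedom equal the number of added parameters (two here, w and j, going from M0 to M2). A sketch with illustrative log-likelihoods; the paper's actual values are in Table 1 and are not reproduced here:

```python
import math

def lrt_df2(lnL_simple, lnL_complex):
    """Likelihood Ratio Test for nested models differing by two free
    parameters (e.g. DEC M0 vs. M2, which adds w and j). For df = 2 the
    chi-square survival function has the closed form p = exp(-stat / 2)."""
    stat = 2.0 * (lnL_complex - lnL_simple)
    return stat, math.exp(-stat / 2.0)

# Illustrative log-likelihoods, not taken from the paper:
stat, p = lrt_df2(lnL_simple=-50.0, lnL_complex=-44.0)
# a p-value below 0.05 would favour the richer model (here, M2)
```

For other degrees of freedom there is no such simple closed form, and a chi-square routine (e.g. `scipy.stats.chi2.sf`) would be used instead.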
Based on that, we conducted time-stratified analyses using time-specific dispersal multiplier and area matrices (see Supplementary supple 2 and supple 3 ). This allowed changing the distances and probabilities between the areas along these periods, simulating the continental transformations. We also conducted a biogeographic stochastic mapping (BSM) on BioGeoBEARS 54 to estimate the number and type of biogeographical events. We conducted the BSM only for the first MPT, as the ancestral area reconstructions of both MPTs differ only slightly, and employed the parameters of the best-fit model of the ancestral area reconstruction 53 . The mean and standard deviation of event counts of 100 BSMs were used to estimate the frequencies of range change between the considered areas and of each kind of biogeographic event. Conclusions This work provides new phylogenetic hypotheses for the relationships of all species within the genus Metacryphaeus , including the identification of the clades composed of (1) M. caffer and M. australis , (2) M. tuberculatus , M. meloi , and M. kegeli , (3) M. tuberculatus and M. kegeli , (4) M. curvigena and M. convexus , the latter two as sister clades. The position of M. branisai varied in the two recovered MPTs, probably due to the unknown pygidium for this species. As Plesiomalvinella pujravii and P. boulei were positioned within the Metacryphaeus clade, these species were reinserted in that genus, as originally suggested by Wolfart 13 . Finally, the genus Clarkeaspis represents the immediate outgroup to Metacryphaeus . The results of the palaeobiogeographic analyses with DEC models reinforce the interpretations of Lieberman 5 and Abe & Lieberman 9 that Metacryphaeus originated in the Lower Devonian of Bolivia and Peru, where they are represented by a higher taxonomic diversity. The radiation of Metacryphaeus to other Gondwanan regions probably occurred during the transgressive events in the Lochkovian–Pragian.
In the Lochkovian, dispersals would have occurred to the Paraná Basin, in Brazil ( M. parana, M. australis , M. tuberculatus ), as well as to the Falklands area ( M. allardyceae ) and the Parnaíba Basin ( M. meloi, M. kegeli , M. tuberculatus ). Pragian dispersal events were reconstructed only towards South Africa ( M. caffer ). The ancestral area reconstructions for Metacryphaeus show dispersal events occurring earlier than expected, i.e . during the Early Devonian, even though the faunal similarities of Bolivia and Peru with the Parnaíba and Amazon basins are more prominent in the Middle Devonian, with the sharing of brachiopod ( Tropidoleptus and Australocoelia ), crinoid ( Exaesiodiscus , Laudonomphalus , Monstrocrinus , and Marettocrinus ), and other trilobite ( Eldredgeia and Burmeisteria ) taxa. The results presented here indicate that these areas were also somehow connected during the beginning of the Devonian, as to allow the dispersal of Metacryphaeus . Data Availability The datasets analyzed during the current study are available in: .
The first appearance of trilobites in the fossil record dates to 521 million years ago in the oceans of the Cambrian Period, when the continents were still inhospitable to most life forms. Few groups of animals adapted as successfully as trilobites, which were arthropods that lived on the seabed for 270 million years until the mass extinction at the end of the Permian approximately 252 million years ago. The longer ago organisms lived, the rarer their fossils are and the harder it is to understand their way of life. Paleontologists face a daunting task in establishing evolutionary relationships in time and space. Surmounting the difficulties inherent in the investigation of a group of animals that lived such a long time ago, Brazilian scientists affiliated with the Biology Department of São Paulo State University's Bauru School of Sciences (FC-UNESP) and the Paleontology Laboratory of the University of São Paulo's Ribeirão Preto School of Philosophy, Science and Letters (FFCLRP-USP) have succeeded for the first time in inferring paleobiogeographic patterns among trilobites. Paleobiogeography is a branch of paleontology that focuses on the distribution of extinct plants and animals and their relations with ancient geographic features. The study was conducted by Fábio Augusto Carbonaro, a postdoctoral researcher at UNESP's Bauru Macroinvertebrate Paleontology Laboratory (LAPALMA) headed by Professor Renato Pirani Ghilardi. Other participants included Max Cardoso Langer, a professor at FFCLRP-USP, and Silvio Shigueo Nihei, a professor at the same university's Bioscience Institute (IB-USP). The researchers analyzed the morphological differences and similarities of the 11 species of trilobites described so far in the genus Metacryphaeus; these trilobites lived during the Devonian between 416 million and 359 million years ago (mya) in the cold waters of the sea that covered what is now Bolivia, Peru, Brazil, the Malvinas (Falklands) and South Africa. 
The Devonian Period is subdivided into seven stages. Metacryphaeus lived during the Lochkovian (419.2-410.8 mya) and Pragian (410.8-407.6 mya) stages, which are the earliest Devonian stages. The results of the research were published in Scientific Reports and are part of the project "Paleobiogeography and migratory routes of paleoinvertebrates of the Devonian in Brazil", which is supported by the São Paulo Research Foundation (FAPESP) and Brazil's National Council for Scientific and Technological Development (CNPq). Ghilardi is the project's principal investigator. "When they became extinct in the Permian, 252 million years ago, the trilobites left no descendants. Their closest living relatives are shrimps, and, more remotely, spiders, scorpions, sea spiders and mites," Ghilardi said. Trilobite fossils are found abundantly all over the world, he explained—so abundantly that they are sometimes referred to as the cockroaches of the sea. The comparison is not unwarranted, because anatomically, the trilobites resemble cockroaches. The difference is that they were not insects and had three longitudinal body segments or lobes (hence the name). In the northern hemisphere, the trilobite fossil record is very rich. Paleontologists have so far described 10 orders comprising over 17,000 species. The smallest were 1.5 millimeters long, while the largest were approximately 70 cm long and 40 cm wide. Perfectly preserved trilobites can be found in some regions, such as Morocco. These can be beautiful when used to create cameos or intaglio jewelry. Trilobite fossils from Brazil, Peru and Bolivia, in contrast, are often poorly preserved, consisting merely of the impressions left in benthic mud by their exoskeletons. 
"Although their state of preservation is far from ideal, there are thousands of trilobite fossils in the sediments that form the Paraná basin in the South region of Brazil, and the Parnaíba basin along the North-Northeast divide," said Ghilardi, who also chairs the Brazilian Paleontology Society. According to Ghilardi, their poor state of preservation could be due to the geological conditions and climate prevailing in these regions during the Paleozoic Era, when the portions of dry land that would one day form South America were at the South Pole and entirely covered by ice for prolonged periods. During the Devonian, South America and Africa were connected as part of the supercontinent Gondwana. South Africa was joined with Uruguay and Argentina in the River Plate region, and Brazil's southern states were continuous with Namibia and Angola. Parsimony analysis The research began with an analysis of 48 characteristics (size, shape and structure of organs and anatomical parts) found in some 50 fossil specimens of the 11 species of Metacryphaeus. "In principle, these characteristics serve to establish their phylogeny—the evolutionary history of all species in the universe, analyzed in terms of lines of descent and relationships among broader groups," Ghilardi said. Known as parsimony analysis, this method is widely used to establish relationships among organisms in a given ecosystem, and in recent years, it has also begun to be used in the study of fossils. According to Ghilardi, parsimony, in general, is the principle that the simplest explanation of the data is the preferred explanation. In the analysis of phylogeny, it means that the hypothesis regarding relationships that requires the smallest number of characteristic changes between the species analyzed (in this case, trilobites of the genus Metacryphaeus) is the one that is most likely to be correct. 
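The parsimony criterion described above can be made concrete with Fitch's small-parsimony algorithm, which counts the minimum number of character-state changes a given tree topology requires. The sketch below is purely illustrative: the taxon labels, topologies and character states are hypothetical toy data, not the actual Metacryphaeus character matrix.

```python
# Toy illustration of the parsimony criterion (Fitch algorithm): the tree
# requiring the fewest character-state changes is preferred.
# Taxa and states below are hypothetical, not the real Metacryphaeus data.

def fitch_score(tree, states):
    """Bottom-up Fitch pass for one character.

    'tree' is a nested 2-tuple of leaf names; 'states' maps leaf -> state.
    Returns (candidate_state_set, minimum_number_of_changes).
    """
    if isinstance(tree, str):                 # leaf node
        return {states[tree]}, 0
    left, right = tree
    sl, cl = fitch_score(left, states)
    sr, cr = fitch_score(right, states)
    common = sl & sr
    if common:                                # non-empty intersection: no extra change
        return common, cl + cr
    return sl | sr, cl + cr + 1               # empty intersection: one extra change

# Two rival topologies for four hypothetical taxa sharing one binary character.
states = {"A": 0, "B": 0, "C": 1, "D": 1}
tree1 = (("A", "B"), ("C", "D"))   # groups like states together
tree2 = (("A", "C"), ("B", "D"))   # mixes the states

print(fitch_score(tree1, states)[1])   # 1 change
print(fitch_score(tree2, states)[1])   # 2 changes -> tree1 is more parsimonious
```

For a full data matrix, the score is summed over all characters (here, the 48 morphological characteristics), and the topology with the lowest total is retained as the most parsimonious tree.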
The biogeographic contribution to the study was made by Professor Nihei, who works at IB-USP as a taxonomist and insect systematist. The field of systematics is concerned with evolutionary changes between ancestries, while taxonomy focuses on classifying and naming organisms. "Biogeographic analysis typically involves living groups the ages of which are estimated by molecular phylogeny, or the so-called molecular clock, which estimates when two species probably diverged on the basis of the number of molecular differences in their DNA. In this study of trilobites, we used age in a similar manner, but it was obtained from the fossil record," Nihei said. "The main point of the study was to use fossils in a method that normally involves molecular biogeography. Very few studies of this type have previously involved fossils. I believe our study paves the way for a new approach based on biogeographic methods requiring a chronogram [a molecularly dated cladogram] because this chronogram can also be obtained from fossil taxa such as those studied by paleontologists, rather than molecular cladograms for living animals." As a vertebrate paleontologist who specializes in dinosaurs, Langer acknowledged that he knows little about trilobites but a great deal about the modern computational techniques used in parsimony analysis, on which his participation in the study was based. "I believe the key aspect of this study, and the reason it was accepted for publication in as important a journal as Scientific Reports, is that it's the first ever use of parsimony to understand the phylogeny of a trilobite genus in the southern hemisphere," he said. Gondwanan dispersal The results of the paleobiogeographical analyses reinforce the pre-existing theory that Bolivia and Peru formed the ancestral home of Metacryphaeus. "The models estimate a 100 percent probability that Bolivia and Peru formed the ancestral area of the Metacryphaeus clade and most of its internal clades," Ghilardi said. 
Confirmation of the theory shows that parsimony-based models have the power to suggest the presence of clades at a specific moment in the past even when there are no known physical records of that presence. In the case of Metacryphaeus, the oldest records in Bolivia and Peru date from the early Pragian stage (410.8-407.6 mya), but the genus is believed to have evolved in the region during the Lochkovian stage (419.2-410.8 mya). Parsimony, therefore, suggests Metacryphaeus originated in Bolivia and Peru some time before 410.8 mya but not earlier than 419.2 mya. In any event, the genus is believed to be far older than its oldest known fossils. According to Ghilardi, the results can be interpreted as showing that the adaptive radiation of Metacryphaeus to other areas of western Gondwana occurred during episodes of marine transgression in the Lochkovian-Pragian, when the sea flooded parts of Gondwana. "The dispersal of Metacryphaeus trilobites during the Lochkovian occurred from Bolivia and Peru to Brazil—to the Paraná basin, now in the South region, and the Parnaíba basin, on the North-Northeast divide—and on toward the Malvinas/Falklands, while the Pragian dispersal occurred toward South Africa," he said. Fossil trilobites have been found continuously in the Paraná basin in recent decades. Trilobites collected in the late nineteenth century in the Parnaíba basin were held by Brazil's National Museum in Rio de Janeiro, which was destroyed by fire in September 2018. "These fossils haven't yet been found under the rubble and it's likely that nothing is left of them. They were mere shell impressions left in the ancient seabed. Even in petrified form, they must have dissolved in the blaze," Ghilardi said.
10.1038/s41598-018-33517-5
Space
Researchers and supercomputers help interpret the latest LIGO findings
E. Troja et al, The X-ray counterpart to the gravitational-wave event GW170817, Nature (2017). DOI: 10.1038/nature24290 Journal information: Nature
http://dx.doi.org/10.1038/nature24290
https://phys.org/news/2017-10-supercomputers-latest-ligo.html
Abstract A long-standing paradigm in astrophysics is that collisions—or mergers—of two neutron stars form highly relativistic and collimated outflows (jets) that power γ-ray bursts of short (less than two seconds) duration 1 , 2 , 3 . The observational support for this model, however, is only indirect 4 , 5 . A hitherto outstanding prediction is that gravitational-wave events from such mergers should be associated with γ-ray bursts, and that a majority of these bursts should be seen off-axis, that is, they should point away from Earth 6 , 7 . Here we report the discovery observations of the X-ray counterpart associated with the gravitational-wave event GW170817. Although the electromagnetic counterpart at optical and infrared frequencies is dominated by the radioactive glow (known as a ‘kilonova’) from freshly synthesized rapid neutron capture (r-process) material in the merger ejecta 8 , 9 , 10 , observations at X-ray and, later, radio frequencies are consistent with a short γ-ray burst viewed off-axis 7 , 11 . Our detection of X-ray emission at a location coincident with the kilonova transient provides the missing observational link between short γ-ray bursts and gravitational waves from neutron-star mergers, and gives independent confirmation of the collimated nature of the γ-ray-burst emission. Main On 17 August 2017 at 12:41:04 universal time ( ut ; hereafter T 0 ), the Advanced Laser Interferometer Gravitational-Wave Observatory (LIGO) detected a gravitational-wave transient from the merger of two neutron stars at a distance 12 of 40 ± 8 Mpc. Approximately two seconds later, a weak γ-ray burst (GRB) of short duration (<2 s) was observed by the Fermi Gamma-ray Space Telescope 13 and INTEGRAL 14 . The low luminosity of this γ-ray transient was unusual compared to the population of short GRBs at cosmological distances 15 , and its physical connection with the gravitational-wave event remained unclear. 
A vigorous observing campaign targeted the localization region of the gravitational-wave transient, and rapidly identified a source of bright optical, infrared and ultraviolet emission in the early-type galaxy NGC 4993 16 , 17 . This source was designated ‘SSS17a’ by the Swope team 16 , but here we use the official IAU designation, AT 2017gfo. AT 2017gfo was initially not visible at radio and X-ray wavelengths. However, on 26 August 2017, we observed the field with the Chandra X-ray Observatory and detected X-ray emission at the position of AT 2017gfo ( Fig. 1 ). The observed X-ray flux (see Methods) implies an isotropic luminosity of 9 × 10 38 erg s −1 if located in NGC 4993 at a distance of about 40 Mpc. Further Chandra observations, performed between 1 and 2 September 2017, confirmed the presence of continued X-ray activity, and hinted at a slight increase in luminosity to L X,iso ≈ 1.1 × 10 39 erg s −1 . At a similar epoch the onset of radio emission was also detected 18 . Figure 1: Optical/infrared and X-ray images of the counterpart of GW170817. a , Hubble Space Telescope observations show a bright and red transient in the early-type galaxy NGC 4993, at a projected physical offset of about 2 kpc from its nucleus. A similar small offset is observed in less than a quarter of short GRBs 5 . Dust lanes are visible in the inner regions, suggestive of a past merger activity (see Methods). b , Chandra observations revealed a faint X-ray source at the position of the optical/infrared transient. X-ray emission from the galaxy nucleus is also visible. PowerPoint slide Full size image The evolution of AT 2017gfo across the electromagnetic spectrum shows multiple components dominating the observed emission. Simple modelling of the optical–infrared photometry as a blackbody in linear expansion suggests mildly relativistic (≥0.2 c , where c is the speed of light in vacuum) velocities and cool (<10,000 K) temperatures. 
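The isotropic X-ray luminosity quoted above is a direct inverse-square-law conversion, L = 4πd²F. A minimal order-of-magnitude sketch, using the unabsorbed 0.3–10 keV flux reported later in the Methods (the quoted 9 × 10³⁸ erg s⁻¹ differs only at the level of the flux uncertainties):

```python
import math

MPC_CM = 3.0857e24   # centimetres per megaparsec

def iso_luminosity(flux_cgs, distance_mpc):
    """Isotropic-equivalent luminosity L = 4*pi*d^2*F in erg/s,
    for a flux in erg/cm^2/s and a distance in Mpc."""
    d_cm = distance_mpc * MPC_CM
    return 4.0 * math.pi * d_cm**2 * flux_cgs

# ~4e-15 erg/cm^2/s at the ~40 Mpc distance of NGC 4993
print(f"{iso_luminosity(4.0e-15, 40.0):.1e} erg/s")   # ~8e38 erg/s
```

The weak distance dependence of the 40 ± 8 Mpc estimate translates into a factor of roughly two uncertainty on L, which is why only the ~10³⁹ erg s⁻¹ scale is meaningful here.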
We find a hot blue component, mainly contributing at optical wavelengths, and a colder infrared component, which progressively becomes redder ( Extended Data Fig. 1 ). The low peak luminosity ( M V ≈ −16) and featureless optical spectrum ( Fig. 2 ) disfavour a supernova explosion (see Methods), while the broad (Δ λ / λ ≈ 0.1) features in the infrared spectra are consistent with expectations for rapidly expanding dynamical ejecta 9 , 10 , rich in lanthanides and actinides formed via rapid neutron capture nucleosynthesis (the r-process). The overall properties of the host galaxy, such as its stellar mass, evolved stellar population and low star formation (see Methods), are consistent with the typical environment of short GRBs and in line with the predictions for compact binary mergers 5 . When combined, these data point to a kilonova emission, consisting of the superposition of radioactive-powered emission from both neutron-rich dynamical ejecta expanding with velocity v ≈ 0.2 c and a slower, sub-relativistic wind with a higher electron fraction 19 . The former component radiates most of its energy in the infrared, while the latter dominates the optical and ultraviolet spectrum. The optical/infrared data set therefore provides convincing evidence that AT 2017gfo was a kilonova produced by the merger of two compact objects, at a time and location consistent with GW170817. Figure 2: Optical and infrared spectra of the kilonova associated with GW170817. The optical spectrum, acquired on 21 August 2017 ( T 0 + 3.5 d) with the Gemini South 8-m telescope, is dominated by a featureless continuum with a rapid turnover above a wavelength of about 0.75 μm. At later times, this feature is no longer visible. Near-infrared spectra, taken with the Hubble Space Telescope between 22 and 28 August 2017, show prominent broad (Δ λ / λ ≈ 0.1) features and a slow evolution towards redder colours. 
These spectral features are consistent with the ejection of high-velocity, neutron-rich material during a neutron-star merger. The colour-coded numbers indicate the epoch of each spectrum. A spectrum of the broad-lined type Ic supernova SN 1998bw (8 days post-maximum; arbitrarily rescaled) is shown for comparison. Error bars are 1 σ . PowerPoint slide Full size image Our Chandra observations at T 0 + 9 d revealed the onset of a new emission component at X-ray energies. Although the basic model for kilonovae does not predict detectable X-ray emission, previous candidate kilonovae were all associated with an X-ray brightening. This led to the suggestion that the power source of the infrared transient may be thermal re-emission of the X-ray photons rather than radioactive heat 20 . However, in these past cases 20 , 21 , 22 , the X-ray luminosity was comparable to or higher than the optical/infrared component, whereas in our case the infrared component is clearly dominant and 20 times brighter than the faint X-ray emission. These different luminosities and temporal behaviours suggest that the X-ray emission is instead decoupled from the kilonova. The interaction of the fast-moving ejecta with the circumstellar material may produce detectable emission 23 . An ambient density n > 10 3 cm −3 would be required to explain the observed onset at about T 0 + 9 d, but neither the optical nor the X-ray spectra show any evidence for absorption from this dense intervening medium. After a binary neutron-star merger, X-rays could be produced by a rapidly rotating and highly magnetized neutron star. However, none of the current models 21 , 24 can reproduce persistent emission over the observed timescales of around two weeks. Fallback accretion 25 of the merger ejecta could account for such long-lived faint X-ray emission; however, the predicted thermal spectrum should not be visible at radio frequencies. 
Instead, a more likely explanation, also supported by the detection of a radio counterpart, is that the observed X-rays are synchrotron afterglow radiation from the short GRB 170817A. By assuming that radio and X-ray emission belong to the same synchrotron regime, we derive a spectral slope of β ≈ 0.64, consistent with the index measured from the X-ray spectrum (see Methods) and with typical values of GRB afterglow spectra 15 . Therefore, our detection of X-ray emission at the same position as AT 2017gfo (see Methods) shows that the short GRB and the optical/infrared transient are co-located, establishing a direct link between GRB 170817A, its kilonova and GW170817. In the standard GRB model 26 , the broadband afterglow emission is produced by the interaction of the jet with the surrounding medium. For an observer on the jet axis, the afterglow appears as a luminous ( L X,iso > 10 44 erg s −1 ) fading transient visible across the electromagnetic spectrum from the first few minutes after the burst. This is not consistent with our observations. If the observer is instead viewing beyond the opening angle θ j of the jetted outflow, relativistic beaming will weaken the emission in the observer’s direction by orders of magnitude. The afterglow only becomes apparent once the jet has spread and decelerated sufficiently that the beaming cone of the emission includes the observer 7 , 10 . Therefore, an off-axis observer sees that the onset of the afterglow is delayed by several days or weeks. In our case, the slow rise of the X-ray emission suggests that our observations took place near the peak time t_peak of the off-axis afterglow light curve, predicted to follow t_peak ∝ (E_k,iso/n)^(1/3) (θ_v − θ_j)^(5/2), where E_k,iso is the isotropic-equivalent blastwave energy. The off-axis angle Δθ is therefore constrained as Δθ = θ_v − θ_j ≈ 13° × t_peak^(2/5) × E_k,iso^(−2/15) × n^(2/15), where t_peak is given in units of 15 d, E_k,iso is in units of 10^50 erg and n is in units of 10^−3 cm^−3 . In Fig. 
3a we show that our dataset can be reproduced by a standard short GRB afterglow 15 with the only difference being the viewing angle: on-axis ( θ v ≪ θ j ) in the commonly observed scenario, and off-axis ( θ v > θ j ) in our case. The synthetic light curves were produced from two-dimensional jet simulations 27 , but the key features of these curves are general to spreading ejecta seen off-axis (see Methods for further details; also Extended Data Fig. 2 ). Our observations therefore independently confirm the collimated nature of GRB outflows 28 . Figure 3: Multi-wavelength light curves for the counterpart of GW170817. a , Temporal evolution of the X-ray and radio counterparts of GW170817 compared to the model predictions (thin solid lines) for a short GRB afterglow viewed at an angle θ v ≈ 28°. The thick grey line shows the X-ray light curve of the same afterglow as seen on-axis, falling in the typical range 15 of short GRBs (vertical dashed line). Upper limits are 3 σ . b , Temporal evolution of the optical and infrared transient AT 2017gfo compared with the theoretical predictions (solid lines) for a kilonova seen off-axis with viewing angle θ v ≈ 28°. For comparison with the ground-based photometry, Hubble Space Telescope measurements (squares) were converted to standard filters. Our model includes the contribution from a massive, high-speed wind along the polar axis ( M w ≈ 0.015 M ⊙ , v ≈ 0.08 c ) and from the dynamical ejecta ( M ej ≈ 0.002 M ⊙ , v ≈ 0.2 c ). The presence of a wind is required to explain the bright and long-lived optical emission, which is not expected otherwise (see dashed line). PowerPoint slide Source data Full size image Interestingly, all three observed electromagnetic counterparts (GRB, kilonova and afterglow) separately point at a substantial offset of the binary orbital plane axis relative to the observer, independent of any constraint arising directly from the gravitational-wave event. 
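The off-axis angle constraint discussed above, Δθ ≈ 13° t_peak^(2/5) E_k,iso^(−2/15) n^(2/15) (with t_peak in units of 15 d, E_k,iso in units of 10^50 erg and n in units of 10^−3 cm^−3), is straightforward to evaluate. The sketch below simply encodes that scaling with its normalised units; it is an illustration of the quoted relation, not the authors' full afterglow model:

```python
def off_axis_angle(t_peak_days, E_iso_erg, n_cm3):
    """Off-axis angle (degrees) from the afterglow peak-time scaling
    dtheta ~ 13 deg * t^(2/5) * E^(-2/15) * n^(2/15), with t normalised
    to 15 d, E to 1e50 erg and n to 1e-3 cm^-3, as in the text."""
    t = t_peak_days / 15.0
    E = E_iso_erg / 1e50
    n = n_cm3 / 1e-3
    return 13.0 * t**0.4 * E**(-2.0 / 15.0) * n**(2.0 / 15.0)

print(off_axis_angle(15.0, 1e50, 1e-3))   # 13.0 at the fiducial values
```

Because the dependences on energy and density are so weak (exponents of ±2/15), Δθ is driven almost entirely by the peak time, which is why a rise on roughly two-week timescales already points to a substantial viewing offset.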
The initial γ-ray emission is unusually weak, being orders of magnitude less luminous than typical short GRBs. This suggests a large angle between the jet and the observer. The standard top-hat profile that is usually adopted to describe GRB jets cannot easily account for the observed properties of GRB 170817A (see Methods). Instead, a structured jet profile, where the outflow energetics and Lorentz factor vary with the angle from the jet axis, can explain both the GRB and afterglow properties ( Extended Data Fig. 3 ). Alternatively, the low-luminosity γ-ray transient may not trace the prompt GRB emission, but come from a broader collimated, mildly relativistic cocoon 29 . Another independent constraint on the off-axis geometry comes from the spectral and temporal evolution of the kilonova light curves ( Fig. 3b ). The luminous and long-lived optical emission implies that the observer intercepts a substantial contribution from the wind component along the polar axis, which would be shielded by the lanthanide-rich ejecta for an edge-on observer along the equatorial plane ( Fig. 4 ). A comparison between the kilonova models 30 and our optical-infrared photometry favours an off-axis orientation, in which the wind is partially obscured by the dynamical ejecta, with an estimated inclination angle anywhere between 20° and 60° ( Extended Data Fig. 4 ), depending on the detailed configuration of the dynamical ejecta. Taking into account the uncertainties in the model, such as the morphologies of the ejecta and the possible different types of wind, this is in good agreement with the orientation inferred from afterglow modelling. The geometry of the binary merger GW170817 ( Fig. 4 ), here primarily constrained through electromagnetic observations, could be further refined through a joint analysis with the gravitational-wave signal. Figure 4: Schematic diagram for the geometry of GW170817. 
Following the neutron-star merger, a small amount of fast-moving neutron-rich ejecta (red shells) emits an isotropic kilonova peaking in the infrared. A larger-mass, neutron-free wind along the polar axis (blue arrows) produces kilonova emission peaking at optical wavelengths. This emission, although isotropic, is not visible to edge-on observers because it is shielded by the high-opacity ejecta. A collimated jet (black solid cone) emits synchrotron radiation visible at radio, X-ray and optical wavelengths. This afterglow emission outshines all other components if the jet is seen on-axis. However, to an off-axis observer, it appears as a low-luminosity component delayed by several days or weeks. The discovery of GW170817 and its X-ray counterpart shows that the second generation of gravitational-wave interferometers will enable us to uncover a new population of weak and probably off-axis GRBs associated with gravitational-wave sources, thus providing an unprecedented opportunity to investigate the properties of these cosmic explosions and their progenitors. This paves the way for multi-messenger (that is, electromagnetic and gravitational-wave radiation) modelling of the different aspects of these events, which may potentially help to break the degeneracies that exist in the models of neutron-star mergers when considered separately. Methods X-ray imaging with the Chandra X-ray Observatory Chandra observed the counterpart of GW170817 at four different epochs. The first observation, performed at T 0 + 2.2 d, did not detect X-ray emission from AT 2017gfo (ref. 31 ). Our observations (Principal Investigator (PI): E. Troja) were performed at T 0 + 9 d and T 0 + 15 d for total exposures of 50 ks and 47 ks, respectively. Data were reduced and analysed using standard analysis tools within CIAO v. 4.9 with calibration database CALDB v. 4.7.6. 
In both epochs we detect X-ray emission at the same position as the optical/infrared transient (see below) at a statistically significant level (false positive probability P < 10 −7 ). The source was detected with similarly high significance in a later 47 ks observation at T 0 + 16 d (ref. 32 ). Photon events from the afterglow were selected using a circular extraction region of radius 1 arcsec, while the background level of 2.3 × 10 −6 counts arcsec −2 s −1 was estimated from nearby source-free regions. In the 0.5–8.0 keV energy band, we measured 12 total counts in our first epoch and 17 total counts in the second epoch. To estimate the source flux, we analysed the spectra 33 within XSPEC v.12.9.1. We used an absorbed power-law model with the absorbing column fixed at the Galactic value N H = 8.76 × 10 20 cm −2 , and minimized the Cash statistics to find our best fit parameters. The joint fit of the two spectra yielded a photon index Г = 1.3 ± 0.4 and unabsorbed X-ray fluxes of (4.0 ± 1.1) × 10 −15 erg cm −2 s −1 at T 0 + 9 d and (5.0 ± 1.0) × 10 −15 erg cm −2 s −1 at T 0 + 15 d in the 0.3–10 keV energy band. All the quoted errors are at the 68% confidence level. Our results therefore suggest the presence of a slowly rising X-ray emission with F X ∝ t 0.5 . By assuming a similar background level and source spectral shape, we estimate an upper limit to the X-ray flux of 3.7 × 10 −15 erg cm −2 s −1 (95% confidence level) at T 0 + 2.2 d, consistent with our findings and the upper limits from Swift and NuSTAR 17 . Hubble Space Telescope observations We obtained several epochs of imaging and near-infrared grism spectroscopy (PI: E. Troja) with the Hubble Space Telescope 34 . Images were taken with both the infrared and the UVIS detectors of the Wide-Field Camera 3 (WFC3). Data were reduced in a standard fashion using the Hubble Space Telescope CalWF3 standard pipeline 35 , and the astrodrizzle processing 36 . 
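The slowly rising trend F_X ∝ t^0.5 inferred in the Chandra analysis above can be checked directly from the two quoted epoch fluxes. A quick sketch of that two-point slope (an illustration, not the authors' fitting procedure):

```python
import math

def temporal_index(t1, f1, t2, f2):
    """Power-law index alpha such that F ∝ t^alpha passes through two epochs."""
    return math.log(f2 / f1) / math.log(t2 / t1)

# Unabsorbed fluxes from the text: 4.0e-15 cgs at T0+9 d and 5.0e-15 cgs at T0+15 d
alpha = temporal_index(9.0, 4.0e-15, 15.0, 5.0e-15)
print(round(alpha, 2))   # 0.44, consistent with F_X ∝ t^0.5 given the ~25% flux errors
```

The same two-point estimate applied to the earlier upper limit at T0 + 2.2 d confirms that a rising, rather than fading, light curve is required.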
Fluxes were converted to magnitudes using WFC3 zero points 37 , 38 . Our final photometry is shown in Fig. 3b . We performed relative astrometry between our WFC3/F160W image and our Chandra observations. We identified five common point-like sources (in addition to the gravitational-wave counterpart AT 2017gfo) and excluded those next to the edge of the field of view and with poor signal-to-noise ratio. The remaining three sources were used to register the Chandra image onto the Hubble Space Telescope frame. The corrected X-ray position of AT 2017gfo is offset from the infrared position by 0.14″ ± 0.22″ (68% confidence level). The probability of finding an unrelated X-ray source at such a small offset is <10 −5 for field objects 39 as well as for an unrelated X-ray binary within the galaxy 40 . Pre-explosion imaging 41 disfavours the presence of a globular cluster at the transient location 42 . Spectroscopic frames were processed with the Hubble Space Telescope CalWF3 standard pipeline. To estimate any possible contribution from the nearby host galaxy, we fitted a second-order polynomial (modelling the galaxy) and a Gaussian (modelling the source) as a function of the y coordinate. We smoothed the resultant contamination model with a Savitzky–Golay filter to remove any high-frequency structure. We then subtracted the background and refitted the remaining source flux with a Gaussian. Finally, we combined the four images (per epoch per grism) using a 3 σ -clipped average, rejecting pixels associated with the bad-pixel masks and weighting by the inverse variance. Extended Data Fig. 5 illustrates this process. Optical and infrared imaging with Gemini-South We obtained several epochs of optical and infrared imaging (PI: E. Troja) of the gravitational-wave counterpart AT 2017gfo, starting on 21 August 2017. 
Optical data were acquired with the Gemini Multi-Object Spectrograph (GMOS) mounted on the 8-m Gemini South telescope, and reduced using standard Gemini/Image Reduction and Analysis Facility (IRAF) tasks. We performed point spread function (PSF)-fitting photometry using custom Python scripts after subtracting a Sersic function fit to remove the host galaxy flux. Errors associated with the Sersic fit were measured by smoothing the fit residuals, and then propagated through the PSF fitting. The resulting griz photometry, shown in Fig. 3b , was calibrated to Pan-STARRS 43 using a common set of field stars for all frames. Infrared images (JHKs bands) were acquired with the Flamingos-2 instrument. Data were flat-fielded and sky-subtracted using custom scripts designed for the RATIR project. Reduced images were aligned and stacked using SWarp. The PSF photometry was calculated, after host galaxy subtraction, and calibrated to a common set of 2MASS 44 sources, using the 2MASS zero points to convert to the AB system. Optical imaging with KMTNet Three Korea Microlensing Telescope Network (KMTNet) 1.6-m telescopes 45 observed the counterpart of GW170817 nearly every night starting on 18 August 2017 at three locations, the South African Astronomical Observatory in South Africa, the Siding Spring Observatory in Australia, and the Cerro-Tololo Inter-American Observatory in Chile. The observations were made using the B, V, R and I filters. Data were reduced in a standard fashion. Reference images taken after 31 August were used to subtract the host galaxy contribution. Photometry was performed using SExtractor, and calibrated using the AAVSO Photometric All-Sky Survey (APASS) catalogue. Our final photometry is shown in Fig. 3b . We also include publicly released data from refs 46 , 47 , 48 , 49 , 50 . Optical spectroscopy with Gemini We obtained optical spectroscopy (PI: E. 
Troja) of the gravitational-wave counterpart AT 2017gfo with GMOS beginning at 23:38 ut on 20 August 2017. A series of four spectra, each 360 s in duration, were obtained with both the R400 and B600 gratings (see also ref. 49 ). We employed the 1.0″ slit for all observations. All data were reduced with the Gemini IRAF (v. 1.14) package following standard procedures. The resulting spectrum of AT 2017gfo is plotted in Fig. 2 . The spectrum exhibits a relatively red continuum, with a turnover around 7,500 Å. The lack of strong absorption features is consistent with the low estimated extinction along the sightline 51 , E B−V = 0.105, and suggests that there is no substantial intrinsic absorption. No narrow or broad features, such as those that are typically observed in all types of core-collapse supernova, are apparent. We attempted to spectroscopically classify the source using the SuperNova IDentification (SNID) code 52 , with the updated templates for stripped-envelope supernovae. No particularly good match was found, even using this expanded template set. In this case SNID often defaults to classifications of type Ib/c (typically of the broad-lined sub-class), owing to the broad (and therefore typically weaker) nature of the features. For comparison in Fig. 2 we plot the spectrum of the prototypical broad-lined type Ic supernova SN 1998bw 53 . It is evident the source is not a good match. Even after removing the continuum (‘flattening’), the match to mean spectral templates of broad-lined type Ic supernovae 54 is quite poor. Radio observations with the Australia Telescope Compact Array We observed the target with the Australia Telescope Compact Array at three different epochs ( T 0 + 14.5 d, T 0 + 20.5 d and T 0 + 28.5 d) at the centre frequencies 16.7 GHz, 21.2 GHz, 43 GHz and 45 GHz in continuum mode (PI: E. Troja). The data were reduced with the data reduction package MIRIAD 55 using standard procedures. 
Radio images were formed at 19 GHz and 44 GHz via the Multi Frequency Synthesis technique. No detection was found at the position of the optical/infrared transient; our upper limits are shown in Fig. 3a . During the time interval covered by our observations, detections of the radio afterglow at 3 GHz and 6 GHz were reported 18 , 49 at a level of about 35 μJy. Properties of the host galaxy NGC 4993 In terms of morphology, NGC 4993 shows an extended, disturbed feature and prominent dust lanes in the inner region ( Fig. 1a ), suggestive of a minor merger in the past. From the Ks-band images we derive an absolute magnitude of M K ≈ −22 AB mag and a stellar mass of log( M/M ⊙ ) ≈ 10.9, calculated by assuming a stellar-mass-to-light ratio of the order of unity 56 . Structural parameters were derived from our F110W and F160W image using GALFIT. A fit with a single Sersic component yields an index of 5.5, an ellipticity of about 0.12, and an effective radius R e ≈ 3.4 kpc. The lack of emission lines in our spectra suggests little to no ongoing star formation at the location of the neutron-star merger, consistent with the low ultraviolet luminosity M F275W > −7.5 AB mag (95% confidence level) in the vicinity of the transient. Indeed, the measured Lick indices 57 with Hβ = 1.23 and [MgFe] = 3.16 and the modelling of the spectral energy distribution suggest an old (>2 billion years), evolved stellar population of solar or slightly sub-solar metallicity ( Extended Data Fig. 6 ). The overall properties of NGC 4993 are therefore consistent with an early-type galaxy, and within the range of galaxies harbouring short GRBs 5 . In the nuclear region of NGC 4993, our radio observations show a persistent and relatively bright radio source with flux 420 ± 30 μJy at 19 GHz. The same source is not visible at 44 GHz, indicating a steep radio spectrum. 
The central radio emission suggests the presence of a low-luminosity active galactic nucleus contributing to the X-ray emission from the galaxy centre ( Fig. 1b ). Active galactic nucleus activity in a GRB host galaxy is rarely observed, but is not unprecedented 58 in nearby short GRBs. Off-axis GRB modelling We interpret the radio and X-ray emission as synchrotron radiation from a population of shock-accelerated electrons. By assuming that radio and X-rays belong to the same synchrotron regime, we derive a spectral slope of 0.64, consistent with the value measured from the X-ray spectrum β = Γ − 1 = 0.3 ± 0.4. This corresponds to the spectral regime between the injection frequency ν m and the cooling frequency ν c for a non-thermal electron population with power-law index 2.3, close to the typical value for GRB afterglows 59 . The presence of a cooling break between radio and X-rays would imply a lower value for the power-law index. The apparent flattening of the X-ray light curve, and the fact that the two observations adjacent to the radio detection are upper limits, suggest that the detections were close to a temporal peak of the light curve. We assume that the radio and X-ray detections correspond to afterglow emission from a GRB jet observed at an angle, with the observer placed at an angle θ v outside the initial jet opening angle θ j ( Fig. 4 ). We test two implementations of this assumption for consistency with the data: a semi-analytic simplified spreading homogeneous shell model 11 and light curves derived from a series of high-resolution two-dimensional relativistic hydrodynamics simulations 27 . Standard afterglow models 60 contain at least six free variables: θ j , θ v , the isotropic equivalent jet energy E iso , ambient medium number density n 0 , the magnetic field energy fraction ε B and the accelerated electron energy fraction ε e . These are too many parameters to be constrained by the observations. 
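The electron index quoted above follows from the spectral slope via the standard synchrotron relation β = (p − 1)/2 in the regime between ν m and ν c ; a quick arithmetic check (a sketch using only the numbers quoted in the text):

```python
# In the synchrotron regime nu_m < nu < nu_c the flux density goes as
# F_nu ∝ nu**(-beta) with beta = (p - 1) / 2, hence p = 2 * beta + 1.

def electron_index(beta):
    """Electron power-law index p implied by a spectral slope beta."""
    return 2.0 * beta + 1.0

# The radio-to-X-ray slope of 0.64 derived in the text:
print(round(electron_index(0.64), 2))  # 2.28, i.e. the p ≈ 2.3 quoted above
```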
We therefore take ‘standard’ values for model parameters ( ε B ≈ 0.01, ε e ≈ 0.1, n 0 ≈ 10 −3 , θ j ≈ 15°), and choose E iso and θ v to match the observations. We caution that the displayed match demonstrates only one option in a parameter space that is degenerate for the current number of observational constraints. A key feature of interest is the peak time, which is plausibly constrained by the current observations. This scales according to t peak ∝ ( E iso / n 0 ) 1/3 Δ θ 2.5 , which follows from complete scale-invariance between curves of different energy and density 61 , and from a survey of off-axis curves for different Δ θ using the semi-analytical model. Note that the scaling applies to the temporal peak, and not to the moment t start when the off-axis signal starts to become visible, where t start ∝ Δ θ 8/3 (similar to a jet break). The scaling of 2.5 is slightly shallower and reflects the trans-relativistic transition as well. From our model comparisons to data, we infer an offset of Δ θ ≈ 13°. If a dense wind exists directly surrounding the jet, a cocoon of shocked dense material and slower jet material has been argued to exist and emerge with the jet in the form of a slower-moving outflow 62 , 63 . When emitted quasi-isotropically, or seen on-axis, cocoon afterglows are, however, expected to peak at far earlier times (hours after T 0 ) than currently observed 64 , 65 . A more complex initial shape of the outflow than a top hat, such as a structured jet 66 with a narrow core and an angle for the wings that is smaller than the observer angle, will have one additional degree of freedom. 
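The peak-time scaling just described can be used to rescale a reference light curve; a minimal sketch in pure Python (the fiducial reference values below are illustrative placeholders, not fitted parameters from this work):

```python
def scaled_peak_time(t_peak_ref, E_iso, n0, dtheta,
                     E_ref=1e51, n_ref=1e-3, dtheta_ref=13.0):
    """Rescale an off-axis afterglow peak time using the scale invariance
    t_peak ∝ (E_iso / n0)**(1/3) * dtheta**2.5.

    t_peak_ref is the peak time of a reference light curve computed for
    (E_ref, n_ref, dtheta_ref); all reference values here are assumptions
    chosen for illustration only.
    """
    return (t_peak_ref
            * ((E_iso / n0) / (E_ref / n_ref)) ** (1.0 / 3.0)
            * (dtheta / dtheta_ref) ** 2.5)

# Example: 8x the energy at fixed density and offset doubles the peak time,
# since 8**(1/3) = 2.
print(round(scaled_peak_time(100.0, 8e51, 1e-3, 13.0), 6))  # 200.0
```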
It is not possible to distinguish between the fine details of the various models: at the time of the observations, top-hat jets, structured jets and collimated cocoon-type outflows are all decelerating and spreading blast waves segueing from relativistic origins into a non-relativistic stage, and all are capable of producing a synchrotron afterglow through a comparable mechanism. At this stage we cannot rule out a broad flat X-ray/radio peak or additional brightening due to jet structure, nor a subsequent emergent contribution from kilonova ejecta interaction with the ambient medium. Origin of the γ-ray emission For a standard top-hat GRB jet 67 , the peak energy E peak and the total energy release E iso scale as a and a 3 , respectively, where a −1 ≈ 1 + Γ 2 Δ θ 2 and Δ θ > 1/ Γ . By assuming typical values of E iso ≈ 2 × 10 51 erg, E peak ≈ 1 MeV, and a Lorentz factor of Γ ≈ 100 to avoid opacity due to pair production and Thomson scattering 26 , the expected off-axis γ-ray emission would be much fainter than GRB 170817A. This suggests that the observed γ-rays might come from a different and probably isotropic emission component, such as precursors 68 seen in some short GRBs or a mildly relativistic cocoon 64 . A different configuration is that of a structured jet, where the energetics and Lorentz factor of the relativistic flow depend upon the viewing angle 66 , 69 . In this case, the observed flux is dominated by the elements of the flow pointing close to the line of sight. For a universal jet, a power-law dependence is assumed with E γ,iso ( θ v ) ∝ ( θ v / θ c ) −2 , where θ c is the core angle of the jet. For a Gaussian jet, the energy scales as E γ,iso ( θ v ) ∝ exp(− θ v 2 /2 θ c 2 ). Owing to its substantial emission at wide angles, a universal jet fails to reproduce the afterglow data ( Extended Data Fig. 3 ). 
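The two angular energy profiles just described can be compared directly; a sketch (assuming the common parameterisations E ∝ (θ v /θ c ) −2 outside the core for the universal jet and E ∝ exp(−θ v 2 /2θ c 2 ) for the Gaussian jet; the core energy and core angle values below are illustrative, not fitted):

```python
import math

def universal_jet_energy(theta_v, E_core, theta_c):
    """Power-law ('universal') structured jet: flat inside the core,
    E ∝ (theta_v / theta_c)**-2 outside it."""
    if theta_v <= theta_c:
        return E_core
    return E_core * (theta_v / theta_c) ** -2.0

def gaussian_jet_energy(theta_v, E_core, theta_c):
    """Gaussian structured jet: E ∝ exp(-theta_v**2 / (2 * theta_c**2))."""
    return E_core * math.exp(-theta_v ** 2 / (2.0 * theta_c ** 2))

# At theta_v = 4 * theta_c the Gaussian profile suppresses the energy by
# exp(-8), between three and four orders of magnitude, while the power-law
# profile falls by only a factor of 16; this is why the universal jet
# over-predicts the wide-angle emission.
E_core, theta_c = 2e51, 1.0  # illustrative values only
print(gaussian_jet_energy(4.0, E_core, theta_c) / E_core)   # exp(-8) ≈ 3.35e-4
print(universal_jet_energy(4.0, E_core, theta_c) / E_core)  # 1/16 = 0.0625
```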
A Gaussian jet with standard isotropic energy E γ,iso ≈ 2 × 10 51 erg can instead reproduce the observed energetics 13 , 14 of GRB 170817A ( E γ,iso ≈ 5 × 10 46 erg when θ v ≈ 4 θ c ). The same jet can also describe the broadband afterglow data ( Extended Data Fig. 3 ), thus representing a consistent model for the prompt and afterglow emissions. Kilonova modelling Our kilonova (or macronova) calculations are based on the approach developed by ref. 30 . We use the multigroup, multidimensional radiative Monte Carlo code SuperNu 70 , 71 , 72 with the set of opacities produced by the Los Alamos suite of atomic physics codes 73 , 74 , 75 . For this paper, we build upon the range of two-dimensional simulations 30 using the class A ejecta morphologies and varying the ejecta mass, velocity, composition and orientation as well as the model for energy deposition in post-nucleosynthetic radioactive decays. Our nuclear energy deposition is based on the finite-range droplet model (FRDM) of nuclear masses. Kilonova light curves can be roughly separated into two components: an early peak dominated by the wind ejecta (where by ‘wind’ we indicate the entire variety of secondary post-merger outflows, with many elements in the atomic mass range between the iron peak up through the second r-process peak) and a late infrared peak that is powered by the lanthanide-rich (main r-process elements) dynamical ejecta. The luminous optical and ultraviolet emission 17 requires a large wind mass ( M w > (0.015–0.03) M ⊙ ) and a composition with moderate neutron richness (‘wind 2’ with Y e = 0.27 from ref. 30 ). A large fraction of these ejecta consists of first peak r-process elements. The late-time infrared data probe the properties of the dynamical ejecta ( Y e < 0.2), arguing for a mass of M ej ≈ (0.001–0.01) M ⊙ . This ejecta is primarily composed of the main r-process elements lying between the second and third r-process peaks (inclusive). 
Within the errors of our modelling, the low inferred ejecta mass combined with the high rate of neutron-star mergers inferred from this gravitational-wave detection is in agreement with the neutron-star merger being the main site of the r-process production 76 . However, our models seem to overproduce the first peak r-process relative to the second and third peaks. This could be due to the model simplifications in the treatment of ejecta composition, or it could be because this particular event is not standard for neutron-star mergers. Another, more plausible, source of error comes from the uncertainties in nuclear physics, such as the nuclear mass model used in the r-process nucleosynthesis calculation. Our baseline nuclear mass model (FRDM 77 ) tends to underestimate the nuclear heating rates, compared to other models such as the DZ31 model 78 . Specifically, in the latter model the abundances of trans-lead elements can dramatically alter the heating at late times 76 , 79 . Combined differences in the heating rate and thermalization translate to nearly a factor of 10 in the nuclear energy deposition at late times 79 ( t > 2 days). We have therefore adjusted the heating rate in the dynamical ejecta to compensate for this effect. If this nuclear heating rate is too high, then we are underestimating the mass of the dynamical ejecta. The opacity of the lanthanide-rich tidal ejecta is dominated by a forest of lines up to the near-infrared, causing most of the energy to escape beyond 1 μm, and one indicator of ejecta dictated by lanthanide opacities is a spectrum peak above 1 μm that remains relatively flat in the infrared. However, standard parameters for the ejecta predict a peak between 5 and 10 days. To fit the early peak (about 3 days) requires either a lower mass, or higher velocities. Our best-fit model has a tidal/dynamic ejecta mass of M ej ≈ 0.002 M ⊙ and median velocity (approximately v peak /2) of 0.2 c . Extended Data Fig. 
4 shows our synthetic light curves for different viewing angles. In the on-axis orientation, the observer can see both types of outflows, while in the edge-on orientation the wind outflow is completely obscured. The system orientation most strongly affects the behaviour in the blue optical bands, while the infrared bands are largely unaffected. The observed slow decline in the optical bands for this event is best fitted by moderate-latitude viewing angles (about 20°–60°). Data availability All relevant data are available from the corresponding author on reasonable request. Data presented in Fig. 3b are included as Source Data with the online version of the paper.
Astrophysicist Chris Fryer was enjoying an evening with friends on August 25, 2017, when he got the news of a gravitational-wave detection by LIGO, the Laser Interferometer Gravitational-wave Observatory. The event appeared to be a merger of two neutron stars—a specialty for the Los Alamos National Laboratory team of astrophysicists that Fryer leads. As the distant cosmic cataclysm unfolded, fresh observational data was pouring in—only the fifth detection published since the observatory began operating almost two years ago. "As soon as I heard the news, I knew that understanding all of the implications would require input from a broad, multi-disciplinary set of scientists," said Fryer, who leads Los Alamos' Center for Theoretical Astrophysics. Fryer's colleagues, Ryan Wollaeger and Oleg Korobkin, outlined a series of radiation transport calculations and were given priority on Los Alamos' supercomputers to run them. "Within a few hours, we were up and running." They soon discovered the LIGO data showed more ejected mass from the merger than the simulations accounted for. Other researchers at Los Alamos began processing data from a variety of telescopes capturing optical, ultraviolet, x-ray, and gamma-ray signals at observatories around the world (and in space) that had all been quickly directed to the general location of the LIGO discovery. The theorists tweaked their models and, to their delight, the new LIGO data confirmed that heavy elements beyond iron were formed by the r-process (rapid neutron-capture process) in the neutron-star merger. The gravitational wave observation was having a major impact on theory. They also quickly noticed that, within seconds of the time of the gravitational waves, the Fermi spacecraft reported a burst of gamma rays from the same part of the sky. This is the first time that a gravitational wave source has been detected in any other way. 
It confirms Einstein's prediction that gravitational waves travel at the same speed as gamma rays: the speed of light. When neutron stars collide The gravitational wave emission and related electromagnetic outburst came from the merger of two neutron stars in a galaxy called NGC 4993, about 130 million light-years away in the constellation Hydra. The neutron stars are the crushed remains of massive stars that once blew up in tremendous explosions known as supernovas. With masses 10 and 20 percent greater than the sun's and a footprint the size of Washington, D.C., the neutron stars whirled around each other toward their demise, spinning hundreds of times per second. As they drew closer like a spinning ice skater pulling in her arms, their mutual gravitational attraction smashed the stars together in a high-energy flash called a short gamma-ray burst and emitted the tell-tale gravitational wave signal. Although short gamma-ray bursts have long been theorized to be produced through neutron star mergers, this event—with both gamma-ray and gravitational-wave observations—provides the first definitive evidence. With Los Alamos's cross-disciplinary, multi-science expertise, the Los Alamos team was geared up and ready for just such an event. Laboratory researcher Oleg Korobkin is the lead theory author on a paper released yesterday in Science, while the Lab's Ryan Wollaeger is the second theory author on a paper released yesterday in Nature. Beyond that theory work, though, Los Alamos scientists were engaged in a broad range of observations, astronomy, and data analysis tasks in support of the LIGO neutron-star discovery. Because the Laboratory's primary mission centers on the nation's nuclear stockpile, Los Alamos maintains deep expertise in nuclear physics and its cousin astrophysics, the physics of radiation transport, data analysis, and the computer codes that run massive nuclear simulations on world-leading supercomputers. 
In other words, the Laboratory is a logical partner for extending LIGO discoveries into theories and models and for confirming the conclusions about what the observatory discovers.
10.1038/nature24290
Medicine
Research finds how the brain decides between effort and reward
M. C. Klein-Flugge et al. Neural Signatures of Value Comparison in Human Cingulate Cortex during Decisions Requiring an Effort-Reward Trade-off, Journal of Neuroscience (2016). DOI: 10.1523/JNEUROSCI.0292-16.2016 Journal information: Journal of Neuroscience
http://dx.doi.org/10.1523/JNEUROSCI.0292-16.2016
https://medicalxpress.com/news/2016-09-brain-effort-reward.html
Abstract Integrating costs and benefits is crucial for optimal decision-making. Although much is known about decisions that involve outcome-related costs (e.g., delay, risk), many of our choices are attached to actions and require an evaluation of the associated motor costs. Yet how the brain incorporates motor costs into choices remains largely unclear. We used human fMRI during choices involving monetary reward and physical effort to identify brain regions that serve as a choice comparator for effort-reward trade-offs. By independently varying both options' effort and reward levels, we were able to identify the neural signature of a comparator mechanism. A network involving supplementary motor area and the caudal portion of dorsal anterior cingulate cortex encoded the difference in reward (positively) and effort levels (negatively) between chosen and unchosen choice options. We next modeled effort-discounted subjective values using a novel behavioral model. This revealed that the same network of regions involving dorsal anterior cingulate cortex and supplementary motor area encoded the difference between the chosen and unchosen options' subjective values, and that activity was best described using a concave model of effort-discounting. In addition, this signal reflected how precisely value determined participants' choices. By contrast, separate signals in supplementary motor area and ventromedial prefrontal cortex correlated with participants' tendency to avoid effort and seek reward, respectively. This suggests that the critical neural signature of decision-making for choices involving motor costs is found in human cingulate cortex and not ventromedial prefrontal cortex as typically reported for outcome-based choice. Furthermore, distinct frontal circuits seem to drive behavior toward reward maximization and effort minimization. SIGNIFICANCE STATEMENT The neural processes that govern the trade-off between expected benefits and motor costs remain largely unknown. 
This is striking because energetic requirements play an integral role in our day-to-day choices and instrumental behavior, and a diminished willingness to exert effort is a characteristic feature of a range of neurological disorders. We use a new behavioral characterization of how humans trade off reward maximization with effort minimization to examine the neural signatures that underpin such choices, using BOLD MRI neuroimaging data. We find the critical neural signature of decision-making, a signal that reflects the comparison of value between choice options, in human cingulate cortex, whereas two distinct brain circuits drive behavior toward reward maximization or effort minimization. Keywords: cingulate cortex, cost-benefit decision making, fMRI, motor cost, physical effort, value comparison. Introduction Cost-benefit decisions are a central aspect of flexible goal-directed behavior. One particularly well-studied neural system concerns choices where costs are tied to the reward outcomes (e.g., risk, delay) ( Kable and Glimcher, 2007 ; Boorman et al., 2009 ; Philiastides et al., 2010 ). Much less is known about choices tied to physical effort costs, despite their ubiquitous presence in human and animal behavior. The intrinsic relationship between effort and action may engage neural circuits distinct from those involved in other value-based choice computations. There is growing consensus that different types of value-guided decisions are underpinned by distinct neural systems, depending on the type of information that needs to be processed (e.g., Rudebeck et al., 2008 ; Camille et al., 2011b ; Kennerley et al., 2011 ; Pastor-Bernier and Cisek, 2011 ; Rushworth et al., 2012 ). 
For example, activity in the ventromedial prefrontal cortex (vmPFC) carries a signature of choice comparison (chosen-unchosen value) for decisions between abstract goods or when costs are tied to the outcome ( Kable and Glimcher, 2007 ; Boorman et al., 2009 ; FitzGerald et al., 2009 ; Philiastides et al., 2010 ; Hunt et al., 2012 ; Kolling et al., 2012 ; Clithero and Rangel, 2014 ; Strait et al., 2014 ). By contrast, such value difference signals are found more dorsally in medial frontal cortex when deciding between exploration versus exploitation ( Kolling et al., 2012 ). Choices requiring the evaluation of physical effort rest on representations of the required actions and their energetic costs, and thus likely require an evaluation of the internal state of the agent. This is distinct from choices based solely on reward outcomes ( Rangel and Hare, 2010 ). Indeed, the proposed network for evaluating motor costs comprises brain regions involved in action planning and execution, including the cingulate cortex, putamen, and supplementary motor area (SMA) ( Croxson et al., 2009 ; Kurniawan et al., 2010 ; Prévost et al., 2010 ; Burke et al., 2013 ; Kurniawan et al., 2013 ; Bonnelle et al., 2016 ). Neurons in anterior cingulate cortex (ACC) encode information about rewards, effort costs, and actions ( Matsumoto et al., 2003 ; Kennerley and Wallis, 2009 ; Luk and Wallis, 2009 ; Hayden and Platt, 2010 ), and integrate this information into an economic value signal ( Hillman and Bilkey, 2010 ; Hosokawa et al., 2013 ). Moreover, lesions to ACC profoundly impair choices of effortful options and between action values ( Walton et al., 2003 , 2006 , 2009 ; Schweimer and Hauber, 2005 ; Kennerley et al., 2006 ; Rudebeck et al., 2006 , 2008 ; Camille et al., 2011b ). 
While these studies highlight the importance of motor-related structures in representing effort information, it remains unclear whether computations in these regions are indeed related to comparing effort values (or effort-discounted net values), the essential neural signature, which would implicate these areas in decision making. Indeed, these regions could simply represent effort, which is then passed onto other regions for value comparison processes. A number of questions thus arise. First, is information about reward and effort compared in separate neural structures, or is this information fed to a region that compares options based on their integrated value? Second, do regions that preferably encode reward or effort have a direct influence on determining choice? Finally, assuming separate neural systems are present for influencing choices based on reward versus effort, how does the brain arbitrate between these signals when reward and effort information support opposing choices? Here we used a task designed to identify signatures of a choice comparison for effort-based decisions in humans using fMRI and to test whether different neural circuits “drive” choices toward reward maximization versus energy minimization. We show that the neural substrates of effort-based choice are distinct from those computing outcome-related choices: well-known reward and effort circuits centered on vmPFC and SMA bias choices to be more driven by benefits or motor costs, respectively, with a region in cingulate cortex integrating cost and benefit information and comparing options based on these integrated subjective values. Materials and Methods Participants. Twenty-four participants with no history of psychiatric or neurological disease, and with normal or corrected-to-normal vision took part in this study (mean age 28 ± 1 years, age range 19–38 years, 11 females). 
All participants gave written informed consent and consent to publish before the start of the experiment; the study was approved by the local research ethics committee at University College London (1825/003) and conducted in accordance with the Declaration of Helsinki. Participants were reimbursed with £15 for their time; in addition, they accumulated average winnings of £7.16 ± 0.11 during each of the two blocks of the task (the maximum winnings per block were scaled to £8; the resulting average total pay was £29.32). Three participants were excluded from the analysis: one for failing to stay awake during scanning and two due to excessive head movements (summed movement in any direction and run >40 mm). All analyses were performed on the remaining 21 participants. Behavioral task. Participants received both written and oral task instructions. They were asked to make a series of choices between two options, which independently varied in required grip force (effort) and reward magnitude (see Fig. 1 A ). The reward magnitude was shown as a number (range: 10–40 points; approximately corresponding to pence) and required force levels were indicated as the height of a horizontal bar (range: 20%–80% of the participant's maximum grip force). Each trial comprised an offer, response, and outcome phase; a subset of 30% of trials also contained an effort production phase. During the offer phase, participants decided which option to choose but they were not yet able to indicate their response. There were two trial types (50% each): ACT (action) and ABS (abstract). In ACT trials, the two choice options were presented to the left and right of fixation, and thus in a horizontal or action space configuration in which the side of presentation directly related to the hand with which to choose that option. In ABS trials, choice options were shown above and below fixation, and thus in a vertical or goods space arrangement that did not reveal the required action. 
In both conditions, stimuli were presented close to the center of the screen and participants did not need to move their eyes to inspect them. To maximally distinguish the hemodynamic response from the offer and response phase, the duration of the offer phase varied between 4 and 11 s (Poisson distributed; mean 6 s). The response phase started when the fixation cross turned red. In ACT trials, the arrangement of the two choice options remained the same; in ABS trials, the two options at the top and bottom were switched to the left and right of fixation (with a 50/50% chance), thus revealing the required action mapping. Choices were indicated by a brief squeeze of a grip device (see below for details) on the corresponding side (maximum response time: 3 s; required force level: 35% of maximum voluntary contraction [MVC]). ACT and ABS trials were merged for all analyses because no significant differences were found for the tests reported in this manuscript. On 70% of trials, no effort was required: as soon as participants indicated their choice, the unchosen option disappeared, and the message “no force” was displayed for 500 ms. The next trial commenced after a variable delay (intertrial interval: 2–13 s; Poisson distributed; mean: 5 s). On the remaining 30% of trials, a power grip of 12 s was required (effort). Again, the unchosen option disappeared, but now a thermometer appeared centrally and displayed the target force level of the chosen option. Participants were given online visual feedback about the applied force level using changing fluid levels in the thermometer. On successful application of the required force for at least 80% of the 12 s period, a green tick appeared (500 ms; outcome phase; delay preceding outcome: 0.5–1.5 s uniform) and the reward magnitude of the chosen option was added to the total winnings. Otherwise, the total winnings remained unchanged (red cross: 500 ms). 
Because participants were almost always successful in applying the required force on effort trials (accuracy: 99.30 ± 0.004%; only 4 participants made any mistakes), there was no confound between effort level and risk/reward expectation. The sensitivity of the grip device was manipulated between trials (high or low). A high gain meant that the grippers were twice as sensitive as for a low gain, and thus the same force deviation doubled the rate of change in the thermometer's fluid level. While this manipulation was introduced to study interactions between mental and physical effort, none of our behavioral or fMRI analyses revealed any significant effects of gain during the choice phase, which is the focus of the present paper. To summarize, our task involved several important features: (1) as our aim was to specifically examine value comparison mechanisms during effort-based choice, we manipulated both options' values and thus the expected values of the two offers had to be computed and compared online in each trial, unlike in previous experiments ( Croxson et al., 2009 ; Kurniawan et al., 2010 , 2013 ; Prévost et al., 2010 ; Burke et al., 2013 ; Bonnelle et al., 2016 ); (2) the decision process and the resulting motor response were separated in time (see Fig. 1 A ). This enabled us to examine the value comparison in the absence of, and not confounded with, processes related to action execution; (3) both reward and effort levels were varied parametrically rather than in discrete steps, and orthogonally to each other, thereby granting high sensitivity for the identification of effort and reward signals, respectively; (4) efforts were only realized on a subset of trials, ensuring that decisions were not influenced by fatigue ( Klein-Flügge et al., 2015 ). 
Importantly, however, at the time of choice, participants did not know whether a given trial was real or hypothetical; therefore, the optimal strategy was to treat each trial as potentially real; and (5) the duration of the grip on effort trials (12 s) had been determined in pilot experiments and ensured that force levels were factored into the choice process. Moreover, the fixed duration of grip force also meant that effort costs were not confounded with temporal costs. Scanning procedure. Before scanning, force levels were adjusted to each individual's grip strength using a grip calibration. Participants were seated in front of a computer monitor and held a custom-made grip device in both hands. Each participant's baseline (no grip) and MVC were measured over a period of 3 s, separately for both hands. The measured values were used to define individual force ranges (0%–100%) for each hand, which were then used in the behavioral task, both prescanning and during scanning. Before entering the scanner, participants completed a training session consisting of one block of the behavioral task (112 trials, ∼30 min). This gave them the opportunity to experience different force levels and to become familiar with the task. Importantly, it also ensured that decisions made subsequently in the scanner would not be influenced by uncertainty about the difficulty of the displayed force levels. In the scanner, participants completed two blocks of the task (overall task duration ∼60 min; 224 choices). Generation of choice stimuli. Because our main question related to the encoding of value difference signals during effort-based choices, the generation of suitable choice stimuli was a key part of the experimental design. 
Choice options were identical for every individual and were chosen such that they would minimize the correlation between the fMRI regressors for chosen and unchosen effort, reward magnitude, and value (obtained mean correlations after scanning: effort: −0.23; reward magnitude: 0.11; value: 0.43; see Fig. 1 C ). We also ensured that left and right efforts, reward magnitudes, and values were decorrelated to be able to identify action value signals (effort: 0.28; reward magnitude: 0.05; value: 0.07). We simulated several individuals using a previously suggested value function for effort-based choice ( Prévost et al., 2010 ). Stimuli were optimized with the following additional constraints: either the efforts or the reward magnitudes had to differ by at least 0.1 on each trial, the range of efforts and reward magnitudes was [0.2 to 0.8] × MVC or 0–50 points, respectively, and the overall expected value for both hands was comparable. Furthermore, in 85% of trials, the larger reward was paired with the larger effort level, and the smaller reward with the smaller effort level, making the choice hard, but on 15% of trials, the larger reward was associated with the smaller effort level (“no-brainer”). The two choice sets that minimized the correlations between our regressors of interest were used for the fMRI experiment. A third stimulus set was saved for the behavioral training before scanning. Preliminary fMRI analyses revealed that we had overlooked a bias in our stimuli. In the last third of trials of the second block, the overall offer value ((magnitude1/effort1 + magnitude2/effort 2)/2) decreased steadily, leading to skewed contrast estimates. Therefore, the last 40 trials were discarded from all analyses. We refer to choices in this study as “effort-based” to highlight the distinction from purely outcome/reward-based choices or choices involving other types of costs (e.g., delay-based). But of course, in our task, all choices were effort- as well as reward-based. 
Recordings of grip strength. The grippers were custom-made and consisted of two force transducers (FSG15N1A, Honeywell) placed between two molded plastic bars (see also Ward and Frackowiak, 2003 ). A continuous recording of the differential voltage signal, proportional to the exerted force, was acquired, fed into a signal conditioner (CED 1902, Cambridge Electronic Design), digitized (CED 1401, Cambridge Electronic Design), and fed into the computer running the stimulus presentation. This enabled us, during effort trials, to give online feedback about the exerted force using the thermometer display. Behavioral analysis. To examine which task variables affected participants' choice behavior, a logistic regression was fitted to participants' choices (1 = RH; 0 = LH) using the following nine regressors: a RH-LH bias (constant term); condition (ABS or ACT); gain (high or low); LH-effort on previous trial; RH-effort on previous trial; reward magnitude left; reward magnitude right; effort left; effort right. t tests performed across participants on the obtained regression coefficients were adjusted for multiple comparisons using Bonferroni correction. Because only reward magnitudes and efforts influenced behavior significantly (see Results), the logistic regression models performed for the analysis of the neural data below ( Eqs. 2 , 3 ) only contained these variables (or their amalgamation into combined value). To examine the influence of reward and effort on participants' choice behavior in more depth, we tested whether participants indeed weighed up effort against reward, and whether they treated reward and effort as continuous variables. If reward and effort compete for their influence on choice, then the influence of effort should become larger as the reward difference becomes smaller, and vice versa. Thus, we performed a median split of our trials according to the absolute difference in reward (or effort) between the two choice options. 
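A minimal sketch of such a choice regression is given below. It fits a logistic regression by iteratively reweighted least squares to simulated choices driven by both options' rewards and efforts; the real analysis used nine regressors (including condition, gain, and previous-trial efforts), so this reduced four-regressor version is an illustrative assumption.

```python
import numpy as np

def fit_logistic(X, y, n_iter=50):
    """Maximum-likelihood logistic regression via iteratively
    reweighted least squares; the added first column is a constant
    (analogous to the RH-LH bias term)."""
    X = np.column_stack([np.ones(len(X)), X])
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        W = p * (1.0 - p) + 1e-9            # IRLS weights
        z = X @ w + (y - p) / W             # working response
        w = np.linalg.solve((X.T * W) @ X, (X.T * W) @ z)
    return w

# simulate choices (1 = right hand) driven by rewards and efforts of both options
rng = np.random.default_rng(1)
n = 2000
X = rng.uniform(-1, 1, size=(n, 4))   # [reward L, reward R, effort L, effort R]
true_w = np.array([0.0, -2.0, 2.0, 1.5, -1.5])   # reward attracts, effort repels
p_true = 1.0 / (1.0 + np.exp(-(np.column_stack([np.ones(n), X]) @ true_w)))
y = (rng.uniform(size=n) < p_true).astype(float)
w_hat = fit_logistic(X, y)
```

The recovered coefficients carry the expected signs: a larger right-hand reward increases, and a larger right-hand effort decreases, the probability of a right-hand choice.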
We then calculated the likelihood of choosing an option as a function of its effort (reward) level, separately for the two sets of trials. Effort (reward) values were distributed across 10 bins with equal spacing; this binning was independent of the effort (reward) level of the alternative option. For statistical comparisons, we fitted a slope for each participant to the mean of all bins. t tests were performed on the resulting four slopes testing for the influence of (1) effort in trials with small reward difference, (2) effort in trials with large reward difference, (3) reward in trials with small effort difference, and (4) reward in trials with large effort difference. We report uncorrected p values, but all conclusions hold when correcting for six comparisons (1–4 against zero; 1 vs 2; 3 vs 4). We also tested for effects of fatigue: the above logistic regression suggested that choices were not affected by whether or not the previous trial required the production of effort, as shown previously in this task ( Klein-Flügge et al., 2015 ). More detailed analyses examined the percentage of trials in which the higher effort option was chosen (running average across 20 trials), and participants' performance in reaching and maintaining the required force. The latter was measured as the time point when 10 consecutive samples were above force criterion (shorter times indicating that the target force was reached sooner), and as the percentage of time out of 12 s that participants were at criterion, respectively. For all measures, we compared the first and last third of trials. Here we report the comparison between the first and last third across the entire experiment. However, separate analyses, using the first and last third of just the first or the second block, revealed identical results.
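The median-split and binning analysis described above can be sketched as follows, using a synthetic agent constructed so that effort discourages choice only when the reward difference is small (all data and parameters are invented for illustration):

```python
import numpy as np

def choice_rate_by_bin(levels, chosen, n_bins=10):
    """P(option chosen) as a function of its effort (or reward) level,
    using equally spaced bins."""
    edges = np.linspace(levels.min(), levels.max(), n_bins + 1)
    idx = np.clip(np.digitize(levels, edges) - 1, 0, n_bins - 1)
    return np.array([chosen[idx == b].mean() for b in range(n_bins)])

def slope_over_bins(rates):
    """Slope fitted to the bin means (done per participant in the paper)."""
    return np.polyfit(np.arange(len(rates)), rates, 1)[0]

# synthetic agent: effort only matters when the reward difference is small
rng = np.random.default_rng(2)
n = 5000
effort = rng.uniform(0.2, 0.8, n)
reward_diff = rng.uniform(0.0, 0.6, n)
small = reward_diff < np.median(reward_diff)          # median split
p_choose = 1.0 / (1.0 + np.exp(4.0 * effort * small))
chosen = (rng.uniform(size=n) < p_choose).astype(float)

slope_small = slope_over_bins(choice_rate_by_bin(effort[small], chosen[small]))
slope_large = slope_over_bins(choice_rate_by_bin(effort[~small], chosen[~small]))
```

For this agent, the slope of choice rate over effort bins is clearly negative on small-reward-difference trials and near zero on large-reward-difference trials, the qualitative pattern the analysis is designed to detect.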
There were no effects of fatigue: in all cases, participants either improved or stayed unchanged (percentage higher effort chosen: first third, 60.56 ± 1.94%; last third, 60.93 ± 2.79%, p = 0.69, t (20) = −0.40; reaching the force threshold: first third, 0.83 ± 0.04 s; last third, 0.76 ± 0.03 s; p = 0.01, t (20) = 2.82; maintaining the force above threshold: first third, 92.49 ± 0.47%; last third, 93.51 ± 0.28%; p = 0.01, t (20) = −2.84). To derive participants' subjective values for the offers presented on each trial, we developed an effort discounting model ( Klein-Flügge et al., 2015 ). This model has been shown to provide better fits than the hyperbolic model previously suggested for effort discounting ( Prévost et al., 2010 ) both here and in our previously published work ( Klein-Flügge et al., 2015 ). Crucially, its shape is initially concave, unlike a hyperbolic function, allowing for smaller devaluations of value for effort increases at weak force levels, and steeper devaluations at higher force levels, which is intuitive for effort discounting and biologically plausible. Our model discounts reward magnitude by a sigmoidal function of effort, where V is subjective value, C is the effort cost, M is the reward magnitude, and k and p are free parameters. C and M are scaled between 0 and 1, corresponding to 0% MVC and 100% MVC, and 0 points and 50 points, respectively. A simple logistic regression on the difference in subjective values between choice options was then used to fit participants' choices; in other words, the softmax rule P(choose option 1) = 1/(1 + exp(−β V (V1 − V2))) was used to transform the subjective values V1 and V2 of the two options offered on each trial into the probability of choosing option 1. The free parameters (slope k , turning point p , softmax precision parameter β V ) were fitted using the Variational Laplace algorithm ( Penny et al., 2003 ; Friston et al., 2007 ).
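A minimal sketch of such a sigmoidal effort-discounting function, with slope k and turning point p, plus the softmax choice rule, is given below. The normalization used here (V = M at zero effort, V = 0 at maximal effort) is an assumption of this sketch rather than the paper's exact parameterization:

```python
import numpy as np

def subjective_value(M, C, k, p):
    """Sigmoidal effort discounting. M: reward magnitude in [0, 1];
    C: effort cost in [0, 1]; k: slope; p: turning point.
    Normalized so that V = M at C = 0 and V = 0 at C = 1
    (an assumption of this sketch)."""
    sig = lambda c: 1.0 / (1.0 + np.exp(-k * (c - p)))
    discount = (sig(C) - sig(0.0)) / (sig(1.0) - sig(0.0))
    return M * (1.0 - discount)

def p_choose_1(V1, V2, beta):
    """Softmax (logistic) rule: probability of choosing option 1
    given the two subjective values and the precision parameter beta."""
    return 1.0 / (1.0 + np.exp(-beta * (V1 - V2)))
```

With p near the middle of the effort range and a sufficiently large k, this function is initially concave: early increments of effort cost little value and later increments cost much more, matching the discounting shape described in the text. In the study itself the free parameters were fitted with Variational Laplace rather than the plain maximum-likelihood approach this sketch would suggest.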
Variational Laplace is a Bayesian estimation method, which incorporates Gaussian priors over model parameters and uses a Gaussian approximation to the posterior density. The parameters of the posterior are iteratively updated using an adaptive step size, gradient ascent approach. Importantly, the algorithm also provides the free energy F , which is an approximation to the model evidence. The model evidence is the probability of obtaining the observed choice data, given the model. To maximize our chances of finding global rather than local maxima with this gradient ascent algorithm, parameter estimation was repeated over a grid of initialization values, with eight initializations per parameter. The optimal set of parameters (i.e., that obtained from the initialization that resulted in the maximal free energy) was used for modeling subjective values in the fMRI data. For our BOLD analyses, the most relevant parameter was β V . It reflects the weight (i.e., strength) with which participants' choices are driven by subjective value, rather than noise; it is also often referred to as precision or inverse softmax temperature. Fitting ACT and ABS, or high and low gain trials separately did not lead to any significant differences between conditions (paired t tests on parameter estimates between conditions all p > 0.3) and did not improve the model evidence (paired t test on the model evidence; fitting conditions separately or not: p = 0.82; fitting gain separately or not: p = 0.63). Trials were therefore pooled for model fitting. Once fitted, the performance of our new model was compared with that of the hyperbolic model and two parameter-free models (difference: reward − effort; quotient: reward/effort) as described by Klein-Flügge et al. (2015 ) using a formal model comparison. fMRI data acquisition and preprocessing.
The fMRI methods followed standard procedures (e.g., Klein-Flügge et al., 2013 ): T2*-weighted echo-planar images (EPIs) with BOLD contrast were acquired using a 12 channel head coil on a 3 tesla Trio MRI scanner (Siemens). A special sequence was used to minimize signal dropout in the OFC region ( Weiskopf et al., 2006 ) and included a TE of 30 ms, a tilt of 30° relative to the rostrocaudal axis and a local z -shim with a moment of −0.4 mT/m ms applied to the OFC region. To achieve whole-brain coverage, we used 45 transverse slices of 2 mm thickness, with an interslice gap of 1 mm and in-plane resolution of 3 × 3 mm, and collected slices in an ascending order. This led to a TR of 3.15 s. In each session, a maximum of 630 volumes were collected (∼33 min) and the first five volumes of each block were discarded to allow for T1 equilibration effects. A single T1-weighted structural image with 1 mm³ voxel resolution was acquired and coregistered with the EPI images to permit anatomical localization. A fieldmap with dual echo-time images (TE1 = 10 ms, TE2 = 14.76 ms, whole brain coverage, voxel size 3 × 3 × 3 mm) was obtained for each subject to allow for corrections in geometric distortions induced in the EPIs at high field strength ( Andersson et al., 2001 ). During the EPI acquisition, we also obtained several physiological measures. The cardiac pulse was recorded using an MRI-compatible pulse oximeter (model 8600 F0, Nonin Medical), and thoracic movement was monitored using a custom-made pneumatic belt positioned around the abdomen. The pneumatic pressure changes were converted into an analog voltage using a pressure transducer (Honeywell International) before digitization, as reported previously ( Hutton et al., 2011 ). Preprocessing and statistical analyses were performed using SPM8 (Wellcome Trust Centre for Neuroimaging, London).
Image preprocessing consisted of realignment of images to the first volume, distortion correction using fieldmaps, slice time correction, conservative independent component analysis to identify and remove obvious artifacts (using MELODIC in FMRIB's Software Library), coregistration with the structural scan, normalization to a standard MNI template, and smoothing using an 8 mm FWHM Gaussian kernel. Data analysis: GLM. The first GLM (GLM1) included 12 main event regressors. The offer phase was described using onsets for (1) ACT trials preparing a left response, (2) ACT trials preparing a right response, and (3) ABS trials. All three events were modeled using durations of 2 s and were each associated with four parametric modulators: the reward magnitude and effort of the chosen and unchosen option. Crucially, these four parametric modulators competed to explain common variance during the estimation, rather than being serially orthogonalized (in other words, we implicitly tested for effects that were unique to each parametric explanatory variable). The response phase was described using four regressors for “no force” trials (1 s duration) and four regressors for effort production trials (12 s duration): (4–7) no force ACT left , ACT right , ABS left , ABS right ; (8–11) effort production left (low gain), left (high gain), right (low gain), right (high gain). Finally, the outcome was modeled as a single regressor because the proportion of trials in which efforts were not produced successfully was negligible (median: 0; mean: 0.43 ± 0.22 trials; only 4 of 21 participants had any unsuccessful trials). In addition to event regressors, a total of 23 nuisance regressors were included to control for motion and physiological effects of no interest. First, to account for motion-related artifacts that had not been eliminated in rigid-body motion correction, the six motion regressors obtained during realignment were included.
Second, to remove variance accounted for by cardiac and respiratory responses, a physiological noise model was constructed using an in-house MATLAB toolbox (The MathWorks) ( Hutton et al., 2011 ). Models for cardiac and respiratory phase and their aliased harmonics were based on RETROICOR ( Glover et al., 2000 ). The model for changes in respiratory volume was based on Birn et al. (2006). This resulted in 17 physiological regressors in total: 10 for cardiac phase, 6 for respiratory phase, and 1 for respiratory volume. The parameters of the hemodynamic response function were modified to obtain a double-gamma hemodynamic response function, with the standard settings in FMRIB's Software Library: delay to response 6 s, delay to undershoot 16 s, dispersion of response 2.5 s, dispersion of undershoot 4 s, ratio of response to undershoot 6, length of kernel 32 s. The second GLM (GLM2) was identical to the first, except that the four parametric regressors (reward magnitude and effort of the chosen and unchosen option) were replaced by the subjective model-derived values of the chosen and unchosen option. This allowed us to identify regions encoding the difference in subjective value between the offers. Three further GLMs were fitted to the data to test whether the values derived from the sigmoidal model provide the best explanation of the measured BOLD signals. These GLMs were identical to GLM2, except that the parametric regressors for the values of the chosen and unchosen option derived from the sigmoidal model were replaced by (1) the values derived from a hyperbolic model (GLM3), (2) the values derived from a parameter-free difference “reward − effort” (GLM4), or (3) the values derived from a parameter-free quotient “reward/effort” (GLM5). Identifying signatures of choice computation. Our first aim was to identify brain regions with BOLD signatures of choice computation (see Fig. 2 A ).
Thus, we first identified brain regions that fulfilled the following two criteria (GLM1): (1) the BOLD signal correlated negatively with the difference in effort between chosen and unchosen options; and (2) the BOLD signal correlated positively with the difference in reward magnitude between chosen and unchosen options. Collectively, these two signals form the basis of a value difference signal because effort contributes negatively and reward magnitude contributes positively to overall value. Previous work has demonstrated, using predictions derived from a biophysical cortical attractor network, that at the level of large neural populations, as measured using human neuroimaging techniques, such as fMRI or MEG, the characteristic signature of a choice comparison process is a value difference signal ( Hunt et al., 2012 ). The responses predicted for harder and easier choices differ because the speed of the network computations varies as a function of choice difficulty (e.g., faster for high value difference). Thus, an area at the formal conjunction of the two contrasts described by criteria (1) and (2) would carry the relevant signatures for computing a subjective value difference signal, a cardinal requirement for guiding choice. Importantly, while we reasoned that the choice computations in our specific task should follow similar principles as in Hunt et al. (2012 ), we expected this computation to occur in different regions because it would be based on the integration of a different type of decision cost. In an additional analysis (see Fig. 5 ), for completeness, we also identified brain regions significant in the inverse contrast (i.e., a conjunction of positive effort and negative reward magnitude difference) ( Wunderlich et al., 2009 ; Hare et al., 2011 ). Regions of interest (ROIs) and extraction of time courses.
For whole-brain analyses, we used a FWE cluster-corrected threshold of p < 0.05 (using a cluster-defining threshold of p < 0.01 and a cluster threshold of 10 voxels). For a priori ROI analyses, we used a small-volume corrected FWE cluster-level threshold of p < 0.05 in spheres of 5 mm around previous coordinates, namely, in left and right putamen ([±26, −8, −2]) ( Croxson et al., 2009 ), SMA ([4, −6, 58]) ( Croxson et al., 2009 ), and vmPFC ([−6, 48, −8] ( Boorman et al., 2009 ). BOLD time series were extracted from the preprocessed data of the identified regions by averaging the time series of all voxels that were significant at p < 0.001 (uncorrected). Time series were up-sampled with a resolution of 315 ms (1/10 × TR) and split into trials for visual illustration of the described effects (e.g., see Fig. 2 B ). At the suggestion of one reviewer, the two main analyses (conjunction of reward and inverse effort difference described above, and value difference contrast described below) were repeated in FSL using Flame1 because of differences between SPM and FSL in controlling for false positives when using cluster-level corrections ( Eklund et al., 2015 ). For this control analysis, we imported the preprocessed (unsmoothed) images to FSL. We then used FSL's default smoothing kernel of 5 mm and a cluster-forming threshold of z > 2.3 (corresponding to p < 0.01; default in FSL). The obtained results are overlaid in Figure 2 A , D . Encoding of subjective value. We next asked whether BOLD signal changes in the regions identified using the abovementioned conjunction could indeed be described by the subjective values derived from our custom-made behavioral model. We thus performed a whole-brain contrast, identifying regions encoding the difference in subjective value between the chosen and unchosen option (GLM2; see Fig. 2 D ). 
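The up-sampling of the ROI time courses described above (to 315 ms resolution, one-tenth of the TR, followed by splitting into trials) can be sketched as follows; the use of linear interpolation is an assumed implementation detail of this sketch:

```python
import numpy as np

TR = 3.15  # seconds

def upsample(ts, tr=TR, factor=10):
    """Linearly up-sample a BOLD time series sampled once per TR
    to a resolution of tr/factor (here 315 ms)."""
    t_orig = np.arange(len(ts)) * tr
    t_up = np.linspace(0.0, t_orig[-1], (len(ts) - 1) * factor + 1)
    return t_up, np.interp(t_up, t_orig, ts)

def split_into_trials(ts_up, dt, onsets, window):
    """Cut the up-sampled series into fixed-length peri-onset epochs."""
    n = int(round(window / dt))
    return np.array([ts_up[int(round(o / dt)): int(round(o / dt)) + n]
                     for o in onsets])

# example: a 20-volume series, epochs locked to two onsets
series = np.arange(20, dtype=float)
t_up, series_up = upsample(series)
epochs = split_into_trials(series_up, dt=TR / 10, onsets=[0.0, 3.15], window=6.3)
```

Averaging such epochs across trials, separately for conditions of interest, yields trial-locked time courses like those plotted in the figures.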
To test whether the BOLD signal was better explained by subjective value as modeled using the sigmoidal function or three alternative models (hyperbolic; “difference”: reward − effort; “quotient”: reward/effort; GLM3-GLM5; see Fig. 3 B ), we calculated the difference between the value difference maps obtained on the first level for each participant (sigmoid vs hyperbolic; sigmoid vs difference; sigmoid vs quotient; see Fig. 3 C ). A standard second-level t test was performed on the three resulting difference images and statistical significance evaluated as usual. Relating neural and behavioral effects of value difference. If it was indeed the case that the regions identified to encode value difference are involved in choice computation and as a result, inform behavior, the BOLD value signal should systematically relate to behavioral measures of choice performance ( Jocham et al., 2012 ; Kolling et al., 2012 ). To test this, we used the behavioral measure of the effect of value difference, β V, as derived from the logistic regression analysis above ( Eq. 2 ). Importantly, before fitting β V , model-derived subjective values were scaled between [0, 1] for all participants so that any difference in the fitted regression coefficient β V indicated how strongly value difference influenced behavioral choices in a given participant. β V reflects how consistently participants choose the subjectively more valuable option. In other words, this parameter captures how strongly value rather than noise determines choice behavior. To examine whether the size of the neural value difference signal carried behavioral relevance, the behavioral weights β V were then used as a covariate for the value difference contrast in a second-level group analysis. At the whole-brain level, we thus identified regions where the encoding of value difference was significantly modulated by how strongly participants' choices were driven by subjective value (see Fig. 2 F ). 
This analysis was restricted to the regions that encoded value difference at the first level. For illustration of the effect, the neural signature of value difference (regression coefficients for chosen vs unchosen value at the peak time of 6 s) was plotted against β V (see Fig. 2 F ). Reward maximization versus effort minimization. In our task, reward maximization is in conflict with effort minimization in almost all trials because the option that has a higher reward value is also associated with a higher effort level. To capture the separate behavioral influences of reward and effort for each participant, another logistic regression analysis was conducted, but now both the difference in offer magnitudes and in efforts were entered into the design matrix, rather than just their combination into value as in Equation 2, giving P(choice = 1) = 1/(1 + exp(−(β M (M1 − M2) + β E (E1 − E2)))). Here, β M is the weight or precision with which reward magnitude difference (M1 − M2) influences choice, and β E is the weight (precision) with which effort difference (E1 − E2) influences choice. Next, to identify which brain regions might bias the choice computation either toward reward or away from physical effort, we performed two independent tests. First, we used the behaviorally defined weights for effort, −β E , as a covariate on the second level, to identify regions where the encoding of effort difference scales with how “effort averse” participants were. In such regions, a larger difference between chosen and unchosen effort signals would indicate that participants avoid efforts more strongly (see Fig. 4 B ). Based on prior work, we had a priori hypotheses about effort preferences being guided by SMA and putamen (e.g., Croxson et al., 2009 ; Kurniawan et al., 2010 , 2013 ; Burke et al., 2013 ). Therefore, we used a small-volume correction ( p < 0.05) around previously established coordinates (putamen [±26, −8, −2]; SMA [4, −6, 58]) ( Croxson et al., 2009 ).
Second, in an analogous fashion, we used the behavioral weights for reward magnitude, β M , as a covariate on the second level to identify regions where the encoding of reward magnitude difference scales with how reward-seeking participants are. In brain regions thus identified, a larger BOLD signal difference between chosen and unchosen reward signals would imply that participants place a stronger weight on reward maximization in their choices (see Fig. 4 A ). Based on prior work, we expected reward magnitude comparisons to occur in vmPFC (e.g., Kable and Glimcher, 2007 ; Boorman et al., 2009 ; Philiastides et al., 2010 ). Therefore, we used a small-volume correction ( p < 0.05) around previously established coordinates [−6, 48, −8] ( Boorman et al., 2009 ). We further characterized the relationship between participants' effort sensitivity and BOLD signal changes by asking whether the neural encoding of effort difference relates to the individual distortions captured in the parameters k and p of the effort discounting function. For each trial, we compared the true effort difference between the chosen and unchosen option with the modeled subjective effort difference between the chosen and unchosen option. We took the sum of the absolute error from the best linear fit between these two variables as an index of how well our initial GLM captured subjective distortions in the evaluation of effort. We used this measure as an additional regressor for our second-level analysis, in addition to β E (these two regressors are uncorrelated: r = −0.27, p = 0.24). This approach had the advantage that it combined subjective effort distortions driven by both p and k into a single parameter relevant for the effort comparison (correlation of the summed errors with k : r = 0.9646, p < 0.001; with p: r = 0.60, p = 0.0043). Results Human participants performed choices between options with varying rewards and physical efforts (force grips; Fig. 1 A ). 
Our main aim was to identify areas carrying neural signatures of value comparison, which are sometimes absent on choices when all decision variables favor the same choice ( Hunt et al., 2012 ). Therefore, for the majority of decisions, larger rewards were paired with larger efforts so that reward maximization competed with energy minimization, and the reward and effort of each option had to be combined into an integrated subjective value to derive a choice. We first tested whether both the size of reward and the associated effort of each choice option had an impact on participants' choice behavior. A logistic regression showed that participants' choices were indeed guided by the reward magnitude and effort of both options (left reward: t (20) = −9.71, Cohen's d = −4.34, p = 4.28e-08; right reward: t (20) = 8.89, Cohen's d = 3.98, p = 1.44e-07; left effort: t (20) = 7.56, Cohen's d = 3.38, p = 2.79e-06; right effort: t (20) = −8.37, Cohen's d = −3.74, p = 2.79e-06; Fig. 1 B ). As expected, larger rewards and smaller effort costs attracted choices. Overall, participants chose the higher effort option on 48 ± 2% of trials. Figure 1. Task and behavior. A , Human participants chose between two options associated with varying reward magnitude (numbers) and physical effort (bar height translates into force, Offer). Once the fixation cross turned red (Response), participants were allowed to indicate their choice. Thus, the time of choice computation was separable in time from the motor response. Following a response, the effort had to be realized on an unpredictable 30% of trials (top). On these trials, participants had to produce a 12 s power grip at a strength proportional to the bar height of the chosen option. Force levels were adjusted to individuals' maximum force at the start of the experiment.
Participants received feedback about successful performance of the grip (99% accuracy), and the rewards collected on successful trials were added to the total winnings. On 70% of trials (bottom), no effort was required and the next trial commenced (intertrial interval [ITI]). B , Participants' choices were driven by both options' reward magnitude and effort level showing that all dimensions of the outcome were taken into account for computing a choice. Benefits and costs had opposite effects: larger efforts discouraged, and larger reward magnitudes encouraged, the choice of an option (standard errors: ± SEM). C , Correlations between left (L), right (R), chosen (C), and unchosen (U) effort levels (e) and reward magnitudes (r) show that the regressors of interest were sufficiently decorrelated in our design. D , Effort has a strong effect on choice in trials with small reward differences, but no effect when the reward difference is large (green panels; median split on reward difference; effort binned for visualization). Similarly, reward has a stronger effect in trials with small effort differences compared with trials with large effort differences (blue panels). This shows that participants indeed trade off effort against reward, and confirms that reward has a stronger and opposite effect compared with effort (red slope), as shown in B . Black lines indicate individual participants and suggest that reward and effort were treated as continuous variables. Next, we examined whether effort was weighed up against reward; if this was the case, the influence of effort (reward) on the participant's choice would become stronger as the reward (effort) difference between the options becomes smaller. Indeed, effort had a bigger impact on choice in trials with a small compared with a large reward difference (median split; Fig. 
1 D , green panels; difference in slopes: t (20) = −18.06, p = 7.51e-14; small reward difference only: slope = −1.23 ± 0.11; t (20) = −11.51, p = 2.82e-10; large reward difference only: slope = 0.26 ± 0.12, t (20) = −1.95, p = 0.066). The same was true for reward: its impact on choice behavior was greater in trials with a small compared with a large effort difference ( Fig. 1 D , blue panels; difference in slopes: t (20) = 11.95, p = 1.46e-10; small effort difference only: slope = 1.65 ± 0.05, t (20) = 36.03, p = 1.14e-19; large effort difference only: slope = 0.38 ± 0.12, t (20) = 3.08, p = 0.0059). This analysis also confirmed that effort and reward were treated as continuous variables. Given that behavior was guided by the costs as well as the benefits associated with the two choice options, we next asked whether any brain region encoded both effort and reward in a reference frame consistent with choice. Our main aim was to identify neural signatures of the choice computation: any brain region comparing the values of the two choice options should be sensitive to information about both costs and benefits. Recent work using a biophysically realistic attractor network model ( Wang, 2002 ) suggests that the mass activity of a region computing a choice should reflect the difference of the values of both choice options ( Hunt et al., 2012 ). In our task, a region comparing the options should hence encode (1) the inverse difference between chosen and unchosen efforts and (2) the (positive) difference between chosen and unchosen rewards. We therefore computed the formal conjunction of these two contrasts, which is a conservative test, asking whether any region is significant in both comparisons. This test focused on the decision phase, which was separated in time from the motor response ( Fig. 1 A ). 
We identified a cluster of activation in the SMA and in the caudal portion of dorsal ACC (dACC), on the border of the anterior and posterior rostral cingulate zones (RCZa, RCZp) and area 24 ( Neubert et al., 2015 ) ( Fig. 2 A ; p < 0.05 cluster-level FWE-corrected; peak coordinate: [−6, 11, 34], t (1,40) = 4.02; SMA peak coordinate: [−9, −7, 58], t (1,40) = 5.29). No other regions reached FWE cluster-corrected significance ( p < 0.05). Notably, we did not identify any activations in the vmPFC, a region commonly identified in reward-related value computations, even at lenient statistical thresholds ( p < 0.01, uncorrected). Replication of this conjunction analysis in FSL, performed at the suggestion of one reviewer, obtained comparable results, with only dACC and SMA reaching cluster-level corrected significance ( Fig. 2 A , green overlays). The two difference signals for effort and reward are illustrated for the BOLD time series extracted from the dACC cluster in Figure 2 B . Figure 2. Neural signatures of effort choice comparison in SMA and dACC. A , As a marker of choice computation, we identified regions encoding (1) the difference between the chosen and unchosen reward magnitudes and (2) the inverse difference between the chosen and unchosen effort levels. The conjunction of both contrasts in SPM (shown at p < 0.001 uncorrected) revealed the SMA and a region in the caudal portion of dACC (both FWE-corrected p < 0.05). Cluster-level corrected results obtained from FSL's Flame 1 ( z > 2.3, p < 0.05) are overlaid in green to confirm this finding. B , For illustration purposes, the two opposing difference signals are shown for the dACC cluster on the right (standard errors: ± SEM). C , A custom-built sigmoidal model was fitted to participants' choices to obtain individual effort discounting curves (gray; red represents group mean).
In the model, the subjective value of an option's reward ( y -axis, represented in %) is discounted with increasing effort levels ( x -axis). This allowed inferring the subjective values ascribed to choice options and modeling of subjective value in the BOLD data. D , The difference in subjective value between the chosen and unchosen option, as derived from the behavioral effort discounting model in C , was encoded in a similar network of regions as the combined difference in reward magnitude and effort shown in A , including caudal dACC, SMA, bilateral putamen, and insula (shown at p < 0.001 uncorrected as obtained with SPM; cluster-level corrected FSL results overlaid in green for z > 2.7, p < 0.05). E , The subjective value difference signal extracted from the dACC is shown for illustration (standard errors: ± SEM). F , Left, Regions encoding subjective value as in D but where the strength of this signal additionally correlated with the extent to which value difference guided behavior (inverse softmax temperature β V ; shown at p < 0.01 uncorrected; only the dACC survives cluster-level FWE-correction at p < 0.05). Right, Illustration of the correlation in dACC for visual display purposes only. The stronger the BOLD difference between the chosen and unchosen option in this region, the more precisely participants' choices are guided by value (β V ). This suggests that the dACC's value signal computed at the time of choice is relevant for guiding choices. G , Regions where the encoding of effort difference correlates, across subjects, with a marker for the individual level of effort distortion as captured by the parameters k and p of the modeled discount function. 
The better an individual's subjectively experienced effort was captured in the GLM (i.e., the less distorted their discount function), the stronger the inverse effort difference signal in caudal dACC and SMA (light blue represents p < 0.001 uncorrected; dark blue represents p < 0.005 uncorrected; dACC/SMA survive cluster-level FWE-correction at p < 0.05). This suggests that dACC and SMA encode effort difference in the way it subjectively influences the choice. These results raise the question of whether and how effort and reward are combined into an integrated value for each option, a prerequisite for testing whether any brain region encodes the comparison between subjective option values. Although established models exist to examine how participants compute compound values for uncertain/risky rewards (prospect theory) ( Kahneman and Tversky, 1979 ; Tversky and Kahneman, 1992 ) and delayed rewards (hyperbolic) ( Mazur, 1987 ; Laibson, 1997 ; Frederick et al., 2002 ), it remains unclear how efforts and rewards are combined into a subjective measure of value. We performed several behavioral experiments to develop a behavioral model that can formally describe the range of effort discounting behaviors observed in healthy populations ( Klein-Flügge et al., 2015 ). One key feature of this model is that it can accommodate cases when increases in effort at lower effort levels have a comparatively small effect on value, compared with increases in effort at higher effort levels (i.e., concave discounting). When we fitted this model to the choices recorded during the scanning session, participants' behavior was indeed best captured by an initially concave discounting shape (initially concave in 16 of 21 participants; Fig. 2 C ), consistent with previous work ( Klein-Flügge et al., 2015 ) and the intuition that effort increases are less noticeable at lower levels of effort compared with higher levels of effort. 
Using the individual model fits, we then directly tested for neural signatures consistent with a value comparison between the subjective values of the two choice options. This is a slightly less conservative test than the formal conjunction of effort and reward magnitude difference described above, but we note that this test revealed a highly consistent pattern of results. We found strong evidence for a network consisting of the SMA (peak: [−9, −7, 58], t (1,20) = 8.64), caudal portion of dACC (peak: [−3, 11, 34], t (1,20) = 7.1), and bilateral putamen (several peaks: left [−33, −13, 4], t (1,20) = 4.96 and [−33, −10, −2], t (1,20) = 5.28; right [33, −1, −2], t (1,20) = 4.96) to encode the (positive) difference in subjective value between the chosen and unchosen options ( Fig. 2 D ; all cluster-level FWE-corrected; p < 0.05). Again, comparable results were obtained using FSL ( Fig. 2 D , green overlays). This network resembled regions previously described for the evaluation of physical effort but was clearly distinct from the neural system associated with decisions about goods involving the vmPFC ( Kable and Glimcher, 2007 ; Boorman et al., 2009 ; FitzGerald et al., 2009 ; Philiastides et al., 2010 ; Hunt et al., 2012 ; Kolling et al., 2012 ; Clithero and Rangel, 2014 ; Strait et al., 2014 ). To validate our choice of behavioral discounting model, we performed a formal model comparison and found that the sigmoidal model provided a better explanation of choice behavior than (convex) hyperbolic discounting, previously proposed for effort discounting ( Prévost et al., 2010 ), and two parameter-free descriptions of value “reward minus effort” and “reward divided by effort” (model exceedance probability: xp = 1; mean of posterior distribution: mp_sigm = 0.75; mp_hyp = 0.05; mp_diff = 0.16; mp_div = 0.04; Fig. 3 A ). On average, the sigmoidal model correctly predicted 88 ± 1% of choices. 
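The alternative value models entered into this comparison are simple to write down. The hyperbolic parameterization M/(1 + kE), the discount rate, and the effort scaling in the subtraction model below are illustrative assumptions; only the model labels (hyperbolic, reward minus effort, reward divided by effort) come from the text:

```python
def value_hyperbolic(reward, effort, k=2.0):
    """Convex hyperbolic discounting (cf. Prevost et al., 2010);
    k is an illustrative discount rate."""
    return reward / (1.0 + k * effort)

def value_difference(reward, effort, scale=100.0):
    """Parameter-free 'reward minus effort' (effort rescaled to
    reward units; the scale is an illustrative assumption)."""
    return reward - scale * effort

def value_quotient(reward, effort):
    """Parameter-free 'reward divided by effort'."""
    return reward / effort if effort > 0 else reward

# Hyperbolic discounting is steepest at LOW efforts (convex) -- the
# opposite of the initially concave sigmoidal shape that won the
# model comparison:
drop_low = value_hyperbolic(100, 0.0) - value_hyperbolic(100, 0.1)
drop_high = value_hyperbolic(100, 0.6) - value_hyperbolic(100, 0.7)
```

The convex-versus-concave contrast is the key qualitative difference between the hyperbolic model and the sigmoidal model favored by the data.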
To examine whether our measure of value derived from the sigmoidal model also best predicted the BOLD signal, we recalculated the value difference contrasts in an analogous way, this time modeling value using a hyperbolic or one of the two parameter-free models. The resulting whole-brain maps similarly highlighted SMA and dACC (surviving cluster-level FWE-corrected, p < 0.05 for the hyperbolic and difference models, not significant for the quotient model; Fig. 3 B ). But importantly, direct statistical comparison showed that the neural signal in these regions was significantly better explained by the values derived from the sigmoidal model (cluster-level FWE-corrected, p < 0.05 for the difference and quotient models; sigmoidal vs hyperbolic: SMA peak [−3, −7, 61], t (1,19) = 3.95; sigmoidal vs difference: dACC peak [−6, 11, 34], t (1,19) = 3.28; SMA peak [−6, −7, 58], t (1,19) = 5.33; sigmoidal vs quotient: dACC peak [−6, 11, 34], t (1,19) = 4.77; SMA peak [−6, −7, 61], t (1,19) = 6.72; Fig. 3 C ). This suggests that the BOLD signal aligns with the subjective experience of effort-discounted value, which was best captured using the sigmoidal model. Figure 3. Model-derived value describes choice signals more accurately than model-free value. A , Bayesian model comparison for value modeled using the sigmoidal model, hyperbolic model, and two parameter-free descriptions of value: reward minus effort and reward divided by effort. The sigmoidal model captures choice behavior best (standard errors: ± SEM). B , Comparison of the BOLD response to value difference for the behavioral sigmoidal model (red; like Fig. 2 D ), the hyperbolic model (purple), and the parameter-free descriptions of value (blue: reward − effort; green: reward/effort; all shown at p < 0.001 uncorrected). The three alternative contrasts reveal a similar network, albeit less strongly. 
C , Crucially, the sigmoidal model provides a significantly better description of the BOLD signal in SMA, extending into caudal dACC, compared with all other models. Purple represents sigmoidal versus hyperbolic. Blue represents sigmoidal versus parameter-free subtraction. Green represents sigmoidal versus parameter-free division (shown at p < 0.001 uncorrected). A crucial question is whether the observed value difference signal bears any behavioral relevance for choice, rather than potentially being a mere byproduct of a choice computation elsewhere. In the former case, one would expect that the encoding of subjective value difference relates to the strength, or “weight,” with which subjective value difference influenced behavior across participants ( Jocham et al., 2012 ; Kolling et al., 2012 ; Khamassi et al., 2015 ). Such a behavioral weight was derived for each participant using a logistic regression on the normalized model-derived subjective values. The resulting parameter estimate is the same as the inverse softmax temperature or precision and reflects how consistently participants choose the subjectively more valuable option (see β V in Eq. 2 ). The only region that was significant in this second-level test and also encoded value difference at the first level was the dACC ( Fig. 2 F ; cluster-level FWE-corrected, p < 0.05; peak [−3, 11, 31], t (1,19) = 3.71). In other words, dACC encoded value difference on average across the group, and participants who exhibited a larger BOLD value difference signal in the dACC were also more consistent in choosing the subjectively better option (larger β V ); this relationship is illustrated in Figure 2 F . 
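The behavioral weight β V described here is the inverse temperature, or precision, of a softmax (logistic) choice rule applied to the subjective value difference. A minimal sketch, with illustrative subjective values and temperatures:

```python
import math

def p_choose_a(sv_a, sv_b, beta_v):
    """Softmax (logistic) probability of choosing option A.

    beta_v is the inverse temperature, or precision: the larger it is,
    the more consistently choices track the subjective value difference.
    """
    return 1.0 / (1.0 + math.exp(-beta_v * (sv_a - sv_b)))

# Same small value advantage for A, two hypothetical participants:
noisy = p_choose_a(0.6, 0.4, beta_v=2.0)      # weakly value-guided
precise = p_choose_a(0.6, 0.4, beta_v=20.0)   # strongly value-guided
```

A participant with a larger β V chooses the subjectively better option far more consistently for the same value difference, which is exactly the individual difference that correlated with the dACC value signal.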
To further probe whether the identified network of regions evaluates the choice options in a subjective manner, we examined the relationship between the subjective "distortion" of effort described by the parameters k and p of the individual effort discount function, and the BOLD signal related to the effort difference across participants. We calculated a measure to describe how much the true effort difference deviated from the subjectively experienced effort difference overall across trials. This "distortion" regressor correlated with k ( r = 0.9646, p < 0.001) and p ( r = 0.60, p = 0.0043), but not β E ( r = −0.27, p = 0.24), and was used as a second-level covariate for the effort difference contrast. GLM1 contained the efforts shown on the screen and thus should have captured the subjectively experienced effort better in participants who showed smaller effort distortions (i.e., with discounting closer to linear). Thus, in regions related to the comparison of subjective effort or effort-integrated value, we expected participants with smaller effort distortions to show a stronger negative effort difference signal. Indeed, we found such a positive second-level correlation with the BOLD signal in dACC and SMA, supporting the notion that effort difference is encoded in these regions in the way it subjectively influences the choice ( Fig. 2 G ; cluster-level FWE-corrected, p < 0.05; dACC peak [−3, 14, 31], t (1,19) = 5.01, global maxima; SMA peak [−6, −7, 61], t (1,19) = 4.08). 
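A measure of this kind can be approximated as the mean absolute gap, across trials, between the objective effort difference and its subjectively transformed counterpart. The sigmoid transform, parameter values, and synthetic trials below are illustrative assumptions, not the paper's exact computation:

```python
import math

def subjective_effort(effort, k, p):
    """Sigmoid-transformed ('distorted') effort, rescaled to [0, 1];
    k and p play the roles of the discount-function parameters."""
    s = lambda e: 1.0 / (1.0 + math.exp(-k * (e - p)))
    return (s(effort) - s(0.0)) / (s(1.0) - s(0.0))

def effort_distortion(effort_pairs, k, p):
    """Mean absolute gap between objective and subjective effort
    differences across trials; near 0 means discounting is near linear."""
    return sum(
        abs((ea - eb) - (subjective_effort(ea, k, p) - subjective_effort(eb, k, p)))
        for ea, eb in effort_pairs
    ) / len(effort_pairs)

# Synthetic trials (efforts of options A and B as fractions of max force):
trials = [(0.2, 0.5), (0.1, 0.8), (0.6, 0.3), (0.9, 0.4)]
near_linear = effort_distortion(trials, k=0.1, p=0.5)     # ~undistorted
strongly_bent = effort_distortion(trials, k=12.0, p=0.5)  # heavily distorted
```

A shallow sigmoid (small k) is nearly linear after rescaling and yields a distortion close to zero, whereas a steep sigmoid produces large gaps between objective and subjective effort differences.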
ACC has access to information from motor structures ( Selemon and Goldman-Rakic, 1985 ; Dum and Strick, 1991 ; Morecraft and Van Hoesen, 1992 , 1998 ; Kunishio and Haber, 1994 ; Beckmann et al., 2009 ) previously linked to evaluating effort ( Croxson et al., 2009 ; Kurniawan et al., 2010 , 2013 ; Burke et al., 2013 ), and prefrontal regions known to be involved in reward processing, such as the vmPFC and OFC ( Padoa-Schioppa and Assad, 2006 ; Kennerley and Wallis, 2009 ; Levy and Glimcher, 2011 ; Rudebeck and Murray, 2011 ; Klein-Flügge et al., 2013 ; Chau et al., 2015 ; Stalnaker et al., 2015 ). We thus reasoned that the ACC may be a key node for the type of effort-based choice assessed in the present task. To further test this hypothesis, we sought to identify regions that mediate between reward maximization versus effort minimization in our task. To this end, we first extracted two separate behavioral weights reflecting participants' tendency to seek reward and avoid effort. These behavioral parameters were derived from a logistic regression with two regressors explaining how much choices were guided by the difference in reward magnitude and the difference in effort level between options (β M and β E in Eq. 3 ). This is distinct from using just one regressor for the combined subjective value difference as done above (β V ). Across participants, we then first identified brain regions where the encoding of chosen versus unchosen reward magnitude correlated with the weight, β M , with which choices were influenced by the difference in reward between the chosen and unchosen option. Second, we performed the equivalent test for effort (i.e., we identified regions where the neural encoding of chosen vs unchosen effort correlated with the weight, −β E, with which behavior was guided by the difference in effort between the chosen and unchosen option). The two tests revealed two distinct networks of regions. 
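Extracting separate weights β M and β E amounts to a logistic regression of choices on two regressors, the between-option reward difference and effort difference. A sketch with a plain gradient-ascent fit on synthetic choices; the simulated agent, learning rate, and iteration count are illustrative assumptions:

```python
import math, random

def fit_choice_weights(dm, de, chose_a, lr=2.0, n_iter=800):
    """Logistic regression of choices on the reward-difference (dm) and
    effort-difference (de) regressors, fitted by plain gradient ascent.
    Returns (beta_m, beta_e): a positive beta_m means reward seeking,
    a negative beta_e means effort avoidance.
    """
    bm = be = 0.0
    n = len(dm)
    for _ in range(n_iter):
        gm = ge = 0.0
        for m, e, y in zip(dm, de, chose_a):
            p = 1.0 / (1.0 + math.exp(-(bm * m + be * e)))
            gm += (y - p) * m
            ge += (y - p) * e
        bm += lr * gm / n
        be += lr * ge / n
    return bm, be

# Synthetic choices from a reward-seeking, effort-averse agent:
random.seed(0)
n_trials = 500
dm = [random.uniform(-1, 1) for _ in range(n_trials)]  # reward(A) - reward(B)
de = [random.uniform(-1, 1) for _ in range(n_trials)]  # effort(A) - effort(B)
true_bm, true_be = 3.0, -2.0
chose_a = [
    1 if random.random() < 1.0 / (1.0 + math.exp(-(true_bm * m + true_be * e))) else 0
    for m, e in zip(dm, de)
]
beta_m, beta_e = fit_choice_weights(dm, de, chose_a)
```

The fit recovers a positive reward weight and a negative effort weight from the simulated agent, the two per-participant quantities that were then used as second-level covariates.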
First, the vmPFC encoded reward magnitude difference across subjects as a function of how much participants' choices were driven by the difference in reward between the options (SVC FWE-corrected cluster-level p = 0.037; peak [−6, 44, −8], t (1,19) = 2.87; Fig. 4 A ). Unlike in many other tasks ( Kable and Glimcher, 2007 ; Boorman et al., 2009 ; FitzGerald et al., 2009 ; Philiastides et al., 2010 ; Hunt et al., 2012 ; Kolling et al., 2012 ; Clithero and Rangel, 2014 ; Strait et al., 2014 ), the vmPFC BOLD signal did not correlate with chosen reward or reward difference on average in the group. However, reward difference signals were on average positive for participants whose choices were more strongly driven by reward magnitudes (median split; Fig. 4 A ). At the whole-brain level, the correlation of behavioral reward-weight, β M , and BOLD reward difference encoding did not reveal any activations using our FWE cluster-level corrected criterion of p < 0.05. Using a lenient exploratory threshold ( p = 0.01, uncorrected), we identified a small number of other regions including the posterior cingulate cortex bilaterally and visual cortex ( Fig. 4 A ), but crucially no clusters in motor, supplementary motor, or striatal regions. Figure 4. Distinct circuits bias choices toward reward maximization or effort minimization. A , Regions where the encoding of reward magnitude difference varied as a function of the behavioral weight participants placed on reward (β M ; top: shown at p < 0.01 uncorrected). This showed that the BOLD signal in vmPFC (SVC FWE-corrected, p < 0.05) reflected the difference between chosen and unchosen reward more strongly in participants who also placed a stronger weight on maximizing reward (top, bottom left). 
Although we could not identify an average reward difference coding in vmPFC across the group, the subset of participants who placed a stronger weight on reward (larger β M ; median split, ellipse) did encode the difference between the chosen and unchosen reward magnitudes (bottom right). This suggests that vmPFC might bias choices toward reward maximization (standard errors: ± SEM). B , A very distinct network of regions, including the SMA and putamen (both SVC FWE-corrected, p < 0.05), encoded effort difference as a function of participants' behavioral effort weight (β E ; shown at p < 0.01 uncorrected). This system was active more strongly in participants who tried to more actively avoid higher efforts and has often been associated with effort evaluation. It might counteract the vmPFC circuit shown in A to achieve effort minimization, which is in constant conflict with reward maximization in our task. Correlation plots (bottom) are only shown for visual illustration of the effects for a priori ROIs; no statistical analyses were performed on these data. By contrast, a network of motor regions, including SMA and putamen, encoded effort difference as a function of the individual behavioral effort weight −β E ( Fig. 4 B ; SVC FWE-corrected cluster-level SMA: p = 0.048, peak [3, −7, 58], t (1,19) = 2.59; left putamen: p = 0.035, peak [−27, −4, −5], t (1,19) = 3.39; right putamen no suprathreshold voxels). In other words, these regions encoded the difference in effort between the chosen and unchosen options more strongly in participants whose choices were negatively influenced by large effort differences (i.e., participants who were more sensitive to effort costs). Using a whole-brain FWE cluster-level-corrected threshold ( p < 0.05), no regions were detected in this contrast. At an exploratory threshold ( p = 0.01, uncorrected), this contrast also highlighted regions in the brainstem, primary motor cortex, thalamus, and dorsal striatum ( Fig. 
4 B ), and thus regions previously implicated in evaluating motor costs and in recruiting resources in anticipation of effort ( Croxson et al., 2009 ; Burke et al., 2013 ; Kurniawan et al., 2013 ), but clearly distinct from the vmPFC/posterior cingulate cortex network identified in the equivalent test for reward above. Together, our data thus show that two distinct networks centered on vmPFC versus SMA/putamen encode the reward versus effort difference as a function of how much these variables influence the final choice. Yet only the caudal portion of dACC encodes the difference in overall subjective value as a function of how much overall value influences choice. This region in dACC could therefore be a potential mediator between reward maximization and effort minimization, which appear to occur in separate neural circuits. Functionally distinct subregions of medial PFC For completeness, we also tested whether any areas encode an opposite value difference signal (i.e., the inverse of the conjunction analysis and of the subjective value difference contrast performed above), reflecting the evidence against the chosen option and thus one notion of decision difficulty. This did not reveal any regions at our conservative (cluster-level FWE-corrected) threshold in either test. At a more lenient exploratory threshold ( p = 0.01 uncorrected), a single common cluster in medial PFC (pre-SMA/area 9) was identified ( Fig. 5 ), in agreement with previous reports of negative value difference signals in this region ( Wunderlich et al., 2009 ; Hare et al., 2011 ). Importantly, the location of this activation was clearly distinct from the caudal dACC region found to encode a positive value difference ( Fig. 2 ). 
Here, by contrast, value difference signals did not correlate with the strength with which subjective value difference influenced behavior across participants (β V ; no suprathreshold voxels at p = 0.01 uncorrected), suggesting that this region's functions during choice are separate from those that bias behavior. Figure 5. Opposite coding of relative choice value in dorsal medial frontal cortex. A , Regions where the BOLD signal encodes an inverse rather than a positive difference between chosen and unchosen reward magnitudes and a positive rather than an inverse difference between chosen and unchosen effort (i.e., the exact inverse of the conjunction shown in Fig. 2 A ). The only region detected at a lenient threshold ( p = 0.01 uncorrected; no regions survive FWE correction) is a nearby but anatomically distinct region in medial prefrontal cortex previously suggested to serve as a choice comparator ( Wunderlich et al., 2009 ; Hare et al., 2011 ). B , However, in this region, the BOLD signal does not relate to behavior, unlike in the caudal portion of dACC (see Fig. 2 F ). Discussion Choices requiring the consideration of motor costs are ubiquitous in everyday life. Unlike other types of choices, they require knowledge of the current state of the body and its available energy resources, to weight physical costs against potential benefits. How this trade-off might be implemented neurally remains largely unknown. Here, we identified a region in the caudal part of dACC as the key brain region that carried the requisite signatures for effort-based choice: dACC represented the costs and benefits of the chosen relative to the alternative option, integrated effort and reward into a combined subjective value signal, computed the subjective value difference between the chosen relative to the alternative option, and activity here correlated with the degree to which participants' choices were driven by value. 
ACC integrates effort and reward information Work from several lines of research suggests that ACC may be a key region for performing cost-benefit integration for effort-based choice. For example, lesions to ACC (but not OFC) result in fewer choices of a high effort/high reward compared with a low effort/low reward option: yet such animals still choose larger reward options when effort costs for both options are equated, implying that ACC is not essential when decisions can be solved only by reward ( Walton et al., 2003 , 2009 ; Schweimer and Hauber, 2005 ; Rudebeck et al., 2006 ; Floresco and Ghods-Sharifi, 2007 ). BOLD responses in human ACC reflect the integrated value of effort-based options in the absence of choice ( Croxson et al., 2009 ). Further, single neuron recordings from ACC encode information about both effort and reward ( Shidara and Richmond, 2002 ; Kennerley et al., 2009 , 2011 ) and integrate costs and benefits into a value signal ( Hillman and Bilkey, 2010 ; Hosokawa et al., 2013 ; Hunt et al., 2015 ). ACC thus appears to have a critical role in integrating effort and reward information to derive the subjective value of performing a particular action. ACC encodes a choice comparison signal However, from the aforementioned work, it remained unclear whether cost-benefit values of different choice options are actually compared in ACC, or whether reward and effort may be compared in separate neural structures and the competition resolved between areas. When one choice option is kept constant, the value of the changing option correlates perfectly with the value difference between the options ( Kurniawan et al., 2010 ; Prévost et al., 2010 ; Bonnelle et al., 2016 ), which precludes distinguishing between valuation and value comparison processes. This is similarly true when only one option is offered and accepted/rejected ( Bonnelle et al., 2016 ). 
We here varied both options' values from trial to trial, which allowed us to identify a choice comparison signal in the ACC, and thus the essential neural signature implicating this area in decision making. First, we show that a region in the caudal portion of dACC encodes separate difference signals for effort and reward. The direction of these difference signals aligns with their respective effect on value, with effort decreasing and reward increasing an option's overall value. Second, we demonstrate a comparison signal between integrated option values. We used a novel behavioral model ( Klein-Flügge et al., 2015 ) to characterize participants' individual tendency to discount reward given the level of motor costs. Using the resultant model-derived subjective values, we identified the dACC as a region encoding a combined value difference signal. Indeed, our model provided a better characterization of the BOLD signal than other models of effort discounting, and dACC activity was related to individuals' “distortions” of effort. This resolves an important question showing that effort and reward information are indeed brought together within a single region to inform choice. Finally, this value comparison signal also varied as a function of how much value influenced choices across participants. This result further strengthens the idea that the dACC plays a crucial role in guiding choice, rather than merely representing effort or reward information. In our task, no other region exhibited similar dynamics, even at lenient thresholds. Influences from “effort” and “reward” circuits Nevertheless, an important question remains: do the regions that preferentially encode reward or effort have any influence on choice? To examine this question, we looked for regions that explain participants' tendency to avoid effort, or to seek reward. This analysis revealed two distinct circuits. 
Whereas signals in vmPFC reflected the relative benefits as a function of how reward-driven participants' choices were, a network more commonly linked to action selection and effort evaluation ( Croxson et al., 2009 ; Kurniawan et al., 2010 , 2013 ; Prévost et al., 2010 ; Burke et al., 2013 ; Bonnelle et al., 2016 ), including SMA and putamen, encoded relative effort as a function of how much participants tried to avoid energy expenditure. It will be of interest to examine in future work how these circuits interact, and how different modulatory systems contribute to this interplay (see e.g., Varazzani et al., 2015 ). This question should be extended to situations when different costs coincide or different strategies compete (for one recent example, see Burke et al., 2013 ), or when information about effort and reward has to be learned ( Skvortsova et al., 2014 ; Scholl et al., 2015 ). Converging evidence for multiple decision circuits Our results contribute to an emerging literature demonstrating the existence of multiple decision systems in the brain which are flexibly recruited based on the type of decision ( Rushworth et al., 2012 ). One well-studied system concerns choices where costs are directly tied to outcomes (e.g., risk, delay). During this type of choice, vmPFC encodes the difference between the chosen and unchosen options' cost-benefit value ( Kable and Glimcher, 2007 ; Boorman et al., 2009 ; Philiastides et al., 2010 ; Hunt et al., 2012 ; Kolling et al., 2012 ), consistent with the decision impairments observed after vmPFC lesions ( Noonan et al., 2010 ; Camille et al., 2011a , c). Other types of choices, however, rely on other networks ( Kolling et al., 2012 ; Hunt et al., 2014 ; Wan et al., 2015 ). In the present study, decisions required the integration of motor costs, and we show that for this, dACC, rather than vmPFC, plays a more central role. 
vmPFC did not encode overall value or the difference in value between the options in our task; in our hands, vmPFC carried no information about effort costs, consistent with previous proposals ( Prévost et al., 2010 ; Skvortsova et al., 2014 ). Functionally dissociable anatomical subregions of mPFC The location in the dACC identified here is distinct from a more anterior and dorsal region in medial frontal cortex (in or near pre-SMA) where BOLD encodes the opposite signal: a negative value difference ( Wunderlich et al., 2009 ; Hare et al., 2011 ). It is also more posterior than a dACC region involved in foraging choices ( Kolling et al., 2012 ). The cluster of activation identified here extends from the cingulate gyrus dorsally into the lower bank of the cingulate sulcus, and it is sometimes also referred to as midcingulate cortex (MCC) ( Procyk et al., 2016 ) or rostral cingulate zone ( Ridderinkhof et al., 2004 ). According to a recent connectivity-based parcellation, our activation is on the border of areas RCZa (34%), RCZp (33%) and area 24 (48%) ( Neubert et al., 2015 ). While it shares some voxels with the motor cingulate regions in humans ( Amiez and Petrides, 2014 ), most parts of our cluster are more ventral and located in the gyral portion of ACC (for a discussion of functionally dissociable activations in ACC, see also Kolling et al., 2016 ). Relevance for disorders of motivation Our findings in the dACC speak to an important line of research showing deficits in effort-based decision making in a number of disorders, including depression, negative symptom schizophrenia, and apathy ( Levy and Dubois, 2006 ; Cléry-Melin et al., 2011 ; Treadway et al., 2012 , 2015 ; Fervaha et al., 2013 ; Gold et al., 2013 ; Hartmann et al., 2014 ; Pizzagalli, 2014 ; Yang et al., 2014 ; Bonnelle et al., 2015 ). Patients with these disorders often show a reduced ability to initiate effortful actions to obtain reward. 
Crucially, they also exhibit abnormalities in ACC and basal ganglia circuits, as well as other regions processing information about the autonomic state, including the amygdala and some brainstem structures ( Drevets et al., 1997 ; Botteron et al., 2002 ; Levy and Dubois, 2006 ). Furthermore, individuals with greater behavioral apathy scores show enhanced recruitment of precisely the circuits implicated in the present study, including SMA and cingulate cortex, when deciding to initiate effortful behavior ( Bonnelle et al., 2016 ). This is interesting because apathy correlates with increased effort sensitivity (β E ) ( Bonnelle et al., 2016 ), and we found that individuals with increased effort sensitivity showed enhanced recruitment of SMA and brainstem regions for encoding the effort difference ( Fig. 3 B ). In other words, when committing to a larger (relative) effort, these circuits were more active in people who were more sensitive to effort. As discussed by Bonnelle et al. (2016 ), we cannot infer cause and effect, but it is possible that the neural balance between activations in reward and effort systems might be different in individuals with greater sensitivity to efforts (such as apathetic individuals). This may be why these people avoid choosing effortful options more often than others. It also provides a possible connection between the network's specific role in effort-based choice and its functional contribution to everyday life behaviors. Footnotes This work was supported by Wellcome Trust 086120/Z/08/Z to M.C.K.-F., 096689/Z/11/Z to S.W.K., and 088130/Z/09/Z and 091593/Z/10/Z to K.F., and European Research Council ActSelectContext 260424 to S.B. We thank Tim Behrens, Laurence Hunt, Matthew Rushworth, Marco Wittmann, and all our laboratory members for helpful discussions on the data; and the imaging and IT teams at the FIL and Sobell Department for their support with data acquisition and handling. The authors declare no competing financial interests. 
This is an Open Access article distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution and reproduction in any medium provided that the original work is properly attributed. Correspondence should be addressed to Dr. Miriam C. Klein-Flügge, Department of Experimental Psychology, University of Oxford, 9 South Parks Road, Oxford OX1 3UD, UK. [email protected] This article is freely available online through the J Neurosci Author Open Choice option.
How do we decide if something is worth the effort? A team of researchers from Oxford University and UCL have been finding out. Every action we take involves a cost to us in physical energy, yet studies about decision-making have tended to look at how we weigh up external costs like risks or time. However, being unwilling to exert effort is a symptom of a range of mental disorders, so understanding how the brain processes decisions about effort versus reward could provide insights into these conditions. In a study supported by the Wellcome Trust and European Research Council, the research team therefore decided to see if there was a distinct brain system involved in weighing up physical costs. Researcher Dr Miriam Klein-Flügge said: "We asked volunteers to make choices involving different levels of monetary reward and physical effort, while they were placed in an MRI scanner. "We found that the decisions they made were influenced by both reward size and effort required, with – unsurprisingly – higher reward, lower effort options being particularly favoured. We then looked for particular brain regions involved in the decision-making." The team found a relevant pattern of activity in three areas of the brain, the supplementary motor area (SMA), dorsal anterior cingulate cortex (dACC) and putamen. Further analysis showed that assessment of effort was centred on the SMA and putamen, with a separate network in the ventromedial prefrontal cortex assessing reward. The dACC encoded the difference between effort and reward as a single value, likely drawing together the results of the two separate neural circuits, and activity in this area was linked with the degree to which each volunteer's choices were driven by the overall value. Dr Miriam Klein-Flügge said: "This research fits with and adds to findings from other studies. There is not one single decision-making system in the brain but a set of them that are combined flexibly depending on the decision we are faced with. 
We have identified the system related to effort, a common factor in many decisions. "It offers an insight for research into a number of disorders including depression, apathy and negative symptom schizophrenia. Patients with these disorders often show a reduced ability to do something effortful to obtain a reward. Our volunteers were different in their sensitivity to effort, and showed different levels of neural activity, suggesting that people can have different balances between their reward and effort systems. "While further research is needed, it may be that some disorders involve a particularly acute imbalance between these different decision systems." Dr Raliza Stoyanova, in the Neuroscience and Mental Health team at Wellcome, said: "There are many situations that require us to weigh up effort and reward, for instance deciding whether the extra long walk to get our favourite type of sandwich is worth more than a shorter walk for a less favoured sandwich. "Although we know a lot about how the brain reacts to rewards of different sizes, the current study is the first to shed light on a brain region that is involved in comparing the size of a reward and the physical effort required to get it. Given that a range of psychiatric conditions involve difficulties in reaching rewards when they require effort, these results open up a number of avenues for future research to test more precisely if, and how, the reward/effort balance may go awry." The paper, Neural Signatures of Value Comparison in Human Cingulate Cortex during Decisions Requiring an Effort-Reward Trade-off, is published in the Journal of Neuroscience.
10.1523/JNEUROSCI.0292-16.2016
Biology
Maintaining the unlimited potential of stem cells
Jovylyn Gatchalian et al. A non-canonical BRD9-containing BAF chromatin remodeling complex regulates naive pluripotency in mouse embryonic stem cells, Nature Communications (2018). DOI: 10.1038/s41467-018-07528-9 Journal information: Nature Communications
http://dx.doi.org/10.1038/s41467-018-07528-9
https://phys.org/news/2018-12-unlimited-potential-stem-cells.html
Abstract The role of individual subunits in the targeting and function of the mammalian BRG1-associated factors (BAF) complex in embryonic stem cell (ESC) pluripotency maintenance has not yet been elucidated. Here we find that the Bromodomain containing protein 9 (BRD9) and Glioma tumor suppressor candidate region gene 1 (GLTSCR1) or its paralog GLTSCR1-like (GLTSCR1L) define a smaller, non-canonical BAF complex (GBAF complex) in mouse ESCs that is distinct from the canonical ESC BAF complex (esBAF). GBAF and esBAF complexes are targeted to different genomic features, with GBAF co-localizing with key regulators of naive pluripotency, which is consistent with its specific function in maintaining naive pluripotency gene expression. BRD9 interacts with BRD4 in a bromodomain-dependent fashion, which leads to the recruitment of GBAF complexes to chromatin, explaining the functional similarity between these epigenetic regulators. Together, our results highlight the biological importance of BAF complex heterogeneity in maintaining the transcriptional network of pluripotency. Introduction Embryonic stem cells (ESCs) have the remarkable ability to self-renew and give rise to any of the over 200 different mature cell types, the property of pluripotency. To maintain ESC identity, the genome is precisely controlled so that only stem cell-specific transcription programs are turned on while lineage-specific programs are silenced 1 , 2 . This control is in part achieved by ATP-dependent chromatin remodeling complexes, which regulate chromatin structure 3 , 4 . In particular, several subunits of the mammalian BRG1-associated factors (BAF) chromatin remodeling complex are required for formation of the inner cell mass (ICM) of the embryo and for maintenance of ESCs in vitro 5 , 6 , 7 , 8 . 
In mouse ESCs, a specialized BAF complex exists in the form of esBAF, a ~2 MDa complex composed of 9–11 subunits including BRG1, BAF155, BAF47, ARID1A, BAF45A/D, BAF53A, BAF57, SS18, and BAF60A/C 9 , 10 . Genome-wide studies using conditional deletion of BRG1, the ATPase component of the complex, revealed that esBAF collaborates with the master pluripotency regulators OCT4, SOX2, and NANOG in modulating the expression of ESC-specific genes while repressing genes associated with differentiation 11 , 12 . In addition to esBAF, the related polybromo-associated BAF (PBAF) complex is also present, which contains distinct components including BRD7, ARID2, and PBRM1 13 , 14 , 15 . Upon differentiation, the BAF complex becomes even more diversified, assembling the different subunits in a combinatorial manner, which imparts to the complex its cell type- and developmental stage-specific activities 16 , 17 , 18 . However, it is unclear how distinct BAF complex assemblies and the unique subunits therein contribute to BAF-dependent functions. Mass spectrometric studies in mouse ESCs identified BRD9 as a novel BRG1-interacting partner 9 , which was subsequently shown to be a dedicated BAF complex subunit in a leukemic cancer cell line 19 . BRD9 harbors a single bromodomain (BD), an epigenetic reader domain that recognizes acetylated lysines on histones and non-histone proteins 20 , 21 . However, the role of BRD9 in BAF complex targeting and activity remains uncharacterized. Here we find that in mouse ESCs, BRD9 and GLTSCR1/1L are defining members of the smaller, non-canonical GLTSCR1/1L-containing BAF complex or GBAF complex 22 . We perform IP-mass spectrometry characterization of the GBAF complex in mouse ESCs to define shared and unique subunits. GBAF is distinct from esBAF as it lacks BAF47, BAF57, and ARID1A.
Chromatin IP (ChIP)-Seq analyses show that esBAF and GBAF are uniquely targeted to sites across the genome and are co-bound with distinct sets of pluripotency transcription factors (TFs). The genomic binding of the GBAF complex is consistent with its role in maintaining the naive pluripotent state, as inhibition of BRD9 results in transcriptional changes representative of a primed epiblast-like state. Conditional deletion of esBAF subunit ARID1A is not highly correlated with this transition, indicating that GBAF complexes have a functionally specific role in regulating this pathway. We demonstrate that BRD9 is targeted to chromatin via its BD, highlighting the role of this reader domain in GBAF complex targeting. Finally, we provide evidence for bromodomain and extra-terminal domain protein 4 (BRD4)-mediated targeting of the GBAF complex that is BD-dependent, which accounts for their complementary roles in regulating the naive pluripotency transcriptional network. Our studies not only provide important insight into BRD9 function but also add to our understanding of how diversity in BAF complex assembly contributes to the intricate control of the ESC transcription program. Results BRD9 regulates transcription for the maintenance of ESCs To determine the specific role of BRD9 in the maintenance of ESC pluripotency, we made use of the BRD9-BD inhibitor, I-BRD9, which specifically inhibits binding of the BRD9-BD to acetylated residues 23 . By assaying for cell number at different time points, we found a time- and I-BRD9 concentration-dependent decrease in cell proliferation, pointing to a BD-dependent role for BRD9 in maintaining mouse ESCs in serum/leukemia inhibitory factor (LIF) conditions (Fig. 1a ). Because I-BRD9 has been shown to also inhibit BRD4 in vitro 23 , we tested two other BRD9-BD inhibitors (BRD9i), BI-9564 and TP472, which also yielded similar concentration-dependent growth defects in ESCs (Fig. 1b and Supplementary Figure 1a ).
Consistent with the specific activity of the BRD9i, short hairpin RNA (shRNA)-mediated knockdown of Brd9 in mouse ESCs with three independent shRNAs resulted in a near complete loss of BRD9 protein expression relative to that of a scrambled control (Supplementary Figures 1 b, 1c ), leading to significant reduction in ESC growth at 2 and 4 days post shRNA transduction (Fig. 1c ). Fig. 1 BRD9 is part of the ESC pluripotency transcriptional regulatory network. a Time course experiment assessing mouse ES cell number after treatment with DMSO or I-BRD9 at either 1.11 or 3.33 µM. Error bars represent one standard deviation from the mean of biological replicates, n = 9. Two-tailed t -test was performed to obtain the p values. * p < 0.05, ** p < 0.01, *** p < 0.001. b As in a , but for ESCs treated with DMSO or BI-9564 at either 300 nM or 1 µM; n = 6. c As in a , but for ESCs transduced with shRNA against a scrambled control or Brd9 ; n = 3 for each sh Brd9 experiment. d Scatterplot of average log2 fragments per kilobase of transcript per million mapped reads (FPKM) mRNA expression level in DMSO-treated ESCs against log2 fold change (FC) in expression after 24 or 48 h of 10 µM I-BRD9 treatment in serum/LIF conditions. Red and blue indicate differential expression increased or decreased by 1.5-fold or more (Benjamini-Hochberg FDR < 0.05), respectively, from two independent biological replicates. e Venn diagram of differentially expressed genes (DEGs) in the 24 and 48 h I-BRD9 RNA-Seq datasets. f Significance of the I-BRD9 DEG enrichment in each gene ontology process. FDR values were calculated according to Benjamini-Hochberg multiple testing (GSEA). g GSEA enrichment plots of either the significantly downregulated (left) or upregulated (right) I-BRD9 DEGs using the shcontrol and sh Brd9 RNA-Seq dataset. NES normalized enrichment score, FWER p value familywise error rate calculated using single tail tests on the positive or negative side of the null distribution.
h Box plot of the log2 FC in expression upon sh Brd9 knockdown, grouped into quintiles according to the genes’ log2 FC upon I-BRD9 addition. Shades of blue and red correspond to the degree of log2 FC down- or upregulation in I-BRD9/DMSO. Black bars indicate the mean per quintile. i As in g , GSEA enrichment plot of BRG1-dependent genes in ESCs using the I-BRD9 RNA-Seq dataset. Source data for a – c are provided as Source Data file We next addressed whether BRD9 exerts its function in pluripotency by regulating gene expression. We treated ESCs with I-BRD9 and assessed mRNA expression changes at 24 and 48 h post treatment using high-throughput sequencing (RNA-Seq). We observed dramatic changes in gene expression upon treatment with I-BRD9 (Fig. 1d ; 1.5-fold change (FC), false discovery rate (FDR) < 0.05, Benjamini-Hochberg). At 24 h, I-BRD9 treatment resulted in 351 differentially expressed genes (DEGs), the vast majority of which were also present among the 929 DEGs changed following 48 h of I-BRD9 treatment (Fig. 1e ). In both instances, we observed more downregulated than upregulated genes, suggesting that BRD9 generally maintains gene expression. Gene ontology (GO) analysis of I-BRD9-dependent genes revealed that BRD9 functions primarily in regulating tissue development and cellular differentiation (Fig. 1f ). To confirm the on-target effects of I-BRD9, we performed RNA-Seq in ESCs where Brd9 was knocked down using a pooled collection of three shRNAs. We observed a high degree of concordance between the mRNA expression changes with I-BRD9 treatment and Brd9 knockdown. Gene set enrichment analysis (GSEA) showed that I-BRD9-downregulated genes are positively enriched among genes that decrease upon Brd9 knockdown, while I-BRD9-upregulated genes are enriched among genes that increase upon Brd9 knockdown (Fig. 1g ). Conversely, the change in gene expression upon Brd9 knockdown was strongly correlated with the change upon I-BRD9 treatment (Fig. 1h ).
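The DEG thresholds used above (at least 1.5-fold change at Benjamini-Hochberg FDR < 0.05) can be sketched as a small filtering routine. This is a minimal illustration with made-up fold changes and p-values, not the paper's pipeline; the study's gene-level statistics came from standard RNA-Seq tooling.

```python
import math

def benjamini_hochberg(pvals):
    """Return Benjamini-Hochberg adjusted q-values for a list of p-values."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    qvals = [0.0] * m
    prev = 1.0
    # Walk from the largest p-value down, enforcing monotonic q-values
    for rank in range(m, 0, -1):
        i = order[rank - 1]
        prev = min(prev, pvals[i] * m / rank)
        qvals[i] = prev
    return qvals

def call_degs(log2_fcs, pvals, fc_cutoff=1.5, fdr_cutoff=0.05):
    """Flag genes passing both the fold-change and the FDR threshold."""
    qvals = benjamini_hochberg(pvals)
    min_abs_log2 = math.log2(fc_cutoff)
    return [abs(fc) >= min_abs_log2 and q < fdr_cutoff
            for fc, q in zip(log2_fcs, qvals)]
```

For example, `call_degs([2.0, 0.3, -1.0, 1.5], [0.001, 0.01, 0.02, 0.8])` flags only the first and third genes: the second fails the fold-change cutoff and the fourth fails the FDR cutoff.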
Finally, to determine whether BRD9’s role in transcriptional regulation is in the context of BAF complexes, we performed GSEA to compare our I-BRD9 RNA-Seq data with a publicly available dataset of BRG1-dependent genes in mouse ESCs 12 . We saw a significant enrichment of genes downregulated by I-BRD9 in the BRG1 dataset (Fig. 1i ). Consistent with this, inhibition of BRG1 activity by the small-molecule inhibitor PFI-3 caused a similar decrease in ESC cell growth (Supplementary Figure 1d ). These data indicate that BRD9 and BRG1 cooperate to regulate a subset of genes in ESCs that are critical for ESC maintenance. BRD9 associates with a non-canonical BAF complex in ESCs Given the functional impact of BRD9 loss or inhibition on ESC pluripotency, we sought to further characterize BRD9’s association with BAF complexes in mouse ESCs. To this end, we performed IP of endogenous BRD9 from micrococcal nuclease-treated nuclear lysates under high stringency wash conditions and analyzed the precipitated proteins by mass spectrometry (Supplementary Table 1 ). The results validated previous observations that BRD9 interacts with BRG1, BAF155, the BCL7 proteins, and SS18 (Fig. 2a , Supplementary Table 1 ) 19 . We identified other BAF subunits, including BAF60A, BAF155, BAF53A, GLTSCR1L, and GLTSCR1, which were among the top hits in the mass spectrometry results. Surprisingly, however, we did not recover established BAF/PBAF subunits ARID1A, BAF47, BAF57, PBRM1, or BRD7, suggesting that we had identified a non-canonical BRD9-containing BAF complex. Fig. 2 BRD9 defines a distinct, non-canonical BAF complex. a Immunoprecipitation (IP)-mass spectrometry using BRD9 or IgG antibody from mouse ESCs. The plot shows the spectral count fold change in BRD9 IP relative to IgG and the corresponding AC test p values, calculated with two technical replicates using PatternLab. In orange are proteins that satisfy FC ≥ 2, AC test p < 0.05.
b Immunoblotting analysis of fractions after mouse ESC nuclear lysates were subjected to a density sedimentation assay in 10–30% glycerol gradient. LMW and HMW indicate lower and higher molecular weights, respectively. Nonspecific bands are marked with an asterisk. Molecular weights from ladder are indicated. c Immunoprecipitation (IP) experiments from mouse ESC nuclear lysates using antibodies against BRG1, BRD9, and BAF47. Blots developed using chemiluminescence are marked with double asterisks; each IP was taken from a different exposure. d Depletion IP experiment from ESCs using antibodies against BRG1 or BAF155. Each lane shows the remaining proteins after each successive IP, labeled 1 through 4. e IP experiment from ESCs using antibodies against IgG or BRD9, which was incubated in increasing concentration of urea. f Schematic of the esBAF, GBAF, and PBAF complexes in mouse ESCs. Colored in orange, pink, and mustard yellow are subunits that define each individual complex. In green is the enzymatic component, BRG1. Source data are provided as a Source Data file We next isolated ESC nuclear extracts and subjected the proteins to a glycerol gradient density sedimentation assay. Our results show that BRD9 co-sediments earlier in the gradient with BRG1, GLTSCR1, BAF155, and BAF60A, indicating that it associates with the lower molecular weight GBAF complex (Fig. 2b ). BRD9 did not sediment with esBAF subunits, defined by the presence of ARID1A, BAF57, and BAF47, or with PBAF, which uniquely incorporates PBRM1 and BRD7. IP experiments with specific antibodies against BRG1, BRD9, and BAF47 verified that BRD9 is in a BAF complex distinct from esBAF, as it does not interact with ARID1A, BAF57, or BAF47 (Fig. 2c ). The reciprocal IP demonstrated that BAF47 associates with known esBAF subunits, but does not interact with BRD9 (Fig. 2c ). In addition, we confirmed that GLTSCR1 and GLTSCR1L each exclusively associate with BRD9, but not with BAF47 (Fig. 2c ).
To demonstrate that BRD9 is a dedicated GBAF subunit, we depleted ESC nuclear lysates of either BRG1 or BAF155 using specific antibodies and monitored BRD9 levels after each round of four IP reactions. This showed that BRD9 is quickly depleted with BRG1 or BAF155, suggesting that BRD9 is exclusive to BAF complexes (Fig. 2d ). Finally, we tested how stable the associations are between BRD9 and its interacting proteins using urea-based denaturation studies. We found that BRG1, BAF60A, and GLTSCR1L remain associated with BRD9 in up to 1 M urea (Fig. 2e ). In summary, our biochemical studies demonstrate that BRD9 forms a stable BAF complex that is distinct from the canonical esBAF and PBAF complexes and uniquely contains GLTSCR1/1L in mouse ESCs (Fig. 2f ). The presence of this non-canonical BAF complex is not unique to ESCs, as we also observed it in HCT116 cells, a human colorectal cancer line (Supplementary Figures 2 a, 2b ). Moreover, a recent study showed that BRD9 associates with a smaller GLTSCR1/1L-containing BAF complex termed the “GBAF complex” in a human monocytic cell line and HEK293Ts 22 , which is also supported by mass spectrometry studies defining the assembly of this non-canonical complex in HEK293T cells, synovial sarcoma, and malignant rhabdoid cell lines 24 . Collectively, these results argue that the presence of GBAF is a general phenomenon as it is found in both human cancer cells and mouse ESCs, the latter of which represents a karyotype normal background. For continuity, and to distinguish it from esBAF and PBAF complexes, we will hence refer to the non-canonical BAF complex as the GBAF complex. GBAF and esBAF localize to distinct genomic elements To identify the genomic localization of the non-canonical GBAF complex in ESCs, we employed ChIP-Seq with antibodies against BRD9 and BRG1. We identified 35,932 BRD9-bound sites and 73,011 BRG1-bound sites (Fig. 3a ). 
Eighty-one percent of sites occupied by BRD9 are also bound by BRG1 and their localization on the genome is strikingly similar, consistent with a strong biochemical interaction (Fig. 3b, c ). To establish whether GBAF localizes distinctly from esBAF, we also performed ChIP-Seq against ARID1A, an esBAF subunit which is not present in GBAF. We identified 20,677 ARID1A-bound sites, which showed 72% overlap with BRG1 targets (Fig. 3a–c ). Only 33% of ARID1A peaks overlapped with BRD9, suggesting that these complexes are independently targeted, but are co-bound at some sites, in agreement with several studies showing co-localization of chromatin remodeling complexes on the genome 25 , 26 . Co-occurrence binding analysis with publicly available histone modification data in mouse ESCs revealed a difference between BRD9 and ARID1A targets. Specifically, we found that BRD9 is more enriched at sites that are marked with H3K4me3, an epigenetic mark more commonly found at promoters (Fig. 3d ). ARID1A, on the other hand, showed stronger enrichment at sites marked with H3K4me, which is strongly associated with enhancers. Both ARID1A and BRD9 are bound at sites marked with H3K27ac, but not H3K27me3. We further examined BRD9 and ARID1A localization to different enhancer classes by defining poised (H3K4me-positive, H3K27ac-negative), active (H3K4me- and H3K27ac-positive), and super enhancers (defined by H3K4me-positive, H3K27ac-positive, and high Mediator binding 27 ). Here we observed similar enrichment of ARID1A and BRD9 at active enhancers whereas at both poised and super enhancers, ARID1A showed a relatively stronger enrichment than BRD9 (Fig. 3d ). BRG1 is known to bind to both promoters and distal sites 11 and further analysis of BRD9 and ARID1A binding at these different regions showed that the two have different binding proclivities, with BRD9 binding more strongly to promoters and ARID1A more strongly to distal sites (Fig. 3e ). 
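The co-binding percentages above (e.g. 81% of BRD9 sites also bound by BRG1) come down to interval intersection between peak sets. A minimal sketch with toy `(chrom, start, end)` peaks follows; the study used HOMER's mergePeaks, which additionally scores the significance of the overlap.

```python
from collections import defaultdict

def overlap_fraction(peaks_a, peaks_b):
    """Fraction of peaks in A overlapping at least one peak in B.

    Peaks are (chrom, start, end) tuples with half-open coordinates.
    A plain per-chromosome scan; fine for a sketch, not for 70k peaks.
    """
    by_chrom = defaultdict(list)
    for chrom, start, end in peaks_b:
        by_chrom[chrom].append((start, end))
    hits = sum(
        1 for chrom, start, end in peaks_a
        if any(s < end and start < e for s, e in by_chrom[chrom])
    )
    return hits / len(peaks_a)
```

With three toy peaks in set A and one of them intersecting set B, the function returns 1/3; run in each direction it gives the asymmetric percentages quoted in the text.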
Furthermore, BRD9 and BRG1, but not ARID1A, localize to topologically associating domain (TAD) boundaries, which are also strongly enriched for H3K4me3 (Fig. 3f ) 28 . These data suggest that esBAF and GBAF are preferentially targeted to different regions of the genome. Fig. 3 GBAF and esBAF localize to distinct genomic elements. a Venn diagram overlap of BRG1, BRD9, and ARID1A ChIP sites, with n representing the number of observed peaks. b Heat map of BRD9 and BRG1 ChIP signal ± 3 kilobases (kb) centered on the BRD9 peak, ranked according to BRD9 read density. c Significance of binding overlap between BRD9, BRG1, and ARID1A ChIP sites. The natural log of p values were calculated using hypergeometric distribution (mergePeaks.pl in HOMER). d Co-occurrence analysis showing the natural log of the ratio of the observed number of overlapping peaks over the expected values. This was done for BRD9, BRG1, and ARID1A ChIP sites against publicly available ChIP-Seq data for the histone modifications H3K4me, H3K4me3, H3K27me3, and H3K27ac. Poised enhancers are defined by sites that contain H3K4me1 but lack H3K27ac, whereas active enhancers are sites that are positive for both modifications. Super enhancers are H3K4me+, H3K27ac+, and Med1-high sites. e ChIP-Seq signal of BRG1, BRD9, and ARID1A ± 2 kb surrounding BRG1 peak center at promoters and distal sites. f ChIP-Seq signal of BRG1, BRD9, and ARID1A ± 50% size of a topologically associating domain (TAD) around a TAD GBAF and esBAF are co-bound with different factors To understand how GBAF complexes participate in the pluripotency regulatory network, we profiled GBAF and esBAF complex binding sites for enriched TF-binding motifs. We distinguished BRG1-bound sites that uniquely contain either BRD9 (GBAF) or ARID1A (esBAF) and determined the enriched motifs for both (Fig. 4a ).
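Motif enrichment p-values of the kind reported in Fig. 4a (HOMER's cumulative binomial test) can be sketched directly: given n peaks, of which k contain a motif that occurs at some background frequency, the enrichment p-value is the upper binomial tail. The numbers in the usage note are illustrative, not from the paper.

```python
from math import comb

def binomial_enrichment_p(n_peaks, k_with_motif, background_freq):
    """Upper-tail cumulative binomial: probability of observing at least
    k_with_motif motif-containing peaks among n_peaks if the motif
    occurred only at its background frequency."""
    p = background_freq
    return sum(comb(n_peaks, k) * p**k * (1 - p)**(n_peaks - k)
               for k in range(k_with_motif, n_peaks + 1))
```

For example, `binomial_enrichment_p(10, 8, 0.2)` is well below 10⁻⁴, so 8 motif hits in 10 peaks against a 20% background would be called strongly enriched.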
Consistent with previous reports demonstrating the role of BRG1 in OCT4-dependent transcription 12 , the motif for the master pluripotency regulators OCT4/SOX2/TCF/NANOG was the most enriched motif for esBAF complex binding, followed by motifs for the high-mobility group domain-containing Sox family members. In contrast, GBAF complex sites are enriched for the CTCF/CTCFL motif, followed by the zinc finger-containing TFs Kruppel like factor 3 (KLF3) and the Specificity proteins Sp5 and Sp1. TF binding analyses verified the results of the motif search, where we observed greater occupancy of OCT4, SOX2, and NANOG on esBAF sites while CTCF binding was stronger at GBAF sites (Fig. 4b, c ). KLF4 and Sp5 binding was observed at both esBAF and GBAF binding sites, with slightly greater occupancy at GBAF complex binding sites. These data are consistent with prior literature detailing the role of esBAF complexes in regulating the master pluripotency TFs, and further highlight the specific binding pattern of GBAF complexes, suggesting cooperation between GBAF complexes and KLF4 and Sp5. Fig. 4 GBAF and esBAF are co-bound by different pluripotency transcription factors. a Significance of enriched motifs for BRG1 sites uniquely bound by either ARID1A (top) or BRD9 (bottom). p Values were calculated using cumulative binomial distribution (findMotifsGenome.pl in HOMER). b Histogram of ChIP reads of the indicated transcription factor (TF) ± 1 kb surrounding the peak center of sites that are co-bound by either ARID1A and BRG1 or BRD9 and BRG1. 
c Representative genome browser tracks showing co-binding of BRD9 and BRG1 with either KLF4 or Sp5 at promoters (middle and right, respectively) or ARID1A and BRG1 with OCT4, SOX2, and NANOG at distal sites (left) GBAF complexes regulate naive pluripotency Both KLF4 and Sp5 have been shown to regulate naive pluripotency in part through the transcriptional regulation of Nanog , whose downregulation marks the exit of ESCs from the naive state into a state that is primed for lineage specification 29 , 30 , 31 . Interestingly, we observed downregulation of both Nanog and Klf4 in I-BRD9-treated ESCs (Fig. 5a ). To determine if this downregulation was specific to GBAF complexes, we performed RNA-Seq on Arid1a f/f :CreERT2 ESCs treated with ethanol (vehicle) or tamoxifen to induce deletion of Arid1a . In contrast to I-BRD9 treatment, ARID1A loss did not affect the expression of either Nanog or Klf4 (Fig. 5a ). Neither BRD9 inhibition nor ARID1A deletion resulted in significant changes in the core regulators of pluripotency, Pou5f1 or Sox2 , consistent with what has been observed previously in BRG1-deficient ESCs 11 , 12 . These data suggested that GBAF complexes may specifically regulate naive pluripotency. Consistent with this, we observed that I-BRD9-treated ESCs have a flatter morphology that resembles primed or epiblast ESCs (EpiESCs) 32 , 33 , whereas those treated with vehicle maintain the characteristic domed structure of naive ESCs (Fig. 5b ). Furthermore, ESCs cultured with either I-BRD9 or BI-9564 for 6 days have reduced colony-forming activity (Fig. 5c ) and yield fewer cells with alkaline phosphatase (AP) activity (Fig. 5d, e ), which is consistent with EpiESCs being less clonogenic in culture than their naive counterparts 34 . Together, our functional data indicate that BRD9 has an important role in maintaining the naive pluripotent state. Fig. 5 GBAF complexes regulate naive pluripotency.
a Bar graph of mean mRNA expression FC for the pluripotency TFs upon either I-BRD9 treatment of v6.5 ESCs or tamoxifen treatment of Arid1a f/f CreERT2 ESCs (ARID1A KO) over DMSO or ethanol, respectively, from two biological replicates. 1.5-FC decrease is denoted with a dotted line. Error bar = standard deviation. Benjamini-Hochberg FDR < 0.05 is denoted with triple asterisks. b Brightfield images of ESCs treated with either DMSO or I-BRD9 at 3 or 10 µM for 6 days. Scale bar = 150 µm. c Quantification of ESC colonies cultured with the indicated treatment for 6 days. This is representative of two independent experiments. d Representative images of wells containing ESCs treated with the indicated vehicle or BRD9i for 6 days then assayed for alkaline phosphatase (AP) activity. e Quantification of the colonies in d , showing the mean and standard deviation of two biological replicates. Error bars = standard deviation. Source data are provided as Source Data file. f Venn diagram overlap of DEGs with I-BRD9 treatment for 48 h and DEGs with FGF/Activin A addition over serum/LIF (EpiESC/ESC). p value was calculated using hypergeometric test of overlap, with population size being the total number of genes tested ( N = 24,538). g Scatterplot of the mRNA log2 FCs in I-BRD9/DMSO and EpiESC/ESC for the 477 common DEGs in f . Linear regression analysis was performed to calculate the R 2 . Best fit is represented as a pink dashed line. h As in g , but of the mRNA log2 FC of DEGs that are common between Arid1a f/f CreERT2 ESCs treated with tamoxifen/ethanol and EpiESC/ESC ( n = 302). Best fit is represented as a gray dashed line. i Significance of TF binding on the 477 common DEGs in f . p values were calculated using hypergeometric test. In parentheses are the percentages of EpiESC genes that are bound by the corresponding TFs.
j Heat map of mRNA log2 FCs for the indicated genes in EpiESC/ESC, I-BRD9/DMSO, and Arid1a f/f CreERT2 ESCs tamoxifen/ethanol. Source data are provided as Source Data file. To assess this directly, we measured the overlap between I-BRD9 DEGs and a previously published dataset comparing naive mouse ESCs cultured in serum/LIF conditions and EpiESCs generated by culturing in Activin A/FGF4 35 . We found that over half of I-BRD9 DEGs ( n = 477) were present in this EpiESC-ESC dataset (Fig. 5f ). Importantly, there was a significant correlation between genes that were upregulated or downregulated in both Activin A/FGF4 and I-BRD9 conditions (Fig. 5g , R 2 = 0.649, linear regression). In contrast, there was no correlation between the transcriptional changes that occur in the ESC-EpiESC transition and following deletion of Arid1a (Fig. 5h , R 2 = 0.009, linear regression). Thus, BRD9, but not ARID1A, specifically protects naive pluripotency as inhibition of BRD9 results in gene expression changes that closely resemble the primed epiblast-like state. To determine if GBAF complexes regulate the naive pluripotency program through facilitating Sp5 and KLF4, we profiled the occupancy of these factors at genes regulated during the ESC-EpiESC transition. Consistent with the role of KLF4 and Sp5 in maintaining naive pluripotency, we found that ESC-EpiESC genes are significantly occupied by these factors, but not c-Myc (p = 0.057, hypergeometric test), in ESCs (Fig. 5i ). In addition, BRD9 and BRG1 binding are highly significant at these genes, with 82% of ESC-EpiESC genes being co-bound by both. Moreover, BRD9 and BRG1 are frequently co-bound with KLF4 (88% of KLF4 targets 36 ) and Sp5 (88% of Sp5 targets 37 ), consistent with the enrichment of KLF4 and Sp5 motifs at GBAF complex binding sites. KLF4/Sp5-bound gene targets include key regulators of naive pluripotency, such as Nanog , Nr5a2 , Prdm14 , Gli2, Tet2 , Tdgf1 , Fgf4 , and Tcl1 .
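The significance of set overlaps such as Fig. 5f (477 shared DEGs out of a population of N = 24,538 genes) is assessed with a hypergeometric test. A minimal sketch follows; it is exact but unoptimized, so the usage example uses small illustrative counts rather than the paper's.

```python
from math import comb

def hypergeom_overlap_p(population, size_a, size_b, overlap):
    """P(overlap >= observed) when two gene sets of the given sizes are
    drawn independently from a shared population of genes."""
    total = comb(population, size_b)
    upper = min(size_a, size_b)
    return sum(comb(size_a, k) * comb(population - size_a, size_b - k)
               for k in range(overlap, upper + 1)) / total
```

For instance, two 5-gene sets drawn from a 10-gene population coincide completely with probability 1/252, so a perfect overlap would be called significant even at this toy scale.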
As expected, these genes are significantly reduced in the ESC-EpiESC transition (Fig. 5j ). Moreover, these genes are also significantly downregulated by I-BRD9 treatment, indicating that GBAF complexes are required for expression of KLF4/Sp5-bound targets (Fig. 5j ). Several KLF4-bound genes in the pro-differentiation FGF/MEK/ERK pathway were also upregulated in both datasets, including Jun , Fosl2 , and Atf3 , further highlighting the role of BRD9 in collaborating with KLF4 to both maintain the ESC naive state and inhibit ESC priming. These data demonstrate that BRD9 is required for the proper transcription of KLF4/Sp5-dependent targets, consistent with significant overlap of GBAF complex binding with KLF4 and Sp5 in ESCs. In contrast, conditional deletion of Arid1a had variable, and often minimal, effect on the expression of these genes. Thus, GBAF complexes have a functionally specific role in preserving the naive pluripotency of ESCs and preventing transition to the primed state, a function that is not controlled by canonical esBAF complexes. The BD localizes BRD9 to its genomic targets The BD is a well-studied reader domain that recognizes acetylated lysines on histones and non-histone proteins. BRD9’s BD has been shown to bind acetylated histone peptides in vitro, with no clear preference, but its physiological target remains elusive 20 . Therefore, it is unclear whether the BD serves as a targeting module for BRD9. To address this, we performed a cellular fractionation assay from ESCs treated with I-BRD9 for 24 h to inhibit any BD-acetylated lysine interaction. We observed a global reduction of BRD9 from the chromatin fraction that was dependent on I-BRD9 concentration (Fig. 6a, b ). IP experiments against BRD9 with or without I-BRD9 show that GBAF remains intact (Supplementary Figures 3 a, 3b ). We next performed ChIP-Seq against BRD9 24 h after treating ESCs with I-BRD9. 
Consistent with the fractionation assay results, BRD9 binding was markedly diminished upon inhibitor treatment (Fig. 6c ). In fact, at over 12,000 sites, BRD9 occupancy was significantly reduced by at least 1.5-fold (Fig. 6d , Poisson p value < 0.0001), while <200 sites exhibited increased occupancy. The displacement from chromatin is rapid, as we observed loss of BRD9 after 6 h of I-BRD9 treatment at 2226 sites, 61% of which remained down at 24 h (Fig. 6e ). Furthermore, the average BRD9 read density at all of its targets progressively decreased over time with I-BRD9 treatment (Fig. 6f, g ), in support of the BD functioning as a targeting module for BRD9. Fig. 6 The bromodomain mediates BRD9 targeting to chromatin. a Representative immunoblotting analysis of a cellular fractionation assay in mouse ESC lysates after treatment with either DMSO or I-BRD9 at 3 or 10 µM for 24 h. Molecular weights from ladder are indicated. b Quantification of BRD9 chromatin fraction signal normalized to the loading control, Histone H3. Average of three independent experiments; error bars represent one standard deviation from the mean. Source data are provided as Source Data file. c Heat map of BRD9 ChIP signal ± 3 kb surrounding the BRD9 peak center in DMSO- or I-BRD9-treated ESCs, ranked according to BRD9 read density in DMSO. d Scatterplot of log2-transformed BRD9 ChIP tags in DMSO- and I-BRD9-treated ESCs. Blue and red correspond to 1.5-fold decrease or increase of BRD9 tag count in I-BRD9-treated ESCs, respectively (Poisson p value < 0.0001). e Venn diagram overlap of differentially decreased BRD9 ChIP sites after 6 and 24 h of I-BRD9 addition. f Histogram of BRD9 ChIP reads ± 2 kb surrounding the BRD9 peak center in DMSO and after 6 or 24 h of I-BRD9 treatment. g Representative genome browser tracks showing the progressive decrease in BRD9 ChIP reads with I-BRD9 treatment at 6 and 24-h time points. 
h Heat map of log2 FC in mRNA expression for genes annotated to BRD9 ChIP binding sites. Sites are ranked by degree of BRD9 ChIP signal loss after 24 h of I-BRD9 treatment, as indicated by an orange gradient. i Scatterplot of log2-transformed BRD9 ChIP tags in DMSO- and I-BRD9-treated ESCs at sites that are annotated to the genes that are significantly changed upon I-BRD9 treatment. Red line indicates y = x and corresponds to no change. j As in i , but for BRG1 ChIP tags in DMSO- and I-BRD9-treated ESCs. k Histogram of BRG1 ChIP reads ± 1 kb surrounding the BRG1 peak center in DMSO and after 24 h of I-BRD9 treatment, at sites annotated to I-BRD9 DEGs. Numbers of ChIP sites and of genes annotated to these sites are indicated We next assessed the effect of BRD9 displacement from chromatin on gene expression. We found that gene expression is affected at genes that exhibit loss in BRD9 occupancy upon I-BRD9 treatment (Fig. 6h ). Conversely, BRD9 ChIP read density is decreased with I-BRD9 treatment at essentially all sites annotated to genes that are significantly regulated by I-BRD9 (Fig. 6i ). We also performed ChIP-Seq against BRG1 in ESCs after 24 h of I-BRD9 treatment to determine whether BRG1 binding is affected. While there was no global loss of BRG1 from chromatin with I-BRD9 by western blot (Fig. 6a and Supplementary Figure 3c ), BRG1 binding at sites annotated to the I-BRD9 DEGs trended downward (Fig. 6j, k ). Based on the fact that only 17% of total BRG1 is contained in GBAF complexes (Fig. 2b ), we cannot rule out esBAF and PBAF binding at these sites or other compensatory mechanisms. Altogether, our data demonstrate that the BD is essential for targeting BRD9, and is required for BRD9’s role in gene expression. BRD4 recruits GBAF to target genes in a BD-dependent manner BRD4 has also been shown to play a crucial role in maintaining ESC pluripotency by regulating the expression of several genes, including Nanog , Lefty1 , and Rex1 38 , 39 .
Indeed, we observed good correlation between genes from the EpiESC-ESC dataset and DEGs from a published study comparing ESCs treated with either vehicle or JQ1, a small molecule that potently inhibits BRD4-BD binding to acetylated residues (common genes n = 1666) (Supplementary Figure 4a , R 2 = 0.341, linear regression) 39 , 40 . Similarly, gene expression following JQ1 treatment was highly correlated with I-BRD9 treatment at common DEGs ( n = 664) (Fig. 7a , R 2 = 0.646, linear regression). This is not due to I-BRD9 nonspecifically targeting BRD4 and displacing it from chromatin because we observed no loss of BRD4 from the chromatin fraction with I-BRD9 using a cellular fractionation assay in ESCs (Supplementary Figures 4b, 4c ). Likewise, JQ1 has no activity toward BRD9 in vitro 40 . These data suggest that BRD4 and the GBAF complex cooperate in regulating the naive pluripotency program. Fig. 7 BRD4 recruits GBAF to target gene sites in a BD-dependent manner. a Scatterplot of the mRNA log2 FCs in I-BRD9/DMSO and JQ1/vehicle for 664 common DEGs. Linear regression analysis was performed to calculate the R 2 , with the best fit shown as a pink dashed line. b Venn diagram of overlaps between BRG1, BRD9, and BRD4 ChIP binding sites in ESCs, with n representing the number of observed peaks. c ESCs were treated with either DMSO or 200 nM of HDAC inhibitor trichostatin A (TSA) for 6 h prior to nuclear lysate collection then IP experiments were performed against BRD9 with or without I-BRD9. Molecular weights from ladder are indicated. d Quantification of BRD4 signal normalized to BRD9 bait signal then normalized to untreated from two biological experiments labeled a and b. Source data are provided as Source Data file. e Scatterplot of log2-transformed BRD4 ChIP tags in DMSO- and I-BRD9-treated ESCs. Blue and red correspond to 1.5-fold decrease or increase of BRD4 tag count in I-BRD9-treated ESCs, respectively (Poisson p value < 0.0001).
f As in e , but for BRD9 ChIP tags in DMSO- and JQ1-treated ESCs. g Venn diagram of the overlap between JQ1-sensitive and I-BRD9-sensitive BRD9 ChIP sites. h Pie chart showing the number of I-BRD9 DEGs that are BRD9-bound in DMSO and those that significantly lose BRD9 binding upon treatment with I-BRD9, JQ1, or both (FC 1.5, Poisson p value < 0.0001) Full size image This led us to ask whether BRD4 collaborates with the GBAF complex based on a physical interaction. We found that BRD4 and BRD9 engage in a transient interaction, as it is diminished in progressively harsher wash conditions (Supplementary Figure 4d ). This explains why we did not observe peptides corresponding to BRD4 in our IP-mass spectrometry data, which was done under high stringency conditions. Additionally, only a small fraction of BRD4 co-sediments with GBAF complexes in the glycerol gradient sedimentation assay, confirming that it is not a bona fide GBAF subunit (Fig. 2b ). We next investigated if BRD4 and GBAF co-localize on chromatin in ESCs. To this end, we performed ChIP-Seq against BRD4 in ESCs and observed 43,737 sites bound by BRD4 (Fig. 7b ). Comparison with BRD9- and BRG1-binding sites revealed that 69% of BRD4-bound sites are co-bound by BRG1 while 47% are co-bound by BRD9, with 43% being bound by all three. On the other hand, we found that 25% of BRD4-bound sites are also bound by ARID1A (Supplementary Figure 4e ). Altogether, our data suggest that an interaction between BRD4 and the GBAF complex accounts for their overlapping roles in regulating the naive pluripotency transcriptional network. We next asked if the interaction between BRD4 and the GBAF complex is BD-dependent. We found that the interaction between BRD9 and BRD4 was enhanced by treating ESCs with the class I and II histone deacetylase inhibitor trichostatin A (TSA) 6 h prior to nuclear lysate collection, which preserves acetylation on histones and non-histone proteins (Fig. 7c, d ). 
Addition of I-BRD9 reduced this interaction, indicating that the BRD9-BD potentially recognizes an acetylated form of BRD4. In light of this, we considered three possible modes of chromatin targeting: independent recruitment of BRD9 and BRD4 or targeting that is either BRD4-dependent or BRD9-dependent. To distinguish between them, we performed ChIP-Seq in ESCs against BRD4 with or without I-BRD9 and against BRD9 with or without JQ1. Consistent with our fractionation assay, I-BRD9 treatment had minimal effects on BRD4 chromatin targeting with only 492 sites being significantly decreased by 1.5-FC (Fig. 7e , Poisson p value < 0.0001). This ruled out a BRD9-BD-dependent targeting of BRD4. On the other hand, JQ1 treatment resulted in 12,849 sites significantly losing BRD9 (Fig. 7f , Poisson p value < 0.0001), indicating that BRD4-BD is required for BRD9 localization on genomic targets. We compared the JQ1- and I-BRD9-sensitive BRD9 sites and found that there is substantial overlap between them, with 6965 common sites (Fig. 7g ). In addition, when we compared the genes annotated to these common sites and the I-BRD9 DEGs, we found that essentially all of the genes associated with the naive pluripotency program lose BRD9 from chromatin in both JQ1 and I-BRD9 treatments (Fig. 7h ). Altogether, these data point to a BRD4-BD-mediated recruitment of GBAF complexes to target sites that include key genes in the naive pluripotency network. BRD9 function is dispensable in 2i conditions Finally, it was recently shown that BRD4 is dispensable in the in vitro naive or ground pluripotent state, which is achieved by treating ESCs with glycogen synthase kinase 3β and MAP kinase kinase (MEK) inhibitors, commonly known as 2i 39 . We tested if this is also the case for BRD9 function and indeed, treatment of ESCs maintained in 2i conditions with I-BRD9 did not result in the dose-dependent decrease in cell growth that we observed in serum/LIF conditions (Fig. 8a ). 
Consistent with this, I-BRD9-dependent reduction of Nanog and Klf4 expression was blunted when ESCs were cultured in 2i conditions (Fig. 8b ), similar to 2i-mediated rescue of pluripotency gene expression in JQ1-treated ESCs, as was previously shown 39 . Moreover, whereas NANOG protein expression was reduced by I-BRD9 in serum/LIF-cultured ESCs, it remained relatively high with the same treatment in 2i (Fig. 8c, d ). Of note, BRD9 protein levels were decreased in both serum/LIF and 2i with either I-BRD9 or JQ1 treatment, likely due to it being targeted for degradation after being displaced from chromatin. Thus, BRD9 also appears to be non-essential in the ground state, in line with GBAF complexes and BRD4 functioning together in the same regulatory network. Fig. 8 BRD9 is dispensable for regulating the naive pluripotency network in 2i. a Time course experiment assessing mouse ES cell number cultured in 2i conditions (MEK inhibitor PD0325901 and GSK3 inhibitor CHIR99021), after treatment with DMSO or I-BRD9 at either 3 or 10 µM. Error bars represent one standard deviation from the mean of three biological replicates. b Bar graph showing the fold change in pluripotency gene expression after 72 h of treatment with I-BRD9 or JQ1 of ESCs cultured in serum/LIF (S/L) or 2i. Mean of two technical replicates; error bars represent one standard deviation from the mean. Source data are provided as a Source Data file. c Immunoblotting analysis of ESC nuclear lysates after 72 h of treatment with I-BRD9 or JQ1 of ESCs cultured in S/L or 2i. Molecular weights from ladders are indicated. Source data are provided as a Source Data file. d Quantification of the indicated proteins in c , normalized to TATA-binding protein (TBP) loading control then normalized to values in S/L DMSO. Dotted line is y = 1, which represents S/L DMSO. 
e Model of BRD4-mediated recruitment of BRD9/GBAF to sites annotated to naive pluripotency genes Full size image Discussion The mammalian BAF complex is highly polymorphic, assembling its subunits in a combinatorial manner that is cell type- or developmental stage-specific. Here we report that BRD9 associates with the non-canonical GBAF complex in ESCs that is distinct from esBAF and PBAF complexes. GBAF complexes are distinguished by the lack of BAF47, ARID1A, and BAF57, and the unique incorporation of GLTSCR1L or GLTSCR1. The lack of BAF47 is particularly notable due to its abilities to promote BRG1 binding to nucleosomal DNA and to stimulate BRG1’s ATPase and chromatin remodeling activities 41 , 42 . BAF47’s absence suggests that BRD9 or another GBAF subunit can substitute for this function or that BRG1’s enzymatic activity may be different in the context of GBAF. The function of GLTSCR1L or GLTSCR1 in GBAF complexes is not clear. Like other BAF subunits that have several paralogs, GLTSCR1 and GLTSCR1L are incorporated into mutually exclusive GBAF complexes 22 . In mouse ESCs, both paralogs are expressed, which likely explains why genetic deletion of GLTSCR1 in ESCs had no adverse effects on pluripotency 22 . Upon retinoic acid-induced differentiation 43 , Gltscr1 is upregulated 3.6-fold while Gltscr1l is downregulated 3.7-fold, indicating a potential exchange between these mutually exclusive paralogs whereby GLTSCR1 becomes more dominant than GLTSCR1L in GBAF function during specific stages of development. Our studies further establish that in addition to being biochemically distinct, GBAF and esBAF complexes are differentially targeted on the ESC genome. One remarkable difference is GBAF’s localization at TAD boundaries and its strong enrichment at CTCF sites, which was also recently reported in cancer cell lines 24 . This indicates that GBAF could be playing a role in chromatin organization, which warrants future studies. 
The differential targeting on the genome also lends insight into how distinct BAF complexes are functionally integrated into the ESC pluripotency network. esBAF binding is enriched at sites bound by the general pluripotency regulators OCT4, SOX2, and NANOG, consistent with previous work demonstrating that BRG1 facilitates the binding of these factors to their target sites 12 . GBAF complexes, in contrast, support naive pluripotency by collaborating with KLF4 and Sp5. Specifically, genes downstream of the LIF/STAT3 pathway, including Klf4 and its target Nanog , are downregulated in I-BRD9-treated cells, while genes involved in the pro-differentiation FGF/ERK pathway are upregulated. Previous reports have shown that BRG1 regulates the LIF/STAT3 pathway by maintaining accessibility at STAT3-binding sites 44 . While there is significant overlap between BRG1- and BRD9-dependent targets, our data specifically implicate KLF4 targets, suggesting that GBAF complexes play a functionally specific role in this pathway. Our studies thus distinguish functionally specific roles for BAF complex assemblies in maintaining ESC transcriptional programs. Finally, we show that the BD of BRD9 is essential for targeting BRD9 to chromatin and affecting gene expression. In particular, at naive pluripotency gene targets, the BRD9-BD recognizes an acetylated form of BRD4, recruiting GBAF complexes to chromatin (Fig. 8e ). BRD4 and BRD9 have been reported to co-localize at the Myc super enhancer in the mouse cell line model for acute myeloid leukemia, suggesting that recognition of acetylated BRD4 may be a common mechanism of GBAF complex recruitment 45 . Future studies are required to determine in which cell types the interaction occurs, which BRD4 acetylated residue is recognized by the BRD9-BD and what regulates this modification. 
Given the low-affinity interaction of a single BD with an acetylated lysine, it is likely that interactions with other GBAF members, including a previously reported interaction between BRD4 and GLTSCR1 22 , 46 , help stabilize the association between GBAF complexes and BRD4. Indeed, treatment with I-BRD9 does not completely inhibit the interaction between BRD4 and BRD9 (Fig. 7c, d ). It is also important to note that other mechanisms likely contribute to GBAF recruitment, for example BRD9-BD-dependent recognition of modifications on histone proteins or TFs. Indeed, a recent study from Evans and colleagues demonstrated that BRD9 is recruited via a BD-dependent interaction with lysine 91 on the Vitamin D receptor in islet cells 47 . Together, these studies demonstrate that reader domains in BAF complexes can serve as targeting modules for BAF complex recruitment. While BAF complexes contain many such domains, none of these domains has been shown to affect BAF complex targeting, which is primarily mediated through TFs 48 , 49 , 50 . It is assumed that the multivalent binding from multiple domains ensures appropriate targeting and buffers BAF complexes against the effects of single domain inhibition. It is thus significant that BRD9-BD inhibition leads to a rapid loss of BRD9 from chromatin, demonstrating that BAF complexes can be recruited or stabilized by interactions with chromatin. In summary, our studies provide compelling evidence that BRD9 functions within the naive pluripotency regulatory network by associating with a non-canonical GBAF complex, which is recruited to target sites via BD-dependent recognition of BRD4. This further expands the function of BAF complexes in stem cell biology and contributes to our understanding of how biochemical diversity in BAF complex assembly provides increased regulatory control in transcriptional programs. 
Methods Cell culture v6.5 mouse ESCs (Salk Institute Transgenic Core) and Arid1a f/f ;ActinCreERT2 ESCs 51 , 52 were cultured in Knockout™ Dulbecco's modified Eagle's medium (Thermo Fisher Sci #10829018) supplemented with 15% ES-qualified serum (Applied Stem Cell Inc. #ASM-5007) and Knockout™ Serum Replacement (Thermo Fisher Sci #10828028), 2 mM l -glutamine (Gibco #35050061), 10 mM HEPES (Gibco #15630080), 1 mM sodium pyruvate (Gibco #11360070), 100 U mL −1 penicillin/streptomycin (Gibco #15140122), 0.1 mM non-essential amino acids (Gibco #11140050), 0.1 mM beta-mercaptoethanol (Gibco #21985023), and LIF. ESCs were maintained on gamma-irradiated mouse embryonic fibroblast (MEF) feeders for passage or gelatin-coated dishes for assays at 37 °C, 5% CO 2 with daily media changes and passaged every other day. For ESCs cultured in 2i, media as described above were supplemented with 3 µM CHIR99021 (LC Laboratories C556) and 1 µM PD0325901 (BioTang 391210–10–9) and cells were grown on gelatin-coated dishes. Arid1a f/f ;ActinCreERT2 ESCs were treated with either vehicle or 1 µM 4-hydroxytamoxifen (Sigma, dissolved in ethanol) for up to 24 h to induce Arid1a deletion. HCT116 colorectal cancer cells were purchased from Horizon Discovery (American Type Culture Collection: CCL-247) and cultured in RPMI (Life Technologies #11875–085) supplemented with 10% fetal bovine serum (Omega Sci FB-11 Lot #419414) and 100 U mL −1 penicillin/streptomycin (Gibco #15140122). Stocks of the following small-molecule inhibitors were made in dimethyl sulfoxide (DMSO): I-BRD9 (Cayman Chemicals #17749), BI-9564 (Cayman Chemicals #17897), TP472 (Tocris #6000), (+/−) -JQ1 (Sigma #SML0974), PFI-3 (Sigma #SML0939), and TSA (Sigma T8552). For phenotypic assays, ESCs were treated with increasing concentrations of inhibitors. For IP experiments, ESCs were treated with either vehicle or 200 nM TSA for 6 h prior to nuclear lysate collection. 
List of shRNAs Brd9 #1: TTTATTTCTTCTTTCATCTTTG (Addgene #75130). The hairpin was restriction enzyme digested from the retroviral vector with Mlu I and Xho I and ligated into pGipZ lentiviral vector (NEB Quick Ligation Kit). Non-targeting hairpin in pGipZ was used as a control. Brd9 #2: ATCAGGCTCAGGTGCGTTC (Dharmacon V3SM11241–231713739) and Brd9 #3: TTGAGTGATCACCACCTGT (Dharmacon V3SM11241–233485164) were used as provided. Non-targeting hairpin (Dharmacon VSC11709) was used as a control. Lentivirus preparation and ESC infection HEK293T cells were transfected with the lentiviral constructs and packaging plasmids Md2G and psPAX2 using polyethylenimine-mediated transfection. Forty-eight hours post transfection, the media containing the virus was collected, filtered, and centrifuged at 70,952 × g for 2 h at 4 °C. The viral pellet was resuspended in 1× phospho-buffered saline (PBS) and stored at −80 °C until use. v6.5 mouse ESCs were infected in suspension with the concentrated virus for 1–2 h with 5 µg mL −1 of polybrene then plated onto MEF feeders for incubation with the virus overnight. Media was changed the next day. Twenty-four hours later, shRNA-expressing cells were selected with 1 µg mL −1 of puromycin on puromycin-resistant MEFs for 48 h. Cell growth assay For hairpin-transduced v6.5 mouse ESCs, 100K cells per well were plated on a gelatin-coated 12-well plate 2 days after puromycin selection. Cells were counted using Trypan Blue exclusion method on the TC20 Cell Counter (Biorad) after 2 or 4 days. Cells were passaged at day 2 to inhibit contact-induced differentiation. For the small-molecule inhibitor experiments, v6.5 mouse ESCs were plated at 100K cells per well on a gelatin-coated 12-well plate. Twenty-four hours later, the cells were treated with either vehicle (DMSO) or varying concentrations of the following inhibitors: I-BRD9, BI-9564, PFI-3, or TP472. 
Cells were counted using the Trypan Blue exclusion method on the TC20 Cell Counter (Biorad) every 48 h, during which the cells were re-plated at the same cell density on a new 12-well plate. Two-tailed t -test was performed to obtain the p values from biological replicates, n = 9 for I-BRD9, n = 6 for BI-9564 and TP472, n = 3 for PFI-3, and n = 3 for each sh Brd9 hairpin. * p < 0.05, ** p < 0.01, *** p < 0.001. Clonogenicity assay Cells were seeded at 100 cells per 9 cm 2 in gelatin-coated six-well plates. Cells were treated continuously from the day of plating with either vehicle (DMSO) or the indicated concentrations of BRD9 inhibitors, I-BRD9 or BI-9564. Media was replaced every day and at day 8, colonies were counted. For detection of AP activity, cells were seeded and treated as indicated above. At day 8, cells were fixed with 4% paraformaldehyde in 1× PBS, rinsed with Tris-buffered saline solution (TBS) with 0.05% Tween-20 and stained using the Alkaline Phosphatase Detection Kit (Millipore Sigma SCR004) per the manufacturer's instructions. Density sedimentation assay v6.5 mouse ESCs or HCT116 cells were lysed in Buffer A (25 mM HEPES, pH 7.6, 5 mM MgCl 2 , 25 mM KCl, 0.05 mM EDTA, 10% glycerol, and 0.1% NP-40) supplemented with 1 mM dithiothreitol (DTT), 1 mM phenylmethylsulfonyl fluoride (PMSF), 1 µM pepstatin, 10 µM leupeptin, and 10 µM chymostatin at 10 7 cells per 5 mL and incubated on ice for 10–15 min. Nuclei were spun down at 900 × g for 5 min then resuspended in Buffer C (10 mM HEPES, pH 7.6, 3 mM MgCl 2 , 100 mM KCl, 0.1 mM EDTA, and 10% glycerol) supplemented with 1 mM DTT, 1 mM PMSF, 1 µM pepstatin, 10 µM leupeptin, and 10 µM chymostatin at 40 × 10 6 cells per 945 µL. Ammonium sulfate was added at a final concentration of 300 mM, incubated on an end-to-end rocker in the cold room for 30 min, then spun down at 446,082 × g for 10 min. 
Nuclear proteins were precipitated by incubation with ammonium sulfate at a final concentration of 0.3 g mL −1 , on ice for 20 min then centrifugation at 446,082 × g for 10 min. The dry pellet was stored at −80 °C until use. Seven hundred micrograms of nuclear proteins were resuspended in 150 µL of HEMG solution (25 mM HEPES, pH 7.9, 0.1 mM EDTA, 12.5 mM MgCl 2 , and 100 mM KCl) supplemented with 1 mM DTT, 1 mM PMSF, 1 µM pepstatin, 10 µM leupeptin, and 10 µM chymostatin then overlaid onto 10 mL of HEMG solution with 10–30% glycerol gradient prepared in a 14 × 89 mm polyallomer centrifuge tube (Beckman). Proteins were subjected to ultra-centrifugation in a SW40 rotor at 4 °C for 16 h at 283,807 × g . The next day, 0.5 mL fractions were collected and analyzed using immunoblotting (IB). List of antibodies ARID1A: Santa Cruz sc-32761 or Millipore 04-080 (IB 1:1000 v/v and ChIP 10 μL for 6.25 × 10 6 cells) BRG1: Santa Cruz sc-17796 (IB 1:1000 v/v) or Abcam 110641 (IP 1:100 m/m, IB 1:2000 v/v, and ChIP 5 μL for 6.25 × 10 6 cells) BAF155: Santa Cruz sc-10756 (IB 1:1000 v/v) or in-house antibody 19 (IP 1:100 m/m) PBRM1: Bethyl A301–591A, 1:2000 v/v GLTSCR1: Santa Cruz sc-515086, 1:1000 v/v GLTSCR1L: Thermo Fisher Sci PA5–56126, 1:500 v/v BRD9: Active Motif 61537 (IP 1:100 m/m, IB 1:2000 v/v, and ChIP 5 μL for 6.25 × 10 6 cells) BRD7: Santa Cruz sc-376180, 1:500 v/v BRD4: Bethyl A301–985A50 (IP 1:100 m/m, IB 1:2000 v/v, and ChIP 5 μL for 6.25 × 10 6 cells) BAF47: Santa Cruz sc-166165 (IP 1:100 m/m and IB 1:1000 v/v) BAF57: Bethyl A300–810A, 1:2000 v/v BAF60A: Santa Cruz sc-135843, 1:1000 v/v BAF53A: Novus Bio NB100–61628, 1:2000 v/v IgG: rabbit, Santa Cruz sc-2027 (IP-mass spec 5 μL for 1 mg of nuclear lysates) or Cell Signaling 2729S (IP 1:100 m/m) TBP: Thermo Fisher MA1–189, 1:2000 v/v OCT4: Santa Cruz sc-5279, 1:1000 v/v NANOG: Abcam ab80892, 1:1000 v/v Immunoprecipitation Nuclear lysates were collected from v6.5 mouse ESCs or HCT116 cells following a revised Dignam 
protocol 53 . After cellular swelling in Buffer A (10 mM HEPES, pH 7.9, 1.5 mM MgCl 2 , and 10 mM KCl) supplemented with 1 mM DTT, 1 mM PMSF, 1 µM pepstatin, 10 µM leupeptin, and 10 µM chymostatin, cells were lysed by homogenization using a 21-gauge needle with six to eight strokes. If lysis remained incomplete, cells were treated with 0.05–0.1% NP-40 for 10 min on ice prior to nuclei collection. Nuclei were spun down at 900 × g for 5 min then resuspended in Buffer C (20 mM HEPES, pH 7.9, 20% glycerol, 420 mM NaCl, 1.5 mM MgCl 2 , and 0.2 mM EDTA) supplemented with 1 mM DTT, 1 mM PMSF, 1 µM pepstatin, 10 µM leupeptin, and 10 µM chymostatin. After 30 min of end-to-end rotation in the cold room, the sample was clarified at 21,100 × g for 10 min. Supernatant was collected and flash-frozen in liquid nitrogen, if necessary. Prior to the IP, the nuclear lysates were diluted with two-thirds of the original volume of 20 mM HEPES, pH 7.9, and 0.3% NP-40 to bring down the NaCl concentration. 200–300 µg of nuclear lysates were used per IP with antibodies against BRG1, BRD9, or BAF47 overnight at 4 °C. Precipitated proteins were bound to 50:50 Protein A, Protein G Dynabeads (Invitrogen) for 1–2 h and washed with either RIPA (50 mM Tris, pH 8, 150 mM NaCl, 1% NP-40, 0.5% sodium deoxycholate, and 0.1% SDS) for BRD9-IP or Wash Buffer (50 mM Tris, pH 8, 150 mM NaCl, 1 mM EDTA, 10% glycerol, and 0.5% Triton X-100) for BRG1- and BAF47-IPs. Proteins were eluted in SDS-polyacrylamide gel electrophoresis (SDS-PAGE) loading solution with boiling and analyzed by IB. For IP with BRD9 with or without I-BRD9, nuclear lysates were split into equal volumes and incubated with or without 10 µM I-BRD9 (diluted fresh into binding buffer) for 30 min prior to addition of antibody. For GLTSCR1/GLTSCR1L/BAF60A blotting, the IP was washed with 50 mM Tris, pH 8, 150 mM NaCl, 1 mM EDTA, 10% glycerol, and 0.5% Triton X-100. 
For BRD4 blotting, the IP was washed with 50 mM Tris, pH 8, 150 mM NaCl, and 0.1% NP-40. I-BRD9-treated samples were washed with buffers that contained 20 µM I-BRD9. For BRG1 and BAF155 depletion assay, nuclear lysates were subjected to four rounds of incubation with the respective antibodies and Protein A + G dynabeads, minimum of 3 h to overnight at 4 °C. The supernatant flowthrough from each IP was collected and analyzed by IB. Urea-based denaturation assay v6.5 mouse ESC nuclear lysates were collected as described above in the Immunoprecipitation section. Two hundred micrograms of the nuclear lysates were subjected to varying concentrations of urea (0.25–5 M) in 25 mM Tris, pH 8, 150 mM NaCl, 0.1% NP-40, and 1 mM DTT for 15 min at room temperature (RT) prior to performing the IP with an antibody against BRD9. The precipitated proteins were washed and eluted as described above and analyzed by IB. IB assay Protein samples were run on 4–12% Bis-Tris gels (Life Technologies). After primary antibody incubation, blots were probed with 1:20,000 v/v dilution of either fluorescently labeled secondary antibodies (Life Technologies #A21058, Invitrogen #SA535571) in 2% bovine serum albumin in PBST or horseradish peroxidase-conjugated anti-rabbit secondary antibody (Veriblot Abcam #131366) in 5% non-fat milk in TBST for an hour at RT. Fluorescent images were developed using Odyssey. Veriblot-probed blots were treated with enhanced chemiluminescence substrate (Biorad #170–5060) for 5 min then developed on film. Original scans of all blots are included as a Source Data File. Cellular fractionation Fractionation of v6.5 ESCs treated with either DMSO or I-BRD9 at 3 or 10 µM for 24 h was done according to published protocol 54 . Briefly, 20 × 10 6 cells were lysed in Buffer 1 (10 mM HEPES, pH 7.9, 10 mM KCl, 1.5 mM MgCl2, 0.34 M sucrose, 10% glycerol, 1 mM DTT, protease inhibitors, and 0.1% Triton X-100). 
After 8 min on ice, nuclei were harvested by centrifugation at 1,300 × g for 5 min then resuspended in Buffer 2 (3 mM EDTA, 0.2 mM EGTA, 1 mM DTT, and protease inhibitors). Supernatant 1 is the cytosolic fraction. Samples were spun at 20,000 × g to isolate chromatin fraction (pellet), which was subsequently resuspended in 100 µL of 1× SDS-PAGE loading dye in TBS + 100 mM DTT and incubated at 70 °C for 10 min. Supernatant 2 is the nuclear soluble fraction. For loading onto SDS-PAGE gels, sample viscosity was reduced by dilution. IP-mass spectrometry Rabbit polyclonal IgG and BRD9-specific antibodies were crosslinked to Dynabeads (Invitrogen) using bis(sulfosuccinimidyl) suberate (BS3). Briefly, Dynabeads were blocked by incubating with 10 µg µL −1 sheared, salmon-sperm DNA in wash buffer (WB) (0.1 M NaPO4, pH 8.2, and 0.1% Tween-20) then incubated with antibody at RT for 15 min. After two washes in conjugation buffer (20 mM NaPO4, pH 8.2, and 150 mM NaCl), the antibody-beads complexes were incubated with 5 mM BS3 for 30 min at RT. Crosslinking was quenched with Tris-HCl, pH 7.4, and the antibody-beads complexes were washed with conjugation buffer and equilibrated with IP buffer (20 mM Tris, pH 8, 150 mM NaCl, and 0.1% NP-40). IP was performed from 1 mg of v6.5 mouse ESC nuclear lysates with either rabbit IgG or BRD9-specific antibody. Precipitated proteins were treated with Micrococcal nuclease S7 for 15 min, washed with RIPA buffer, and eluted in 20 mM Tris, pH 8, 150 mM NaCl, 1× SDS-PAGE loading dye, and 1 mM DTT with boiling. Samples were precipitated by methanol/chloroform. Dried pellets were dissolved in 8 M urea/100 mM TEAB, pH 8.5. Proteins were reduced with 5 mM tris(2-carboxyethyl)phosphine hydrochloride (Sigma-Aldrich) and alkylated with 10 mM chloroacetamide (Sigma-Aldrich). Proteins were digested overnight at 37 °C in 2 M urea/100 mM TEAB, pH 8.5, with trypsin (Promega). Digestion was quenched with formic acid, 5% final concentration. 
The digested samples were analyzed on a Fusion Orbitrap tribrid mass spectrometer (Thermo) in a data-dependent mode. Protein and peptide identification was done with PatternLab for Proteomics 55 . Tandem mass spectra were searched with Comet 56 against a mouse UniProt database. The search space included all fully tryptic and half-tryptic peptide candidates. Carbamidomethylation on cysteine was considered as a static modification. Data were searched with 40 ppm precursor ion tolerance. Identified proteins were filtered using SEPro 55 and utilized a target-decoy database search strategy to control the false discovery rate to 1% at the protein level 57 . RNA-Seq sample preparation v6.5 mouse ESCs were transduced in suspension with either pooled sh Brd9 or shcontrol lentivirus for 1–2 h then plated onto MEF feeders for incubation with virus overnight. RNA was collected 4 days post puromycin (1 µg mL −1 ) selection on puro-resistant MEF feeders. Arid1a f/f :CreERT2 ESCs were cultured on gelatin-coated dishes, treated with either ethanol or 1 µM tamoxifen then passaged 24 h post treatment. Forty-eight hours after passage, RNA was collected. v6.5 mouse ESCs treated with either DMSO or 10 µM I-BRD9 were cultured on MEF feeders then passaged onto gelatin-coated dishes 24 h prior to RNA collection. RNA from 1–3 × 10 6 cells was extracted and purified with the Zymo Research Quick-RNA miniprep kit according to the manufacturer’s instructions. RNA-Seq libraries were prepared using Illumina TruSeq Stranded mRNA kit following the manufacturer’s instructions with 5 µg of input RNA. Quantification of gene expression Seventy-two hours after v6.5 mouse ESCs were treated with DMSO, 10 µM I-BRD9, or 500 nM JQ1, RNA samples were isolated using Quick-RNA Miniprep Kit (Zymo Research). cDNA synthesis was performed with 2 µg of RNA using SuperScript III and oligo-dT primer in 20 µL reaction volume per the manufacturer’s protocol (Invitrogen #18080–051). 
Two microliters of 1:50 diluted cDNA samples (in water) were used per reaction. Quantitative real-time PCR analysis was performed in technical duplicates using CFX Real Time System (Biorad) with iTaq Universal SYBR Green Supermix (Biorad #64163963). Gapdh was used as the endogenous control. The sequences of primers used are listed in Supplementary Table 2 . ChIP-Seq sample preparation Approximately 20 × 10 6 v6.5 mouse ESCs cultured on gelatin and treated with DMSO or 3 µM I-BRD9 for 6 or 24 h or 500 nM JQ1 for 24 h were collected and crosslinked first in 3 mM disuccinimidyl glutarate in 1× PBS then in 1% formaldehyde. After quenching the excess formaldehyde with 125 mM glycine, the fixed cells were washed, pelleted, and flash-frozen. Upon thawing, the cells were resuspended in lysis solution (50 mM HEPES-KOH, pH 8, 140 mM NaCl, 1 mM EDTA, 10% glycerol, 0.5% NP-40, and 0.25% Triton X-100) and incubated on ice for 10 min. The isolated nuclei were washed with wash solution (10 mM Tris-HCl, pH 8, 1 mM EDTA, 0.5 mM EGTA, and 200 mM NaCl) and shearing buffer (0.1% SDS, 1 mM EDTA, and 10 mM Tris-HCl, pH 8) then sheared in a Covaris E229 sonicator for 10–20 min to generate DNA fragments between ~200 and 1000 base pairs (bp). After clarification of insoluble material by centrifugation, the chromatin was immunoprecipitated overnight at 4 °C with antibodies against BRG1, BRD9, and ARID1A bound to Protein A + G Dynabeads (Invitrogen) in ChIP buffer (50 mM HEPES-KOH, pH 7.5, 300 mM NaCl, 1 mM EDTA, 1% Triton X-100, 0.1% DOC, and 0.1% SDS). Antibody-bound DNA was washed and treated with Proteinase K and RNase A and crosslinking was reversed by incubation at 55 °C for 2.5 h. Purified ChIP DNA was used for library generation (NuGen Ovation Ultralow Library System V2) according to the manufacturer's instructions for subsequent sequencing. RNA-Seq analysis Single-end 50 bp reads were aligned to mm10 using STAR alignment tool (V2.5) 58 . 
RNA expression was quantified as raw integer counts using analyzeRepeats.pl in HOMER using the following parameters: -strand both -count exons -condenseGenes -noadj. To identify DEGs, we performed getDiffExpression.pl in HOMER, which uses the DESeq2 R package to calculate the biological variation within replicates. Cut-offs were set at log2 FC = 0.585 and FDR at 0.05 (Benjamini-Hochberg). For RNA expression of the nearest annotated gene for sites that lose BRD9 ChIP binding with I-BRD9, ChIP peaks were annotated to the closest transcription start site (TSS) and the associated log2 fold change (I-BRD9/DMSO) was determined. GO analysis GO analysis was performed on the list of 929 I-BRD9 DEGs on the GSEA website (GSEA homepage, [ ], 2004–2017). Gene set enrichment analysis GSEA software was used to perform the analyses with the following parameters: number of permutations = 1000; enrichment statistic = weighted; and metric for ranking of genes = difference of classes (input RNA-Seq data were log-transformed). ChIP-Seq analysis Single-end 50 bp reads were aligned to mm10 using STAR alignment tool (V2.5) 58 . ChIP-Seq peaks were called using findPeaks within HOMER using default parameters for histone (-style histone) or TF (-style factor). Peaks were called when enriched greater than twofold over input and greater than fourfold over local tag counts, with FDR 0.001 (Benjamini-Hochberg). For histone ChIP, peaks within a 1000 bp range were stitched together to form regions. ChIP-Seq peaks or regions were annotated by mapping to the nearest TSS using the annotatePeaks.pl command. Differential ChIP peaks were found by merging peaks from control and experiment groups and called using getDiffExpression.pl with fold change ≥ 1.5 or <−1.5, Poisson p value < 0.0001. For peak calling with replicate samples, we used the getDifferentialPeaksReplicates.pl program, with -style factor and default parameters for FC and Poisson p value. 
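For intuition, the differential-peak criterion above (fold change ≥ 1.5 with Poisson p value < 0.0001) can be sketched in a few lines of Python. This is a simplified reimplementation for illustration only, not HOMER's actual code; the pseudocount and the use of raw, unnormalized tag counts are assumptions:

```python
import math

def poisson_cdf(k, lam):
    """P(X <= k) for X ~ Poisson(lam), by summing the PMF term by term."""
    term = math.exp(-lam)  # P(X = 0)
    total = 0.0
    for i in range(k + 1):
        total += term
        term *= lam / (i + 1)  # recurrence: pmf(i+1) = pmf(i) * lam / (i+1)
    return min(total, 1.0)

def is_differential(tags_control, tags_treated, fc_cutoff=1.5, p_cutoff=1e-4):
    """Call a peak differential when tag counts change by >= fc_cutoff and the
    change is one-sided Poisson-significant at p < p_cutoff (illustrative)."""
    lam = tags_control + 1  # null rate from control tags; +1 pseudocount
    fc = (tags_treated + 1) / lam
    if fc >= 1.0:
        p = 1.0 - poisson_cdf(tags_treated - 1, lam)  # P(X >= treated)
    else:
        p = poisson_cdf(tags_treated, lam)            # P(X <= treated)
    return (fc >= fc_cutoff or fc <= 1.0 / fc_cutoff) and p < p_cutoff

# a peak dropping from 400 to 150 tags passes both cutoffs; 100 -> 110 does not
print(is_differential(400, 150), is_differential(100, 110))  # → True False
```

A small count change can be Poisson-significant without reaching the fold-change cutoff, which is why both filters are applied together.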
Significance of peak overlap was determined by calculating the number of peaks co-occurring across the entire genome using the HOMER mergePeaks program. For enhancer enrichment analysis, we defined the enhancer classes using publicly available mouse ESC ChIP-seq data for Mediator and histone modifications (see Data availability) 27 , 59 , 60 . Enhancers were called by identifying all H3K4me-positive regions that are at least 1 kb away from the nearest TSS or H3K4me3 mark. These were subdivided as active (H3K27ac-positive) or poised (H3K27ac-negative) 60 . We then differentiated the H3K4me-positive and H3K27ac-positive regions into active and super enhancers by ranking the regions by Mediator ChIP-Seq tag density and using the tangent of the curve to call super enhancers 27 . Promoter annotation was performed using the HOMER annotatePeaks program, which by default is −1 kb to +100 bp away from a known TSS. Distal sites were called using the HOMER getDistalPeaks program, which finds intergenic regions but excludes transcription termination sites. Motif analysis Sequences within 200 bp of peak centers were compared to known motifs in the HOMER database using the findMotifsGenome.pl command with the following fragment size and motif length parameters, respectively: -size 200 -len 8. Random GC content-matched genomic regions were used as background (default). Enriched motifs are statistically significant motifs in input over background by a p value of <0.05. p values were calculated using the cumulative binomial distribution. TAD boundary enrichment analysis Hi-C data from mESCs 61 were downloaded from the GEO database (GSE96107) and mapped to the mm10 genome using bwa-mem. Reads were paired manually using an in-house pipeline, and PCR duplicate reads were removed using Picard. TADs were called using the directionality index (DI) method 28 . 
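The directionality index itself is a simple per-bin statistic. A minimal sketch, following the standard formulation of the DI method (the upstream and downstream bin counts below are illustrative):

```python
def directionality_index(upstream, downstream):
    """Directionality index for one Hi-C bin: a signed, chi-squared-like
    statistic comparing upstream (A) and downstream (B) interaction counts
    to their expectation E under no directional bias."""
    A, B = float(upstream), float(downstream)
    if A == B:
        return 0.0  # perfectly balanced bin has no directionality
    E = (A + B) / 2.0  # expected count in each direction under the null
    sign = 1.0 if B > A else -1.0  # downstream bias positive, upstream negative
    return sign * ((A - E) ** 2 / E + (B - E) ** 2 / E)

# strong downstream bias (typical near a domain start) vs. the mirrored case
print(directionality_index(20, 80), directionality_index(80, 20))  # → 36.0 -36.0
```

The magnitude grows with both the imbalance and the read depth, so a hidden Markov model over the per-bin DI values, as described below, is used to call biased states rather than a fixed threshold.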
Briefly, Hi-C interaction matrices were converted to a vector of upstream and downstream interaction frequency bias using a chi-squared-like statistic, termed the DI. The DI values were then used as input for a Gaussian mixture hidden Markov model to identify local states of upstream and downstream bias in interaction frequencies. Domains were called as regions of continuous downstream-biased states, ending when the last in a series of upstream-biased states is reached. The regions between the topological domains are termed TAD boundaries if they are <400 kb, or unorganized chromatin if they are more than 400 kb. Enrichment of ChIP-Seq data over TADs was calculated by partitioning each TAD into 100 bins, and also considering 50 bins upstream and downstream of the domain. The number of peaks per kb per bin was calculated and averaged across all domains in the genome. To compare across samples with different numbers of peaks, the final averaged values were normalized by the number of peaks in each dataset divided by 10,000.
Statistical tests
Statistically significant differences in cell growth assays: two-tailed t tests were performed to calculate p values in GraphPad Prism version 7. Numbers of replicates are provided above, in the Cell growth assay section. Overlap between datasets: p values were calculated using a hypergeometric test of overlap, with the population size being the total number of genes tested (N = 24,538), using an online tool found at . Correlation of RNA-Seq values between datasets: goodness of fit (R²) was analyzed using linear regression in GraphPad Prism version 7.
Data availability
The raw mass spectrometry files for BRD9-interacting proteins from mESCs have been deposited into the ProteomeXchange Consortium via the PRIDE partner repository with the following identifier PXD010670. RNA-seq and ChIP-Seq data that support the findings of this study have been deposited in the Gene Expression Omnibus under the accession code GSE111264.
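The DI itself has a closed form (Dixon et al., ref. 28): for each bin, with A the upstream read count, B the downstream count, and E = (A + B)/2 the expected count under no bias, the statistic is the signed chi-squared-like score sketched below (variable names are ours):

```python
# Directionality index for one genomic bin, following the formula
# DI = sign(B - A) * ((A - E)^2 / E + (B - E)^2 / E), E = (A + B) / 2.

def directionality_index(upstream, downstream):
    a, b = float(upstream), float(downstream)
    if a == b:
        return 0.0  # no bias (also avoids 0/0 when both counts are 0)
    e = (a + b) / 2.0
    sign = 1.0 if b > a else -1.0
    return sign * ((a - e) ** 2 / e + (b - e) ** 2 / e)
```

Strongly positive values (downstream bias) mark domain starts and strongly negative values (upstream bias) mark domain ends; the hidden Markov model then segments these states into domains.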
We also used publicly available sequencing data, which were processed using HOMER v4.8 (Christopher Benner, HOMER, , 2018): histone modifications H3K4me, H3K4me3, and H3K27ac (GEO GSE24165); H3K27me3 (GEO GSM1397343); Mediator (GEO GSE44288); CTCF (GEO GSE30203); Sp5 (GEO GSE72989); KLF4 (GEO GSM288354); OCT4, NANOG, and SOX2 (GEO GSE44286); c-Myc (GEO GSM288356); RNA-Seq Brg1 f/f (GEO GSE87820); RNA-Seq ESC-EpiESC (GEO GSE79796); RNA-Seq JQ1 (GEO GSE88760); RNA-Seq RA-induced differentiation (GSE39522); and Hi-C data (GSE96107). Source data are provided for Figs. 1a–c , 2 , 5a–e , 6a , 7c, d , 8a–c , Supplementary Figures 1 , 2 , 3 and 4b-d as a Source Data file. All other data are available from the corresponding author upon reasonable request. A Reporting Summary for this Article is available as a Supplementary Information file.
Embryonic stem cells (ESCs) are the very definition of being full of potential, given that they can become any type of cell in the body. Once they start down any particular path toward a type of tissue, they lose their unlimited potential. Scientists have been trying to understand why and how this happens in order to create regenerative therapies that can, for example, coax a person's own cells to replace damaged or diseased organs. Scientists from the Salk Institute discovered a new protein complex that keeps the brakes on stem cells, allowing them to maintain their indefinite potential. The new complex, called GBAF and detailed in Nature Communications on December 3, 2018, could provide a future target for regenerative medicine. "This project started as an exploration of embryonic stem cell pluripotency, which is this property that allows ESCs to become all different cell types in the body," says Diana Hargreaves, an assistant professor in Salk's Molecular and Cell Biology Laboratory and the senior author of the paper. "It's very important to know how various networks of genes control pluripotency, so finding a previously unknown protein complex that plays such an important regulatory role was very exciting." Every cell in the body has the same set of DNA, which contains the instructions for making every possible cell type. Teams of large protein complexes (known as chromatin remodelers) activate or silence genes, directing an embryonic stem cell down a particular path. Like a team of contractors planning to renovate a house, these protein complexes contain varying subunits, the combination of which changes the physical shape of DNA and determines which genes can be accessed to direct the cell to become, for example, a lung cell or brain cell. Hargreaves's team wanted to better understand how these subunits come together and how particular subunits might dictate a complex's function. 
So they turned to a protein called BRD9, which was known to associate with the BAF family of chromatin remodelers and was suspected to be a subunit. The team applied a chemical inhibitor of BRD9 to dishes of embryonic stem cells and performed a series of experiments to comprehensively analyze the cells' pluripotency in association with changes in BAF complex activity. The group was surprised to discover that BRD9 acts as a brake on embryonic stem cell development. When BRD9 is working, cells retain their pluripotency, whereas when its activity is inhibited cells start moving on to the next stage of development. Further work to identify which BAF complexes were at work in the cells revealed another surprise: BRD9 was part of an as-yet-unknown BAF complex. "For me, what was most exciting about our study was the fact that we had discovered a new BAF complex in embryonic stem cells," says Jovylyn Gatchalian, a Salk research associate and the paper's first author. Adds Hargreaves, "What we see with this work is that there's biochemical diversity at the level of individual variants of the BAF complex that allows for greater regulatory control. Understanding the complexities of that control is going to be key to any regenerative therapies."
10.1038/s41467-018-07528-9
Biology
Gorilla mobs attacking single individuals suggests new type of behavior for them
Stacy Rosenbaum et al. Observations of severe and lethal coalitionary attacks in wild mountain gorillas, Scientific Reports (2016). DOI: 10.1038/srep37018 Journal information: Scientific Reports
http://dx.doi.org/10.1038/srep37018
https://phys.org/news/2016-11-gorilla-mobs-individuals-behavior.html
Abstract
In humans and chimpanzees, most intraspecific killing occurs during coalitionary intergroup conflict. In the closely related genus Gorilla, such behavior has not been described. We report three cases of multi-male, multi-female wild mountain gorilla (G. beringei) groups attacking extra-group males. The behavior was strikingly similar to reports in chimpanzees, but was never observed in gorillas until after a demographic transition left ~25% of the population living in large social groups with multiple (3+) males. Resource competition is generally considered a motivator of great apes' (including humans) violent intergroup conflict, but mountain gorillas are non-territorial herbivores with low feeding competition. While adult male gorillas have a defensible resource (i.e. females) and nursing/pregnant females are likely motivated to drive off potentially infanticidal intruders, the participation of others (e.g. juveniles, sub-adults, cycling females) is harder to explain. We speculate that the potential for severe group disruption when current alpha males are severely injured or killed may provide sufficient motivation when the costs to participants are low. These observations suggest that the gorilla population's recent increase in multi-male groups facilitated the emergence of such behavior, and indicate that social structure is a key predictor of coalitionary aggression even in the absence of meaningful resource stress.
Introduction
Intergroup coalitionary aggression is rare in the animal kingdom, but has particularly notable evolutionary and social significance in Homo sapiens 1. It is therefore unsurprising that the vertebrate behavioral corollary best approximating human warfare occurs in one of humans' closest extant relatives, the chimpanzee 2, 3.
As is true for humans, rates of such encounters vary widely across sites and social groups 4, but lethal chimpanzee ‘raids’ and other forms of cooperative intergroup attacks have been regularly reported at multiple long-term field sites 2, 5, 6, 7, 8, 9, 10, 11. Between and within species, violent intergroup conflict is most likely to occur when important resources are defensible and demographic power imbalances reduce the cost to individual participants 4, 11, 12, 13, 14. These socioecological conditions help explain differences in relative rates of both human and chimpanzee ‘warfare’ across sites and groups, as well as its near-absence in other close relatives including bonobos 2, 3, 4 and orangutans 15. Ultimate explanations for the evolution of coalitionary aggression include direct benefits via improved resource access 11, 12, 13, 14 or status maintenance/elevation 16, 17, or indirect benefits via kin selection and reciprocal altruism 18, 19. Mountain gorillas (Gorilla beringei) have been continuously observed in the wild for nearly the same length of time as chimpanzees (~50 years 20). Very little coalitionary aggression, either intra- or intergroup, has been reported despite the species' close relationship to humans and chimpanzees 21. Published examples are limited to small intragroup female alliances, male intervention in such alliances, and reports of individual males supporting either other males or females 22, 23, 24, 25. The purported absence of meaningful coalitionary aggression is unsurprising for two reasons. First, the modal group type is one male with multiple females and their offspring, which limits opportunities for coalitions 26. Second, resource defense is often considered an important motivator of such behavior in great apes 3, 4, 11, but unlike chimpanzees and (historically) humans, mountain gorillas are herbivores with an abundant year-round food supply 27, 28.
Far less is known about the behavior of more frugivorous western lowland gorillas ( G. gorilla ), but since they also primarily occur in single-male groups opportunities for male coalitions are limited, and benefits of female coalitions are probably minimal 29 . To date no coalitions have been reported for either sex in the western lowland subspecies. Mountain gorilla groups maintain overlapping home ranges, and social units frequently encounter one another in the forest 30 . Interactions can be risky even for animals that do not actively participate. Sexually selected infanticide is an important source of infant mortality in this population 31 , and intergroup interactions expose young animals to potentially infanticidal males. However, as in many primate species, most intergroup interactions are characterized primarily by chasing, aggressive vocalizations, and sometimes minor wounding, but do not usually end in serious injury and may even involve affiliative behavior 13 . In a typical interaction between gorilla groups or between a group and a solitary adult male, young adult and/or fully adult males engage in repeated bluff displays that may or may not escalate to physical contact 32 , 33 . Females and younger group members typically watch from a distance, though females may use interactions to transfer between social groups, and males will herd them to prevent transfers 32 . Involvement of animals other than young adult and fully adult males is usually limited to vocal aggression, if they participate at all (Karisoke Research Center long-term records, pers. obs.). Gorillas’ morphology (extreme sexual dimorphism 34 , well-developed male weaponry 35 , and small testes relative to body size 36 ) strongly suggests they have a long evolutionary history of male contest competition and a one male, multi-female social structure. 
This was largely supported by demographic and behavioral data collected from the mid-1950s to the early 1990s on the mountain gorilla population living in central Africa's Virunga Massif, one of only two remaining populations in the world. While groups containing two or occasionally three adult males (likely fathers and sons) were reported as far back as the 1950s 37, 38, most mountain gorilla groups contained only one adult male 20, 26, 38. However, starting in the mid-to-late 1990s, the habituated gorilla groups, particularly those monitored by the Dian Fossey Gorilla Fund's Karisoke Research Center (KRC), grew dramatically larger and increasingly multi-male (hereafter defined as groups containing 3+ adult males). In this system either sex can disperse (females can join established groups or solitary adult males to start new groups; males become solitary until acquiring females) or reproduce in their natal group 39, 40. While the modal group type population-wide remained single-male, fewer young adult males dispersed than apparently had done previously 41, 42. As yet, the reason for this purported behavior change remains elusive. Various authors have noted that multi-male groups have advantages for both males and females (for males, better female retention and more reproductive opportunities; for females, lower infant mortality [e.g. refs 31, 41, 43 and 44]). Due to these advantages, the existence of one or a few multi-male groups may create an ‘arms race’ that incentivizes multi-male structure in neighboring groups 45, 46. However, this does not satisfactorily explain why this ‘novel’ (if indeed it is new) social structure did not evolve long ago.
Ecological explanations such as habitat loss, increased population density, and poaching pressure remain unconvincing, based on the demographic shift's timing relative to major habitat disturbances 41 and its uneven distribution across the population (i.e., the well-monitored sectors where the most dramatic changes occurred were not necessarily subjected to more disturbances). Regardless of the cause, the structural changes created groups that, at their extremes, reached 65 individuals, 9 co-resident adult males, and adult male-to-female ratios of nearly 1:1 30, 40, 41. Thus, despite morphological and behavioral evidence suggesting a long history of single-male groups, from the mid-1990s onward a sizable proportion of the gorilla population (~25%; KRC long-term records 42) resided in groups that bore more structural similarity to chimpanzee groups than to harems, but without chimpanzees' fission-fusion dynamics 47. Though there is some evidence of subgrouping (KRC long-term records), these groups maintain a cohesive structure, with members tending to rest, feed, and travel together in visual and/or auditory contact with most other members [refs 20 and 40, pers. obs.]. Like chimpanzees, males living in the same group have easily discernible dominance hierarchies, at least among the top few ranking males 40; females also have dominance hierarchies, but they are markedly weaker 48. After the social structure shift of the 1990s, KRC research and tracking staff observed multi-male, multi-female groups of mountain gorillas in Volcanoes National Park, Rwanda, collectively and violently attack extra-group males in 2004, 2010, and 2013. These attacks were qualitatively and quantitatively different from species-typical mountain gorilla intergroup encounters in their violence, speed, remarkable coordination, and participant demographics.
We base this on the collective experience of the observers, who together have tens of thousands of hours of experience tracking and studying mountain gorillas. All incidents were witnessed during the course of normal daily KRC non-invasive data collection (see below). In the cases described here, all group members of both sexes simultaneously attacked solitary males (two cases) or the male individual in a two-animal group (one case) that interacted with their group. Because this behavior is undocumented in the literature, we describe the 2004 attack (witnessed by SR) in detail, and summarize the other two from reports written by tracking and research staff. Anecdotally, tracking staff reported to SR that they had witnessed similar behavior in the 1990s, again during or after the group structure changes, though we are unaware of any written record.
Site background
Started in 1967 by Dr. Dian Fossey, KRC operates one of the world's longest-running field sites. KRC staff and scientists collect daily demographic, behavioral, and non-invasive biomaterial data (e.g. hormones, health, genetics) on mountain gorillas in habituated groups. Though numbers fluctuate, over the last 20 years KRC has monitored between 75 and 120 individual gorillas in 3 to 12 different social groups at any given point in time. Data collection protocols involve observing known individuals from a distance of at least 7 m. Due to the long site history, extensive life history information is available for most individuals, and staff can conclusively identify many habituated solitary males who dispersed from monitored social groups around sexual maturity. These include the males involved in the encounters described here.
2004 Attack
On October 14, 2004, the solitary adult male Inshuti (Table 1) approached Beetsme group, a mixed-sex group of 26 animals (Tables 2 and 3) that was feeding on bamboo shoots.
Beetsme group males immediately began species-typical aggressive displays that included chest beating, running, and smashing vegetation, but no physical contact was observed. The group began moving, followed by Inshuti, and aggressive behaviors temporarily stopped. Inshuti came within 2 meters, apparently deliberately, of some of the group's males (identifications unknown, though not the alpha male) as they moved, without exchanging aggressive displays. Once, Inshuti put his arms on the ground in a manner that suggested either play solicitation or submission, though it may simply have been an indication of fatigue.
Table 1: Victim information.
Table 2: Demographics of attacking groups.
Table 3: Relatedness* among males, and males and infants, in the attacking groups.
Fifty minutes after initial contact, observers heard loud screams but were unable to identify the screamer(s) due to dense vegetation. Within seconds of the screaming, Inshuti ran away from Beetsme group's primary direction of travel, followed by three unidentified group males. The three males caught Inshuti and held his arms and legs to the ground. The rest of the group ran toward them from multiple directions, since, as they moved, they had dispersed across a wide area. Based on the sound of crashing vegetation and the timing of their appearance, observers inferred that all of the group members began running toward the victim immediately upon hearing the screaming. The group members surrounded Inshuti; it was difficult to distinguish him under the other gorillas. The alpha male's actions were the most violent of the behaviors visible to observers. While many gorillas were pulling out chunks of Inshuti's hair, biting, kicking, and hitting him, the alpha male repeatedly sank his teeth into his body and shook his head back and forth, similar to a canid shaking prey.
Inshuti attempted to escape and moved ~20 meters before being dragged down and held under the group again. Most or possibly all the attackers screamed (either an aggressive or fear vocalization in this species 37 , 49 ) and “pig grunted” (a more mild form of vocal aggression 49 ) throughout. Because the group was so large, not all individuals were able to contact Inshuti simultaneously. Those who could not reach him milled around in physical contact with those who were touching him, and appeared to be trying to reach through the other attackers to touch him. Two young infants clung to their mothers’ backs throughout, but the other juveniles and infants actively participated. Approximately 3–4 minutes after the attack began, it abruptly stopped. It was unclear to observers why, but all attackers stopped within seconds of each other. Inshuti fled into nearby vegetation. Led by the second-ranked male, the group walked away from the attack site nearly in single file. This allowed us to count the participants. The count was one short (25) of the whole group, and we were unable to establish which animal was missing. We are uncertain whether it did not participate or was missed as they moved away, but we believe it is more likely we failed to count it. Visibility as the animals left the site was very good and no animals re-appeared before all 26 animals were counted ~10 minutes later. They retreated silently, and after a short, fast walk of a few hundred meters, the group started feeding. There was no intragroup aggression or aggression toward observers, and they appeared quite calm. Four Beetsme group animals suffered minor injuries. The alpha male had a tiny cut on his left eyelid, and a subordinate adult male had two small cuts, on his right nostril and left shoulder. One adult female had a large but superficial wound on her back. 
A second adult female also had a superficial cut on her back, though this may have been the result of intragroup aggression that occurred early in the interaction, before the attack. There was blood, hair, and diarrhea on the ground at both the original site and the spot where the group attacked their victim for the second time. Inshuti survived despite extensive injuries (Table 1, Fig. 1).
2010 Attack
On June 1, 2010, tracking and research staff collecting data on a multi-male, multi-female group of 42 gorillas (Pablo group; Tables 2 and 3) heard screaming. The observers followed the gorillas in their view toward the screams, and encountered an unidentified solitary male surrounded by the rest of the group members (Table 1). All of the Pablo group animals participated in attacking the solitary male; documented behaviors included biting, kicking, hitting, and dragging. The entire attack lasted 18 minutes. In this time, there were six discrete attack periods interspersed with pauses during which Pablo's group remained gathered around the victim (Fig. 2). Visibility was very poor due to the large number of animals, but the victim appeared to be trying to escape throughout. He eventually extricated himself from the center of the group and ran. It was unclear if the group let him go, or if he escaped. Pablo group's second-ranked male followed him, and continued aggressive bluff displays at the solitary male for ~30 minutes before returning to his social group. Tracking staff followed the solitary male and found him not moving, breathing heavily, and bleeding profusely from multiple wounds. He was not seen alive again. On June 13th, staff found the body of an adult male in the same area of the forest. It was conclusively identified by field and veterinary staff as Bikwi, a 19-year-old male who had dispersed from group Susa (Table 1).
A necropsy revealed peri-mortem injuries consistent with the attack (Table 1), supporting our supposition that the body belonged to the attack victim.
Figure 2: Pablo group members gather during the 2010 attack; the victim was in the center of the surrounding animals. Photograph courtesy of the Dian Fossey Gorilla Fund International.
2013 Attack
On May 18, 2013, tracking staff contacted group Titus, a mixed-sex group of nine animals (Tables 2 and 3), and found them with a two-member group consisting of adult male Inshuti (Table 1; also the victim of the 2004 Beetsme group attack) and an adult female, Shangaza. The Titus group animals exchanged species-typical aggressive displays with and screamed at Inshuti. The female Shangaza, whose adult son was a member of Titus group, “hooted” (a contact vocalization 49) repeatedly during the exchange. An hour after observers arrived, Titus group's alpha male, followed by all eight of his group members, ran after Inshuti and held him to the ground. All of the Titus group members bit and hit him repeatedly. Shangaza remained at the initial interaction site, did not participate, and was not attacked. Approximately one minute after the attack started, Inshuti escaped and rejoined Shangaza, and Titus' group moved out of view of the observers. The male members of Titus' group had participated in the 2004 Beetsme group attack against Inshuti nine years prior, as 3-, 4-, 5-, and 12-year-olds. None of their group's females or immatures were group members during the 2004 attack. Shangaza, who in 2004 was a member of Beetsme group but had dispersed and joined Inshuti during the intervening years, was herself an attacker in 2004. Despite his injuries (Table 1), Inshuti once again survived. Because the same male was a victim twice, we cannot rule out the possibility that aberrant behavior by this individual encouraged the groups' behavior.
Observers who have monitored Inshuti over the course of his life (including the authors) consider him more aggressive than many other habituated male gorillas, but there was nothing outwardly remarkable about his behavior toward other gorillas, either in general or on the days of the attacks. His social bonds, first with members of his natal group and later with females and infants in his own group, were apparently normal.
Discussion
Encounters between gorillas in different social groups are a regular feature of mountain gorillas' lives 30, 32. When they escalate to contact aggression, most involve only a small, predictable demographic, i.e. adult males, and the great majority end with only minor injuries. We are unsure precisely what prompted the events described here. Whatever their origin, these attacks are remarkable for several reasons. First, the timing of these attacks suggests that multi-male, multi-female social structures are a prerequisite for such behavior. Despite the extended observation history on the population, this type of aggression was not observed until after a remarkable demographic shift that left many mountain gorillas living in social structures that both humans and chimpanzees share. A dominant theory for explaining similar behavior in chimpanzees, the imbalance-of-power hypothesis, predicts that attacks will only occur when victim(s) are outnumbered and the risk to individual attackers is low 50. The demographics of the incidents were highly consistent with that prediction, facilitated by large group size and multiple adult males, who are far more powerful fighters than females due to their size and large canines. The costs to individual attackers would likely have been too high for the behavior to evolve in a population where groups contained far fewer males. Once groups were free from this constraint, coalitionary attacks occurred.
However, it is important to note that in one case some of the attacking animals did sustain injuries, suggesting that the risk is not zero even when the victim is greatly outnumbered. To our knowledge, injuries to attackers have not been reported in chimpanzees. Second, they confirm that food resource competition is not necessary for coalitionary violence to occur in great apes. Theory predicts conflict when coveted resources are defensible 4 ; attacks on neighbors have direct benefits for individuals and groups by maintaining or increasing range size, and therefore access to preferred feeding sites 11 . Mountain gorillas are herbivores that eat at least 55 species of plants [ref. 51 , KRC long-term records], many of which are available year-round and few of which are monopolizable 27 , 28 . There are probably few wild primate populations on earth with less food resource stress than Virunga mountain gorillas and solitary males are in no way a threat to a group’s food supply, so this is not a convincing explanation for coalitionary aggression in mountain gorillas. The gorillas’ behavior is also consistent with the intergroup dominance hypothesis, which posits that intergroup dominance promotes fitness through a variety of mechanisms 13 . Male gorillas do have a defensible resource—i.e., females—and pregnant or nursing females presumably have strong incentive to drive off potentially infanticidal intruders 31 . Solitary males can be vicious fighters, and are dangerous to both infants and to other adult males. In the last three years, three alpha males in mixed-sex groups monitored by KRC died as a result of interactions with solitary males (KRC long-term records). Solitary males are known to “stalk” mixed-sex groups for extended periods of time as they attempt to obtain females to start their own groups 52 , and encounters with them are more likely to result in aggression than encounters with other groups 33 . 
For males plus nursing females and their infants, extra-group males are clearly dangerous; driving them permanently away or killing them has obvious direct benefits for these classes of individuals. However, females who were apparently neither pregnant nor nursing, sub-adults, and juveniles also participated, and the benefits for them are less obvious. It is unclear what might have motivated their participation. If anything, cycling adult females may benefit from interactions with other males since they are a chance to evaluate potential mates. One possibility is that the potential for severe social group disruption or disintegration, which can occur when an alpha male is seriously injured or killed, creates sufficient motivation for these classes of animals to participate. Being forced to find and join a new social group (for females) or disperse before full physical maturity (for males) likely carries considerable personal risk. Alternatively, selection for participation in coalitionary aggression against outside males may be so strong for adult males and pregnant/lactating females that the associated proximate mechanisms have carry-over effects that generalize to other age or reproductive status categories. In other words, the possible net benefits of interactions with outgroup males for cycling females are not big enough to select for more condition-dependent mechanisms that motivate coalitionary aggression when pregnant or lactating. Kin selection is believed to be an important proximate mechanism underlying similar behavior in male chimpanzees, and it is important to note that overall relatedness in this small, closed population (n = ~480 individuals 42 ) is quite high. The mean relatedness coefficient of the participating males in each group was r = 0.25 ( Table 3 ). However, virtually all were related paternally ( Table 3 ), and there is currently no evidence that mountain gorillas discriminate paternal kin 53 . 
Three of the attackers in the 2004 case, plus one in 2010, were the victim’s maternal nephews, though they had never lived in the same group and thus may not have identified one another as kin (KRC long-term records). Given these facts, plus the whole-group participation (some females had few or no close relatives co-resident), kin selection alone seems an unsatisfying explanation. Reciprocity is also an inadequate explanation. All group members participated so no subset incurred most of the costs, and chances for any sort of in-kind repayment are clearly limited. However, it is worth noting that their behavior is consistent with recent experimental work in humans indicating that perceived threat to the in-group causes not only retaliatory, but also preemptive aggression 54 . Though these cases bore striking resemblance to reports of coalitionary violence in chimpanzees, there were two noteworthy differences. First, in both humans and non-human primates coalitionary violence generally involves one sex (e.g. refs 7 , 8 , 9 , 55 and 56 , reviewed in ref. 11 ) and immature animals are most often victims rather than attackers 2 . To our knowledge there have been no reports of the whole-group participation observed here in chimpanzees, though its occurrence is logistically limited by chimpanzees’ fission-fusion social structure. Chimpanzees, too, are regularly found in mixed-sex, multi-age parties, yet the great majority of observed intergroup violence is adult males attacking other adult males (though see refs 9 and 10 ). The same is true in humans; most cases of intergroup violence involve primarily or exclusively adult males despite nearly universal mixed-sex and age residence patterns 55 . Second, humans and chimpanzees often actively seek out victims. 
Male chimpanzees will patrol territory boundaries silently and appear to search for lone victims 56 , and humans spend considerable amounts of time planning attacks against neighbors in both industrialized and small-scale societies [e.g. refs 57 and 58 ]. There was no evidence of such behavior in the gorillas. In the first attack the victim approached the group, though we cannot be certain whether the victim or the attackers approached in the other two cases, as the initial contact was unobserved. Both the whole-group participation and the lack of victim seeking are characteristics of spontaneous group violence in humans (i.e. communal rioting or mob violence 59 ). Human mobs are sometimes characterized by participant demographics that do not fit expected patterns, including individuals who have little or nothing to gain 60 . Nonetheless, the gorillas’ behavior appeared remarkably coordinated, clearly had direct benefits for some individuals, and bore important hallmarks of classic descriptions of coalitionary intergroup aggression in chimpanzees. While group attacks on neighbors are clearly rare events in G. beringei , it is unclear how the rates might compare to (for example) lethal coalitionary aggression among chimpanzees in Gombe National Park, which contains the world’s longest-studied chimpanzee population. In the 1960s through the early 1990s, KRC staff lived in the forest and conducted all-day group follows, making it less likely that coalitionary attacks occurred but were simply missed. From 1995 on, staff no longer lived in the forest and were limited to ~6 hours per day with the animals, which increases the possibility of missing rare events. Furthermore, deaths of solitary males are nearly impossible to detect. Observation and reporting of rare but potentially evolutionarily significant behaviors is yet another important reminder of the value of long-term monitoring of animal populations with slow life histories 61 , 62 . 
As data years mount at long-term field sites, new and surprising behaviors (for another recent example, see ref. 15 ) continue to refine our understanding of the plasticity of primate behavior and the complex origins of our own remarkable sociality. Additional Information How to cite this article : Rosenbaum, S. et al . Observations of severe and lethal coalitionary attacks in wild mountain gorillas. Sci. Rep. 6 , 37018; doi: 10.1038/srep37018 (2016). Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
(Phys.org)—A trio of researchers studying gorillas at the Karisoke Research Center in Rwanda has reported on a developing trend observed in mountain gorillas—mobs attacking single individuals for unknown reasons. In their paper published in Scientific Reports, Stacy Rosenbaum, Veronica Vecellio and Tara Stoinski describe three mob attacks that have been observed by several human witnesses over the past decade and offer some possible explanations. For most of the modern study of gorillas in their native environment, the consensus has been that they are generally docile with one another—there have been observations of males fighting, sometimes to the death, but for the most part, the life of the gorilla was thought to be a mostly peaceful one. But now, it appears that the peace can be disturbed by the occasional mob attack on a single individual or, as the researchers note, two individuals. In the first witnessed attack, back in 2004, Rosenbaum was actually one of the witnesses. She describes the incident as arising seemingly out of nowhere. A single male the team had named Inshuti approached a group of gorillas the researchers had named the Beetsme group. After some initial rebuffs, the lone male continued to seek acceptance. Then one of the gorillas screamed—the witnesses could not say if it was Inshuti or a member of the group. That was followed by three adult males chasing Inshuti until they caught him and pinned him to the ground. Soon thereafter, the rest of the Beetsme group arrived and all of them (including females and youngsters) participated in causing harm to Inshuti—from pulling hair to scratching and kicking. The leader of the Beetsme group sank his teeth into the gorilla's flesh and shook it like a fighting dog. The mob attack continued for just a few minutes, then stopped as quickly as it had started. The attackers walked away and Inshuti slunk into the underbrush to attend to his wounds. 
The researchers report on two other similar incidents, one of which included an attack on Inshuti and another male. They note that mob attacks are common among other apes, including chimps, as they are in humans, but until these recent incidents gorillas were thought to be gentle giants, unlikely to engage in such violence. The team admits they do not know why the gorillas have begun acting like mobs at times but note that it has occurred during a time when the mountain gorilla population has grown due to conservation efforts.
10.1038/srep37018
Biology
Modulating biomolecular condensates: A novel approach to drug discovery
Diana M. Mitrea et al, Modulating biomolecular condensates: a novel approach to drug discovery, Nature Reviews Drug Discovery (2022). DOI: 10.1038/s41573-022-00505-4 Journal information: Nature Reviews Drug Discovery
https://dx.doi.org/10.1038/s41573-022-00505-4
https://phys.org/news/2022-08-modulating-biomolecular-condensates-approach-drug.html
Abstract In the past decade, membraneless assemblies known as biomolecular condensates have been reported to play key roles in many cellular functions by compartmentalizing specific proteins and nucleic acids in subcellular environments with distinct properties. Furthermore, growing evidence supports the view that biomolecular condensates often form by phase separation, in which a single-phase system demixes into a two-phase system consisting of a condensed phase and a dilute phase of particular biomolecules. Emerging understanding of condensate function in normal and aberrant cellular states, and of the mechanisms of condensate formation, is providing new insights into human disease and revealing novel therapeutic opportunities. In this Perspective, we propose that such insights could enable a previously unexplored drug discovery approach based on identifying condensate-modifying therapeutics (c-mods), and we discuss the strategies, techniques and challenges involved. Introduction For more than a century, scientists have speculated on the structure and organization of the protoplasm 1 , 2 , 3 , 4 . In addition to membrane-bound organelles such as the nucleus and mitochondria, microscopists also observed organelles lacking membranes. For instance, the nucleolus was first described in the 1830s 5 . Additional membraneless organelles were identified at the turn of the twentieth century 6 , 7 , 8 , 9 , 10 , and many others have since been reported. Although the functions of these assemblies — now known as biomolecular condensates — have been described in some cases (Table 1 ), the mechanisms that control their formation, structure, dynamics, composition and activity are only now being studied intensively (Fig. 1 ). Table 1 Functional roles of biomolecular condensates Full size table Fig. 1: Examples of complex composition of biomolecular condensates. The molecular community defines the identity of a biomolecular condensate. 
Examples of three biomolecular condensates and selected components. The centrosome is the central organizer of microtubules and is involved in regulation of mitosis; the image shows a mitotic SiHa cell, with the centrosome structural protein CDK5RAP2 stained green, the nucleus blue and microtubules red. The nucleolus is the site of ribosome biogenesis; the image shows U2OS cells with the nucleolar scaffold protein NPM1 stained green and microtubules red. Stress granules; the image shows stress granules in HeLa cells, visualized via G3BP1 immunofluorescence. lncRNA, long non-coding RNA; rRNA, ribosomal RNA; snoRNA, small nucleolar RNA. The centrosome image is reproduced from , and the nucleolus image is reproduced from . Full size image Experimental evidence to support the hypothesis that biomolecular condensates form by aqueous phase separation was first generated by Cliff Brangwynne, Tony Hyman, Frank Jülicher and colleagues. They demonstrated that P granules — protein–RNA assemblies found in Caenorhabditis elegans — exhibit liquid-like behaviour in cells, including dripping, wetting and fusion, implicating phase separation in their formation 11 . Subsequent work from Brangwynne and Hyman showed that nucleoli in Xenopus oocytes also behave as liquids, exhibiting rapid ATP-dependent dynamics 12 . Publications from other laboratories soon provided additional support for the concept of biological phase separation. The Rosen laboratory was first to recognize the role and importance of weak multivalent interactions in driving phase separation and speculated that cellular organization and regulation across all of biology might be critically dependent upon such phase transitions 13 . Work from the McKnight laboratory showed that proteins containing low-complexity, intrinsically disordered regions (IDRs) phase separate into hydrogels capable of partitioning ribonucleoprotein (RNP) granule components 14 , 15 . 
Hanazawa, Yonetani and Sugimoto revealed that just two condensate components can reconstitute P granules in cells, supporting the idea that some proteins are necessary and sufficient to promote condensate assembly 16 . This early work was captured in a series of reviews 17 , 18 , 19 , 20 , 21 . The biomolecular condensates field reached an inflexion point in 2015, with multiple publications reporting breakthrough findings (reviewed in ref. 22 ); since then, research in the field has grown rapidly. Several lines of evidence emerging from such research support the relevance of biomolecular condensates for drug discovery. There are a growing number of examples of ‘aberrant behaviours’ of condensates that are associated with disease states, including neurodegeneration 23 , cancer (for example, prostate cancer) 24 , viral infections (for example, respiratory syncytial virus (RSV)) 25 and cardiac disease 26 , 27 (Table 2 ). Proteins of high therapeutic interest in neurodegenerative disease such as TDP43 and FUS have been identified inside condensates 28 , 29 . The anticancer drugs cisplatin and tamoxifen can partition into transcriptional condensates, altering their composition in cultured cells and in vitro reconstituted model condensates 30 ; initial reports show that small molecules can alter condensate behaviours with functional consequences in cell-based studies 31 , 32 . Finally, the tools to study condensates are rapidly maturing, increasing their applicability to efforts to identify condensate-modifying therapeutics ( c-mods ). Table 2 Dysregulation of biomolecular condensates in disease Full size table Several components are crucial to the basis of a c-mod discovery campaign. First, observed associations between condensate characteristics and diseases should be rigorously validated, with the aim of identifying associations that are causal. 
Furthermore, it should be established that molecular and mechanistic aspects of biomolecular condensates identified through in vitro studies are relevant in vivo. Second, assays that reliably reflect disease-relevant aspects of the biology of condensates need to be developed. In contrast to classical drugs that typically target unique macromolecules, the target for c-mods is a community of molecules engaged in an extended network. Major challenges include identifying the biomolecule(s) required for condensate assembly, as well as understanding the thermodynamics of the extended network and the kinetics of processes that disrupt the equilibrium. As discussed in detail in refs. 33 , 34 , caution needs to be exercised to not over-interpret qualitative data and results obtained from simplified (for example, in vitro reconstitution) or artificial (for example, overexpression) systems. Nevertheless, these model systems can be leveraged to obtain insights into the structural ensemble and mesoscale organization of a subset of macromolecules inside a condensate-like milieu (reviewed in ref. 35 ) and the effects of putative c-mods on the represented interactions, as we discuss later in the Perspective. Given the infancy of the field, the aspects of the rationale and strategies for pursuing c-mod discovery discussed are built on disparate pieces of evidence from studies that were not necessarily focused on drug discovery, or from drug discovery studies that were not searching for c-mods. However, we believe that there is a substantial amount of data from such studies that support the feasibility of targeting biomolecular condensates and provide a foundation for a guide to future c-mod discovery and assay development when interpreted with a condensate perspective. With the goal of contributing to such a guide, in this Perspective we first discuss how understanding the properties and functions of condensates may enable a novel approach to drug discovery. 
After briefly describing the physics and structural basis for the formation of biomolecular condensates, we outline the diverse roles that condensates play in cellular function and some of the evidence for the associations of aberrant condensates with disease. We then describe approaches and technologies for the identification and characterization of drug candidates that can modulate or otherwise exploit disease-relevant condensates and consider the challenges that need to be addressed for them to be effective. Overview of condensate biology Biomolecular condensates have been linked to many cellular processes, including sensing and responding to stress, compartmentalization of biochemical reactions, mechanical regulation and signalling (reviewed in refs. 34 , 36 ). Their composition is typically complex, consisting of hundreds of different proteins and nucleic acids, which form an extensive intermolecular network spanning length scales of nanometres to micrometres. The underlying mechanisms for biomolecular condensate assembly depend on their composition and architecture (reviewed in refs. 33 , 36 , 37 , 38 ). As a common denominator, this assembly is mediated by multivalent interactions leading to increased local concentration of a select molecular community, which creates a microenvironment with unique properties (reviewed in ref. 36 ). Here, we focus on the widely used model in which biomolecular condensates assemble through phase separation. We propose that targeting the emergent properties of the molecular community within condensates provides an untapped source of therapeutic agents. Notably, however, the c-mod design and discovery strategies discussed later in this Perspective are agnostic to the mechanisms underlying condensate assembly. 
Principles of condensate assembly Many biomolecular condensates are thought to assemble in a concentration-dependent manner to form non-stoichiometric macromolecular assemblies, via spontaneous or nucleated phase separation of a select set of proteins and/or nucleic acids. Phase separation occurs when the concentration of biomolecules exceeds the saturation concentration ( C sat ). This threshold defines the phase boundary, above which thermodynamics favours self-solvation of these biomolecules rather than solvation by the surrounding environment, driving formation of mesoscale assemblies rather than discrete biomolecular complexes. Minor changes in biomolecule concentration that cross the phase boundary trigger a sharp, switch-like response, leading to either condensate formation or dissolution, effectively changing the local concentration, sometimes over several orders of magnitude (Fig. 2a ). Importantly, this process is fast and reversible, making it an ideal element in sensing stress and other environmental changes. Fig. 2: Principles of condensate assembly and their regulation. a | Phase separation enables a sharp, switch-like response as the concentration of phase-separating biomolecule(s) exceeds the saturation concentration ( C sat ). In this case, a small change in bulk concentration can lead to sudden change in a molecule’s local concentration, and may lead to large changes in activity/signalling (top). By contrast, without phase separation, a molecule’s local concentration scales linearly with the bulk concentration, resulting in subtle effects (bottom). b | At atomic and molecular levels (angstrom to nanometre), various types of interactions (top left) and their valency (top right) define condensates. These interactions, in turn, define condensate composition. 
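The switch-like response around C sat described above can be sketched with a toy two-phase model in which, above the saturation concentration, the dilute phase stays clamped at C sat while the excess material collects in condensates at a fixed dense-phase concentration. All numbers here are hypothetical placeholders, not values from the Perspective.

```python
# Toy two-phase model of the switch-like response around C_sat.
# C_SAT and C_DENSE are hypothetical values chosen for illustration.

C_SAT = 1.0      # saturation concentration (arbitrary units)
C_DENSE = 100.0  # fixed concentration inside the condensed phase

def local_concentrations(c_bulk):
    """Below C_sat the system is one well-mixed phase; above it, the
    dilute phase is clamped at C_sat and the excess partitions into
    condensates at C_DENSE (returned as the second element)."""
    if c_bulk <= C_SAT:
        return c_bulk, None   # single phase, no condensates
    return C_SAT, C_DENSE     # two coexisting phases

for c_bulk in (0.5, 0.99, 1.01, 2.0):
    print(c_bulk, local_concentrations(c_bulk))
```

A tiny change in bulk concentration (0.99 to 1.01) flips the system from no condensates to a condensed phase a hundred-fold more concentrated than the dilute phase, mirroring the sharp, switch-like transition of Fig. 2a.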
Examples of biomolecular topology that promote condensation include intrinsically disordered proteins with multivalency encoded in short linear motifs, modular proteins with multivalency encoded in repeat folded binding domains, modular proteins with a condensation-prone intrinsically disordered region (IDR) and one or more folded domains that drive specific localization (for example, transcription factors), and nucleic acids (DNA and RNA). Scaffolds (orange) are characterized by higher partition coefficients and lower C sat values compared with clients (blue). Together, all smaller-scale interactions modulate the biomolecular network of the molecular community (bottom left) and the emergent material properties (bottom right) at the mesoscale (nanometre to micrometre), optimized for condensate function (fit for purpose). Solid-like condensates such as the Balbiani body are reversible under physiological conditions, in contrast to solid-like pathological amyloids. Part b is adapted with permission from ref. 124 , Elsevier. Full size image Assembly of biomolecular condensates is facilitated by multivalency encoded as multiple copies of folded domains or of structural motifs, and/or low-complexity IDRs 13 , 22 , 39 . These interactions are typically weak and contribute to fine-tuning the material properties of biomolecular condensates 40 , 41 (Fig. 2b ). Physical properties of condensates Biomolecular condensates exhibit a range of material properties (reviewed in ref. 21 ). Stress granules 42 , 43 , P granules 11 and nucleoli 12 exhibit liquid-like properties. Gel-like assemblies (that is, the centrosome 44 , RNA expansion repeats 45 and the nuclear pore 46 ) and solid-like functional amyloids (that is, the Balbiani body 47 and A-bodies 48 ) have reduced internal dynamics. 
These material properties are correlated with the composition and biological functions of the condensates and can be dynamically modulated through changes in the environment or active biological processes (discussed below). For example, the liquid-like properties allow stress granules to rapidly assemble and disassemble as conditions vary. The gel-like centrosome can withstand the microtubule pulling forces when the mitotic spindle is formed 49 . The Balbiani body is proposed to promote dormancy during oocyte storage by shutting down all biochemical reactions (reviewed in ref. 50 ). The complex network of interactions between the various members of a molecular community determines the material properties of a condensate. The biomolecules within a community share features that contribute to their compatibility and co-localization (reviewed in ref. 51 ). These features include similar amino acid bias in IDRs 52 , certain families of folded domains 13 , 53 , 54 and similar classes of nucleic acids 28 , 55 , 56 . These molecules can be classified as scaffolds or clients (Fig. 2b ) based on whether they are essential for the formation of the underlying network of a condensate 53 . Scaffolds are multivalent biomolecules required for condensate formation; they typically exhibit the lowest C sat among components, initiating the condensation process, and are characterized by a high partition coefficient (the ratio between concentrations inside versus outside the condensate) 13 , 52 , 53 , 57 , 58 . Clients are molecules that partition into condensates via interactions with the scaffolds; they are typically characterized by lower partition coefficients compared with scaffolds 59 . Typically, multiple macromolecules can function as co-scaffolds (for example, G3BP1/2 in stress granules 60 and PGL1/3 in P granules 16 , 61 , 62 ). 
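The partition coefficient defined above (the ratio of a component's concentration inside versus outside the condensate) gives a simple numerical handle on the scaffold/client distinction. In this sketch the component names and concentrations are invented for illustration; only the definition itself comes from the text.

```python
# Partition coefficient (PC) as defined in the text:
# PC = concentration inside the condensate / concentration outside.
# Component names and concentration values below are hypothetical.

def partition_coefficient(c_inside, c_outside):
    return c_inside / c_outside

components = {
    # name: (conc. inside, conc. outside), arbitrary units
    "scaffold_like": (50.0, 0.5),  # strongly enriched, low C_sat
    "client_like":   (5.0, 1.0),   # enriched via scaffold interactions
    "excluded":      (0.2, 1.0),   # depleted from the condensate
}

for name, (c_in, c_out) in components.items():
    pc = partition_coefficient(c_in, c_out)
    state = "enriched" if pc > 1 else "depleted"
    print(f"{name}: PC = {pc:.1f} ({state})")
```

Scaffold-like components show both the lowest C sat and the highest partition coefficients, while clients are enriched more modestly, consistent with the classification described above.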
The composition of condensates dynamically adjusts based on changes in bulk concentration of co-scaffolds and clients 53 , 63 , 64 , as well as in response to non-equilibrium processes (for example, activity of energy-consuming enzymes). Functions of condensates Condensates provide a distinct environment optimized for function. The intra-condensate milieu regulates enzymatic reactions by compartmentalizing components involved in related biological processes. This unique environment can modulate one or more of the following parameters: diffusion of components, enrichment in substrates and/or depletion of inhibitors 65 , 66 , 67 , 68 , 69 , 70 . Biomolecular condensates can respond rapidly, with a low energy threshold, to sudden environmental changes, such as temperature, stress, starvation, detection of foreign material or other cellular stimuli 71 . Phase separation affords a reversible mechanism for increasing the local concentration of a particular component within the condensate, while reducing it in the outside environment. Such processes are involved in mitigating cellular toxicity, by sequestering excess materials in response to stress. For example, stress signals sensed in the cytoplasm trigger assembly of stress granules, which compartmentalize untranslated RNA and RNA-binding proteins from the cytoplasm and nucleus 28 , 72 (Fig. 1 ). Similarly, stress sensed in the nucleus leads to dynamic changes in the nucleolus 73 and formation of nuclear stress condensates 74 , 75 , 76 . Biomolecular condensates also play roles in minimizing cellular noise 77 , control of genome packaging 56 , 78 , 79 , 80 , transcription 81 , 82 , cell-cycle control and DNA double-strand breaks 83 , viral assembly 84 and immune responses 66 (Tables 1 , 2 ). Regulation of condensates The composition of biomolecular condensates is complex, dynamic and varies with cell type and the type of signal that induces condensate formation. 
Studies indicate that biological systems are often optimized to reside close to the phase boundary. Thus, small changes in the environment (for example, metabolite or biomolecule concentrations, pH or temperature) tip the equilibrium to either dissolve or assemble the condensate, generating a rapid switch-like signal (reviewed in ref. 71 ) (Fig. 2a ). Condensates exist in a non-equilibrium state via the action of energy-consuming enzymes 12 , 85 , competitive interactions with ligands 63 , 64 , 86 , hydrotropes 87 and other perturbations, which regulate their function, composition and dynamics. Protein quality control, including chaperones, autophagy and proteasome degradation 85 , 88 , 89 , 90 , 91 , and the post-translational machinery are intimately involved in the regulation of condensates and their emergent properties. Post-translational modifications Assembly and disassembly of condensates is modulated by covalent post-translational modifications (PTMs) of protein components, such as phosphorylation, acetylation, methylation, SUMOylation, ubiquitination, PARylation (poly-ADP-ribosylation) and glycosylation (reviewed in refs. 92 , 93 ). These modifications can have a dramatic effect on the conformational ensemble and dynamics of IDRs involved in condensate scaffolding (reviewed in refs. 94 , 95 ). For example, epigenetic modifications were shown to induce changes in material properties of chromatin, modulating access of the transcriptional machinery to the genetic information 56 . Additionally, epitranscriptomic and post-transcriptional modifications on RNA contribute similarly to modulating phase behaviour of condensates (reviewed in ref. 39 ). Competitive interactions Proteins that serve as scaffolds for condensates can interact with chaperones (reviewed in ref. 85 ) and nucleocytoplasmic transporters (for example, karyopherins) 96 , thereby modulating the C sat , preventing and reversing aberrant phase transitions. 
Similarly, helicase action controls the partitioning of RNA within biomolecular condensates 97 . For example, HSP70 is required for the dissolution of aged stress granules containing misfolded SOD1 (ref. 88 ). Karyopherins prevent aberrant phase transitions of prion-like domains and reverse phase separation of amyotrophic lateral sclerosis (ALS)-associated proteins (for example, FUS, TDP43 and hnRNPA1) by directly binding to cognate nuclear localization signals within the target protein, aided by weak, non-specific interactions that compete for the condensation-driving interactions 98 , 99 . Compositional change As described above, condensate composition, dynamics, material properties and function are interrelated. Partitioning of RNA inside protein condensates tunes the viscosity in vitro and in vivo 100 , 101 , 102 , 103 . The level of RNA can either promote or dissolve condensates scaffolded by RNA-binding proteins 104 , 105 , 106 , 107 . Such a regulatory mechanism has significant implications in biology; a notable example is regulatory feedback in transcription, where incipient amounts of RNA synthesis promote stabilization of transcriptional condensates, whereas accumulation of transcripts promotes their dissolution 108 . In addition to RNA, other nucleotide polymers can tune the stability of biomolecular condensates, including DNA 56 , 66 , 80 and PAR 109 , 110 . Modulation of the protein composition of a condensate can affect the dynamics of individual components. For example, the centrosome nucleator SPD2 diffuses faster when its binding partners PLK1 and TPXL1 are present in the reconstituted condensates 44 . In a recent report, An et al. 111 demonstrated that aberrant, persistent pathological stress granules formed by an ALS-associated FUS mutant exhibit a proteomic composition different from that of normal stress granules. 
These stress granules are characterized by enriched physical interactions between components, consistent with earlier observations that pathological stress granules are less dynamic. Conformation-encoded regulation Condensate-associated proteins exhibit a modular topology that allows them to function as interaction hubs by engaging with multiple types of macromolecules. They encode structural switches that promote transitions from freely diffusing discrete monomers or oligomers to a ligand-bound scaffold of a large macromolecular network within a condensate, as described for the nucleolar and stress granule scaffolds, NPM1 (ref. 86 ) and G3BP1 (refs. 112 , 113 ), respectively. In these examples, ligand binding alters the conformation of a protein at atomic scale, triggering remodelling of the nanometre-to-micrometre scale molecular network of the condensate. Within the condensate microenvironment, IDRs can remain disordered, as seen in DDX4 (ref. 114 ) and FUS 107 , or can undergo folding upon binding. FUS and TDP43 were shown to form cross-β structures within their LCD domains in hydrogels to stabilize intermolecular interactions 115 , 116 , and the carboxy-terminal LCD of TDP43 stabilizes a helix structure upon dimerization in liquid-like droplets 117 . Modulation of the helical propensity by mutations, including those associated with ALS, affects not only the C sat for phase separation but also the material properties of the resulting condensates and the splicing function of TDP43 (ref. 117 ). Spatial positioning Biomolecular condensates provide a means for cells to spatially regulate important processes. For example, at homeostasis, FUS and TDP43 fulfil roles in RNA splicing and metabolism in the nucleus but are sequestered into cytoplasmic stress granules under stress conditions. The function of HSF1, a transcription factor for heat shock chaperones, is modulated under stress conditions via sequestration into nuclear condensates 74 . 
Organization of the cytoplasm, such as spatial patterning of specific transcripts in polar cells, is achieved by encapsulation of the target RNA molecules in biomolecular condensates 11 , 101 , 102 . Multi-compartment organization of the condensate interior (reviewed in ref. 37 ) arises via coexisting, non-miscible phases 40 , 103 , 118 , 119 , 120 . The nucleolus exhibits a three-layered architecture, determined by the surface tension with respect to the nucleoplasm 40 . It was proposed that the material properties of the different nucleolar layers are optimized to promote the correct sequence of steps and the vectorial flux in ribosome biogenesis 40 , 64 . Cellular surfaces, such as chromatin, membranes and the cytoskeleton, can serve as regulators of spatial positioning of biomolecular condensates. Membranes serve as nucleators to drive phase separation by restricting molecular diffusion and promoting local crowding effects (reviewed in ref. 95 ). For example, T cell signalling condensates and TIS granules form at the plasma membrane 65 and on the endoplasmic reticulum membrane 121 , respectively. Su et al. showed that phosphorylation of the T cell receptor triggers clustering of LAT (linker for activation of T cells) into mesoscale condensates at the plasma membrane, and that these condensates recruit components of T cell signalling, which subsequently trigger actin polymerization as a functional output 65 . Taken together, these mechanisms governing their behaviour, function and regulation play an important role in the normal function of condensates in cells and provide valuable insights for designing strategies to correct their malfunction in disease. 
Condensates in disease Here, we define a ‘condensatopathy’ as an aberration of a condensate that drives a specific disease phenotype; such aberrations have been observed in in vitro model systems and in vivo cellular and animal models of neurodegenerative diseases, dilated cardiomyopathy, certain types of cancer (reviewed in refs. 23 , 27 , 122 , 123 , 124 ) and viral infections. Intriguingly, there are multiple examples of disease-associated genetic mutations that affect proteins identified in biomolecular condensates. This suggests the possibility that these mutations might dysregulate condensate function, and thereby drive disease. We discuss a few better-understood examples below. Although correlations between condensate malfunction and disease in these model systems have been well documented, strong support for causation is still under development. Neurodegeneration ALS and frontotemporal dementia (FTD) pathology have been linked to environmental factors and a diversity of genetic alterations, including numerous point mutations in the low-complexity regions of RNP-granule localized proteins, as well as repeat expansions. Point mutations in proteins such as FUS 42 and hnRNPA1 (ref. 43 ) accelerate the kinetics of phase transition and promote amyloid-like fibril formation within the condensate environment in in vitro studies. Altered kinetics of clearance of RNP granules, namely prolonged persistence of condensates, was associated with the ALS-hallmark phenotype of cytoplasmic inclusions in cultured cells and neurons. For example, repetitive cycling of G3BP1-positive condensate formation and increased persistence, modulated via an optogenetics model, evolved towards cytoplasmic proteinaceous inclusions and caused cellular toxicity 125 . A recurring pathological observation in ALS/FTD is the presence of TDP43-rich cytoplasmic granules, irrespective of whether the TDP43 gene harbours mutations 126 . 
In vitro and cellular data suggest that these granules arise as condensates and undergo ageing (reviewed in ref. 23). This spatial re-localization of nuclear TDP43 into cytoplasmic condensates in cultured neurons was associated with increased condensate viscosity 127 and splicing defects in several motor neuron-specific mRNAs, including that encoding stathmin 2 (STMN2) 128, a neuron-specific regulator of microtubule stability. The TDP43 condensatopathy causes a loss of function of STMN2 (ref. 128) and impaired axonal growth and regeneration 127,128. Furthermore, optogenetic formation of TDP43-positive condensates via blue light illumination was sufficient to recapitulate, in a Drosophila model, the progressive motor dysfunction observed in patients with ALS 129. Repeat expansions of a short nucleotide segment are another type of genetic alteration associated with diseases such as ALS/FTD, myotonic dystrophy, spinocerebellar ataxias and Huntington disease. The severity of these diseases scales with the repeat length (multivalency) of the transcript and/or translated polypeptide, and diagnosed clinical cases exhibit a minimum threshold repeat length. The resulting polyvalent RNAs and polypeptides have the hallmarks of biomolecules that localize to condensates as scaffolds, resulting in condensatopathies that sequester other important biomolecules. C9orf72 G4C2, CAG and CUG repeats are found in ALS/FTD, Huntington disease and myotonic dystrophy, respectively; transcripts containing variable lengths of these repeats form RNA foci in live cells, and dynamically arrested condensates in vitro 45. PolyGA and polyGR peptides, resulting from ATG-independent translation of the G4C2 repeats, have been shown to sequester proteasomal and nucleolar proteins, respectively (reviewed in ref. 130).
Furthermore, overexpressed or exogenously added arginine-rich polypeptides derived from G4C2 repeats insinuate themselves into pre-existing cellular condensates, such as the nucleolus, RNP granules and the nuclear pore complex. As a result, the condensates change their composition, material properties and function owing to competition between the G4C2 peptides and the native interactions 131,132. Correcting the altered material properties and/or sequestration of biomolecules caused by the underlying condensatopathies may prevent or reverse these neurodegenerative diseases.

Cardiomyopathy

Condensatopathies associated with dysregulation of RNP granules are not limited to neurodegenerative diseases. A mutation in the gene encoding the tissue-specific alternative splicing factor RBM20, found in patients with congenital dilated cardiomyopathy, is characterized by an RNP condensate defect coupled with contractile dysfunction and aberrant heart anatomy in a heterozygous pig model 26. The R636S point mutation, localized in the low-complexity disordered RSRS region, causes aberrant sarcoplasmic accumulation of RBM20. At the cellular level, the dominant effect of the mutant leads to RBM20 re-localization from nuclear splicing speckles to cytoplasmic condensates that fuse with other cellular condensates harbouring stress granule markers 26. This condensatopathy causes sequestration of mRNA, polysomes and cardiac cytoskeleton proteins (for example, ACTC1). Interestingly, mice harbouring the congenital dilated cardiomyopathy mutation exhibited a more severe cardiac dysfunction phenotype than mice lacking RBM20 (ref. 133). Collectively, these observations suggest that the pathological mechanism attributed to the condensate phenotype is complex, involving loss of nuclear splicing function for RBM20, loss of function of proteins that partition into aberrant cytoplasmic RBM20 condensates and a gain of function of these condensates.
Therefore, the RBM20 condensatopathy serves as a hub for misregulation of multiple pathways in congenital dilated cardiomyopathy and is an attractive node at which to target this disease therapeutically.

Cancer

Recent progress has revealed links between condensatopathies and several types of cancer. These condensatopathies deregulate many processes, including, but not limited to, genomic stability, signalling, protein quality control and transcription (reviewed in refs. 134,135). Transcription of key developmental genes is often under the control of super-enhancers, which are classically defined by chromatin immunoprecipitation sequencing as clusters of enhancers bearing large amounts of transcriptional machinery (transcription factors, coactivators and RNA polymerase II (Pol II)). This high-density assembly of proteins at super-enhancers is now understood to constitute transcriptional condensates that drive gene expression 81,82,136. These insights challenge the stoichiometric model of transcription, suggesting novel properties and functions of transcription factors and coactivators in a concentrated condensate of protein and DNA. For example, the function of transcription factor activation domains was poorly understood because they contain IDRs not amenable to crystallography; it is now becoming clear that they may activate genes, in part, through their capacity to condense with coactivators on genomic regulatory elements. Aberrant transcription of oncogenes is a general feature of cancer cells and often occurs through condensatopathies, such as acquisition of aberrant super-enhancers 137. Condensatopathies resulting in aberrant gene expression are also associated with cancers. Several chromosomal translocations have been identified in which a condensation-prone IDR fused to a chromatin-associating folded domain creates aberrant condensates. Two examples are EWS–FLI in Ewing sarcoma 138 and NUP98–KDM5A in leukaemia 139.
NUP98–KDM5A is one of many genetic translocations that fuse the condensation-prone amino-terminal FG-rich IDR of a nucleoporin (for example, NUP98 and NUP214) with a folded domain that anchors it at a specific location on chromatin, such as a DNA-binding domain (for example, HOXA9, HOXA13 and PHF23), a helicase domain (for example, DDX10) or a histone-binding domain (for example, KDM5A and NSD1) 140. These genetic translocations result in condensatopathies that share a common expression-reprogramming phenotype, with upregulation of the developmentally silenced Hox genes. Such cancers of diverse genetic aetiology may therefore be treatable by similar drug strategies aimed at the underlying condensatopathies.

Viral infections

Biomolecular condensates are also leveraged by pathogens such as viruses to more effectively hijack the host cell and evade the host's innate immune self-defence mechanisms (reviewed in refs. 10,84,141). Literature reports link biomolecular condensates to multiple steps of the viral replication cycle, including viral entry and egress, transcription, protein synthesis, and genome and virion assembly (reviewed in ref. 84). Certain viral infections (for example, rabies and mammalian orthoreovirus) 10 induce formation of stress granules. Interestingly, although Negri bodies in cells infected with rabies virus share some protein and RNA components with stress granules, the two biomolecular condensates behave as immiscible liquids 142, highlighting the importance of the whole molecular community in determining the identity, function and material properties of a condensate. This concept of molecular community-imposed selectivity becomes important when designing compounds that target specific biomolecular condensates. Viruses have evolved to evade the host's innate immune response via multiple mechanisms.
The host senses foreign cytoplasmic genomic material via pathogen receptors such as RIG-I and MDA5 and induces PML body assembly in the nucleus as part of the interferon-dependent innate immune response. Partitioning of viral RNA within viral factory condensates provides a shielding mechanism, preventing its detection by the cytoplasmic pathogen-sensing machinery. Additionally, DNA and RNA viruses disrupt PML bodies as part of their nuclear replication (reviewed in ref. 10). Viral latency is one of the primary challenges preventing the development of cures for viral infections such as HIV-1. The histone chaperone CAF1 condenses with the viral HIV-1 LTR to form nuclear bodies that recruit other histone chaperones and epigenetic modifiers, and these condensates maintain the integrated viral genome during latency 143. These observations could provide a novel intervention point to reactivate latently HIV-1-infected cells, a long-standing focus of efforts to develop a potential cure for HIV-1 infection. Insights from in vitro and in-cell overexpression model systems into the molecular mechanisms of replication and host evasion of SARS-CoV-2 indicate that dimerization of the nucleocapsid protein 144 promotes phase separation with specific viral RNA elements, primarily located at the 5ʹ and 3ʹ UTRs 145, as well as with host heterogeneous nuclear RNPs, such as stress granule proteins 146. Phase separation inhibits PTMs such as Lys63-linked polyubiquitination of the host antiviral signalling protein MAVS, thereby suppressing activation of the innate immune system 144.

Drug discovery strategies

The roles of condensates in normal and aberrant cellular functions are becoming clearer, and a range of tools are now available to study these cellular phenomena, including protein proximity labelling, advanced microscopy techniques and computational methods, as discussed further below.
Accordingly, there is a growing opportunity to explore condensate-informed approaches to drug discovery. We introduce the term condensate-modifying therapeutics (c-mods) to describe drugs that modulate the physical properties, macromolecular network, composition, dynamics and/or function of specific biomolecular condensates to prevent or reverse disease. A c-mod discovery programme may have one of three objectives: repairing a condensatopathy; disrupting the normal functioning of a condensate implicated in disease; or preventing a target from functioning, either by disabling it within its native condensate or by de-partitioning the target from its native condensate (Fig. 3a). In each case, the drug discovery strategy will be based on a screening and validation model in which a condensate optical phenotype is reliably correlated with one or more functional, disease-relevant outputs.

Fig. 3: Goals and strategies for developing c-mods. a | Condensate-modifying therapeutics (c-mods) are developed to achieve one or more of the following objectives: to repair or eliminate a condensatopathy (left); to prevent a specific target from functioning by either delocalizing it from its native condensate (centre) or rendering it inactive within the condensate; or to disrupt the function of a normal condensate (right). b | Strategies to modulate the emerging properties of condensates with c-mods, described in detail in the text. These strategies can be used individually or in combination, and any one strategy can influence multiple characteristics of a condensate; for example, modulating the scaffold will probably result in changes in composition and material properties.

First, for condensatopathy repair, when an aberrant condensate has clearly been implicated in causing a disease, the objective would be to restore normal condensate behaviour or to remove aberrant condensates, either by preventing their formation or by eliminating them once formed.
This could be considered a phenotypic screening strategy, with condensate behaviour in model systems being assessed in the initial screen and the hits further validated in a disease-relevant secondary assay, as discussed below. There need not be a specific target or pathway, nor any presumed molecular mechanism by which the effect on the condensate is achieved, although such information may be available at the pathway or target level. Second, in cases where the normal functioning of a condensate is implicated in a biological process central to a disease, the objective would be to develop a c-mod that interferes with the condensate behaviour, ideally only in the disease-relevant cells. The screening strategy would be similar to that described above. In the third case, the objective would be to render a specific target inactive, either by 'disabling' its ability to function within its native condensate environment or by removing it from that environment. This is especially relevant for targets of high therapeutic interest that are often described as 'undruggable' owing to selectivity issues or the intrinsic difficulty of finding chemical matter that interferes with their function. If new condensate knowledge indicates that such targets function within a condensate environment, novel strategies could be adopted to disable them. Programmes focused on condensatopathy repair or on disrupting the normal functioning of a condensate implicated in disease may be entirely driven by phenotype. By contrast, programmes that seek to block target function focus on tracking the behaviour of that specific target. Such targets might be de-partitioned out of the condensate, thereby rendering them inactive; alternatively, they might remain in the condensate but be prevented from engaging in the interactions necessary for function.

C-mod discovery strategies

A wide variety of strategies may be envisioned to identify c-mods that achieve these three objectives.
The preferred strategies in any situation will depend on the desired pharmacological outcomes and on detailed knowledge of the components, structure and function of the given condensate.

Modulating the condensate scaffold

Modulating a condensate scaffold is expected to have drastic effects on the stability of condensates, such as persistence and Csat (that is, formation or dissolution) (Fig. 3b), as well as on material properties and/or composition. If a c-mod intercalates between two or more condensate components, or changes the interaction valency or interaction strength between (co-)scaffolds (within folded and/or disordered domains), it could also change the material properties. The goal is not full inhibition of a particular protein but, rather, disruption of the composition or stability of a biomolecular condensate, which can be achieved via modest changes in the weak networking interactions. Scaffold modulation could be achieved in various ways. One approach is tuning valency. For example, a low-valency poly-PR peptide dissolved heterotypic condensates consisting of NPM1 and a multivalent poly-PR peptide in vitro 132, suggesting that replacing network-stabilizing multivalent interactions with monovalent, terminal ones is a feasible c-mod mechanism. A second approach is directly blocking or stabilizing protein–protein 144, protein–nucleic acid or nucleic acid–nucleic acid interactions that contribute to scaffolding 147. For example, short bait RNAs prevented formation of TDP43 inclusions in an optogenetic cellular model 148, probably via a mechanism that outcompetes TDP43–TDP43 self-interaction. A second example is the topoisomerase inhibitor and nucleic acid intercalator mitoxantrone (Table 3), which inhibited stress granule formation in a phenotypic high-content screen using two different cell lines and multiple types of stress, and was shown to block the RNA-dependent recruitment of RNA-binding proteins, including TDP43.
These compounds reduced the persistence of TDP43 puncta in induced pluripotent stem cell-derived motor neurons 31; the exact mechanism of action, and how it relates to the annotated activity of this compound, remains to be investigated.

Table 3: Examples of compounds with evidence of condensate-modulating activity

Several encouraging proofs of concept for condensate-targeted antiviral drug discovery have been reported, although the exact mechanisms of action are not fully elucidated. Small molecules such as kanamycin (Table 3) are able to destabilize nucleocapsid-containing biomolecular condensates, both in vitro and in cultured cells 143. Additionally, a peptide that inhibits nucleocapsid dimerization prevented condensation and viral replication, and rescued the innate immune response in live cells and mouse models 142. Cyclopamine (Table 3) analogues modulated the material properties of RSV viral factories in infected cells, reducing the dynamics of M2-1 protein recovery upon photobleaching and translating into reduced viral replication in the lungs of living mice 25. A c-mod could also stabilize non-productive conformations (for example, in folded domains or disordered regions), thereby preventing scaffolding contacts; alternatively, it could trap an aberrant or excess protein in inactive condensates (for example, depots). Sulforaphane (Table 3) treatment of colorectal cancer cells induces formation of β-catenin nuclear depots that partially co-localize with the transcriptional repressor PRMT5; the appearance of the nuclear depots is associated with a reduction in β-catenin-dependent transcriptional activity 149.

Modulating condensate composition

C-mods can be envisioned that inhibit or promote client–scaffold interactions to drive target exclusion from, or inclusion into, the condensate, respectively. For example, an aberrantly de-partitioned protein could be helped to return to its 'home' condensate.
Nucleolar protein NPM1 is aberrantly localized to the cytoplasm in acute myeloid leukaemia (AML). The natural product avrainvillamide covalently binds mutant NPM1, returning the protein to the nucleoplasm and nucleolus in cell lines from patients with AML 150. As discussed in previous sections, changes in condensate composition can affect numerous features, from material properties (for example, viscosity and surface tension), to dynamics and the ability to respond to environmental stimuli (for example, persistence and ageing), to the enzymatic activity of individual components (for example, cGAS 66, UBC9 (ref. 68) and Dcp1/2 (ref. 67)).

Modulating the conformational and interaction landscape

C-mods that interact with the IDRs of a protein may alter the ability of that protein to partition into a condensate or prevent it from forming intermolecular interactions with other biomolecules within the condensate. Because IDRs are conformationally highly dynamic, it is challenging to use traditional structure-based methods to screen for c-mods that interact with them. However, c-mods could work by engaging with IDRs to either decrease the population of functionally active states or increase the population of inactive or inhibitory states. Some drug targets, including transcription factors (for example, MYC), hormone receptors (for example, the androgen receptor) and nucleotide-binding proteins (for example, TDP43), contain IDRs and are known to localize to biomolecular condensates. Although small molecules have been identified that bind to these IDRs, they generally do so with micromolar affinities; covalent binders have been reported for the MYC 151 and androgen receptor 152 IDRs (Table 3), and non-covalent IDR binders have been reported for p27Kip1 (refs. 153,154).
The ability to produce a high local drug concentration within a condensate might allow the development of lower-affinity, but highly partitioned, drugs that are effective against proteins in these families, which have so far been highly challenging to target. Differences in protein conformation inside versus outside a condensate might also be leveraged to develop c-mods that are selective for one of the conformations, potentially minimizing off-target effects.

Degraders

One approach to effectively remove a specific protein from a condensate is to degrade it using proteolysis-targeting chimera (PROTAC) or molecular glue strategies 155. There are now several reports suggesting that E3 ligases involved in protein degradation function within condensates 156,157,158,159. Several PROTAC design strategies for neurodegeneration targets, including TDP43, α-synuclein, tau and huntingtin, are discussed in ref. 160. Similar in concept, RIBOTACs are bifunctional molecules that target specific RNA molecules for ribonucleolytic degradation 161. It has been shown that nuclear p62-containing condensates are essential to efficient proteasomal function, serving as a hub for efficient nuclear protein turnover 162. Autophagy is also critically dependent upon phase separation. For example, autophagosome-tethering compounds are molecular glues that selectively target mutant huntingtin for degradation via autophagy by binding selectively to the expanded polyQ tract and LC3 (ref. 163). Using similar approaches, one can envision degrading a scaffold to reduce its effective concentration below Csat, thereby preventing or reversing condensate assembly.

Modulating condensate regulatory processes

Enzymes such as chaperones and helicases can play critical roles in regulating the condensate environment 95.
Such regulation could affect the condensate environment generally, or may more selectively affect the behaviour of particular proteins or nucleic acids, either by preventing them from interacting with their usual partner molecules or by dramatically affecting their properties (for example, conformation, solubility and valency), causing them to de-partition out of the condensate. Affecting turnover kinetics may be another mechanism to regulate condensate composition. Another option could be to change the post-translational state of a protein, or the epigenetic or epitranscriptomic state of DNA or RNA, respectively. As discussed earlier, this may modify the ability of a biomolecule to nucleate formation of the condensate, change its residence time in the condensate or alter its function inside the condensate. Phosphorylation and methylation are among the most studied PTMs that modulate protein condensation 93. RNA post-transcriptional modifications are essential regulators of RNA function and may affect the ability of those RNAs to phase separate 39,164. Furthermore, epigenetic regulation via histone methylation and acetylation status tunes the phase separation of chromatin 56.

Optimizing partitioning of drugs into condensates

Condensates contain key drug targets such as enzymes, transcription factors, DNA and coactivators. This creates a unique local microenvironment that may selectively increase or decrease the concentration of small molecules, thereby affecting their target engagement and therapeutic efficacy (Fig. 3b). For example, cisplatin and JQ1 (Table 3) are antineoplastic compounds that act by intercalating DNA and inhibiting transcription, respectively. Both have recently been shown to partition specifically into transcriptional condensates 30.
Transformed cells often acquire transcriptional condensates at oncogenes, and high concentrations of intercalating agents or inhibitors at these key genes might account for the heightened sensitivity of cancer cells to agents that target universal cellular processes 82. This partitioning behaviour might explain the ability of these compounds to preferentially kill cancer cells, but it is not yet clear whether they function inside the condensates, and a systematic comparison of efficacy versus condensate partitioning within a drug analogue series has yet to be reported.

Considerations and challenges

The compositional complexity, size and dynamics of biomolecular condensates pose several challenges for drug discovery. First, reliable models reflecting relevant biology, and well-defined metrics for characterization of condensates, are imperative. Challenges in the quantitative characterization of condensates and the development of model systems are discussed in refs. 33,34. Condensates are exquisitely sensitive to variations in the expression levels of their components and regulators, and to changes in environment. For example, an overexpressed protein, or a protein engineered to undergo phase separation more readily 58,81,136,138,148,165,166,167, could induce formation of condensates in a model cell line even though, at endogenous expression levels in the disease-relevant cell line, the protein exists below Csat, raising the question of the relevance of the screening outcome. Such artificial model systems have been used extensively in the field of transcription, where the small size and transient nature of the condensates or hubs make their quantification via conventional microscopy methods challenging. Second, c-mod discovery will depend on cellular phenotypic assays (as described below), and so identification of the target(s) driving the observed phenotypes will seldom be straightforward.
C-mods may affect condensates via a wide range of mechanisms, from direct interactions with one or more biomolecules within the condensate, to general effects on the emergent properties of the condensate, to altered PTMs of proteins that prevent them from entering the condensate. For some c-mod discovery efforts, the target(s) will not be known, and compound optimization will be driven solely by phenotypic cellular assays. In cases where the targets of interest are known, a different challenge will emerge: correlating often-subtle measures of condensate phenotypic behaviour with more traditional measures used in drug discovery programmes, such as biochemical read-outs, intracellular target engagement, measures of gene expression, disease-relevant functional cellular read-outs or pharmacological effects. A third challenge is how to optimize selective partitioning of the c-mod into the disease-relevant condensate. The properties of condensates vary widely and are difficult to quantify. To improve the therapeutic index, compound properties such as polarity, lipophilicity, hydrogen bond donor/acceptor count, charge, overall shape, flexibility, aromaticity and the presence of specific functional groups may influence partitioning and can be optimized for a specific condensate environment. An additional selectivity challenge arises because many biomolecular components are found in multiple condensates. We hypothesize that a c-mod that partitions non-selectively into many condensates may cause unacceptable off-target effects. However, if a c-mod concentrates in the condensate of interest, functional selectivity is likely to be high: each condensate contains on the order of a hundred kinds of gene products, so many other potential binding partners for that c-mod are simply absent, supporting a high therapeutic index.
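The partitioning and selectivity considerations above can be made concrete with a simple calculation. The sketch below (Python; the condensate names, intensity values and thresholds are hypothetical, invented purely for illustration) estimates an apparent partition coefficient of a labelled compound from mean fluorescence intensities inside and outside condensates, and a selectivity ratio between the target condensate and the strongest off-target one, the kind of metric that could be tracked during c-mod optimization.

```python
# Hedged sketch: estimating condensate partition coefficients from imaging data.
# All numbers are hypothetical; real values would come from calibrated mean
# fluorescence intensities of a labelled compound in segmented condensates.

def partition_coefficient(mean_dense, mean_dilute):
    """Apparent partition coefficient K = [drug]_condensate / [drug]_dilute."""
    return mean_dense / mean_dilute

# Hypothetical mean intensities of a labelled c-mod (arbitrary units)
dilute_phase = 10.0
condensates = {
    "transcriptional": 350.0,   # the disease-relevant target condensate
    "nucleolus": 40.0,
    "stress_granule": 25.0,
}

k_values = {name: partition_coefficient(i, dilute_phase)
            for name, i in condensates.items()}

# Selectivity: enrichment in the target condensate over the strongest off-target
target_k = k_values["transcriptional"]
off_target_k = max(k for name, k in k_values.items() if name != "transcriptional")
selectivity = target_k / off_target_k

print(k_values)                  # K per condensate (target: 35.0)
print(round(selectivity, 2))    # ~8.75-fold enrichment over the next condensate
```

In a real programme these ratios would be measured across an analogue series and correlated with functional read-outs, which is exactly the efficacy-versus-partitioning comparison the text notes has yet to be reported.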
In addition, if a c-mod can bind, even weakly, to multiple related proteins within the target condensate, this may yield a pharmacologically relevant effect.

Building a c-mod discovery platform

Converting our knowledge of biomolecular condensates and their involvement in disease into c-mods requires a novel drug discovery platform, which can be divided into four main parts (Fig. 4): identification of a disease-relevant target condensate, and formulation of a hypothesis on how modification of the target condensate could have desired functional effects in cellular models of disease (the 'condensate hypothesis'); characterization of the target condensate and development of assays to measure such effects, enabling validation of the hypothesis; high-throughput screening (HTS) to identify potential c-mods that reverse or prevent the aberrant condensate phenotype; and hit-to-lead optimization based on the desired, disease-relevant functional outcome. Such a discovery platform will be built on existing (reviewed in ref. 35) as well as novel interdisciplinary assays.

Fig. 4: Building a c-mod discovery pipeline: NUP98–HOXA9 as a case study. a | The first step is validation of the condensate hypothesis by testing the correlation between the genetic alteration (NUP98–HOXA9 fusion), the aberrant condensate phenotype and aberrant transcription of HOX genes. b | A proof-of-concept drug discovery pipeline for the NUP98–HOXA9 condensatopathy. A primary phenotypic high-throughput screen (HTS) in a cell line expressing NUP98–HOXA9 could identify compounds that change the morphology of aberrant condensates. Hit compounds with various chemotypes could be filtered and prioritized (for example, with the aid of machine learning/artificial intelligence or through traditional methods) based on various characteristics (two are shown in the graph).
Selected hits would then move into secondary validation assays, where one or more functional outcomes are monitored in disease-relevant cell lines (for example, genome occupancy by ChIP-seq, and in vitro pharmacology by proliferation kinetics) and in vivo (for example, tumour growth and survival rates in animal models). Lead compound characteristics would then be optimized in a panel of assays, ranging from in vitro binding studies to the target, biophysical characterization of the lead compound's effects on the composition and material properties of in vitro reconstituted and endogenous condensates, partitioning measurements, and toxicity and off-target measurements, in addition to secondary functional assays. Parts a and b are adapted with permission from refs. 168, 169, CC BY 4.0 ( ), Elsevier.

A foundational piece in the development of a c-mod discovery pipeline is establishing a reliable connection between the target condensate phenotype and a disease-relevant functional read-out. For example, a correlation between the decapping activity of Dcp1/2 (ref. 67) and condensate formation was determined by tracking the fluorescence of a dual-labelled RNA probe. MED1-IDR-induced condensation was correlated with transcriptional output measured in an in vitro transcription assay 81. Fluorescence microscopy was used to monitor and quantify actin polymerization in response to signalling cluster formation 65. Splicing 26 and cardiac defects in transgenic pigs 26 were linked to the phenotypic observation of the RBM20 condensatopathy. Behavioural changes (for example, crawling ability) in a Drosophila ALS model 129 and survival curves in a cardiomyopathy pig model 26 have been used successfully to correlate condensatopathies with clinical presentations of disease in vivo. These functional assays should also be utilized later in the pipeline, as secondary hit validation assays and to optimize c-mod efficacy.
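At its simplest, establishing such a phenotype–function connection is a correlation analysis between a condensate metric (for example, puncta persistence per condition) and a functional read-out. The toy sketch below (Python, standard library only; all data points are invented solely to illustrate the calculation, loosely echoing the TDP43/STMN2 example discussed earlier) computes a Pearson correlation coefficient between the two measurements.

```python
# Hedged sketch: correlating a condensate phenotype metric with a
# functional read-out. The data are hypothetical illustration values.

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-condition data: mean persistence of TDP43 puncta (minutes)
# versus a functional read-out (for example, relative STMN2 mRNA level).
persistence = [5, 12, 20, 35, 60]
stmn2_level = [1.00, 0.85, 0.60, 0.40, 0.15]

r = pearson_r(persistence, stmn2_level)
print(round(r, 2))  # strongly negative (~ -0.97): longer persistence, lower STMN2
```

A strong correlation of this kind is what justifies using the optical condensate phenotype as a surrogate in the primary screen, with the functional read-out reserved for secondary validation.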
To illustrate how such a c-mod discovery pipeline could look in practice, we use the example of repairing the NUP98–HOXA9 condensatopathy in AML, based on a range of complementary assays described by Chandra et al. 168 and Xu et al. 169 (Fig. 4).

Formulate a condensate hypothesis

The initial step of condensate-centric drug discovery is to select a target condensate and formulate a condensate hypothesis. Targets could originate from several sources, including pre-existing data on the disease relevance of a condensate, de novo identification of a condensatopathy or de novo identification of the condensate association of a conventional target. Curated databases of genetic variants with strong disease association 170,171, when combined with predictors of condensation-prone features 172 of the mutated proteins, provide a rich source for hypothesis generation and can help prioritize proteins that probably play central roles in condensate assembly 52. Additionally, computational methods (Box 1) have the potential to decode, a priori, how biomolecules cooperate to form condensates or how putative c-mods affect condensates, and to aid de novo discovery of c-mods. However, the complexity of condensates creates challenges for computational methods. Computational analysis can take the form of data mining the existing knowledge of condensate composition and the properties of simplified in vitro systems to generate predictions. In this vein, several databases of phase-separating proteins or condensate components have begun to be curated (reviewed in ref. 173); such databases can be an excellent first source when exploring a condensate hypothesis. A hypothesis that a NUP98–HOXA9 condensatopathy is responsible for cellular transformation in AML cells can be formulated based on the following findings (Fig. 4a). First, human genetics data show a strong correlation between expression of NUP98 fusion oncogenes and AML clinical manifestation 174.
Second, expression of NUP98–HOXA9 in cultured cells induces formation of nuclear puncta, driven by the FG-repeat IDR of NUP98 (refs. 168 , 175 , 176 ).

Box 1 Computational methods to inform the study of condensates

Algorithms such as CatGranule 247 and P-score 172 have been trained on prior knowledge and are reasonably successful in predicting which protein sequences will phase separate. The PLAAC algorithm 248 , designed to detect prions, performs similarly to algorithms specifically trained to predict phase separation (reviewed in ref. 249 ), suggesting that existing predictors might be detecting a particular ‘flavour’ of disorder. However, it is expected that as the volume of training data for such predictors increases, the quality and utility of databases and predictors will also increase. Of note, the potential for a protein to homotypically phase separate is only one variable controlling the behaviour of condensates. Understanding the mechanism of action of condensate-modifying therapeutics (c-mods) or generating a condensate hypothesis requires more nuance. Such characterization can include an in-depth understanding of protein flexibility and disorder. Many different predictors of protein disorder exist, and aggregators of information on disorder such as D2P2 (ref. 250 ) or MobiDB 251 are useful in parsing the results. These databases can inform on disorder and annotated motifs, and hint at hidden structures. Similarly, when viewed through a lens of disorder or frustrated folding, predictions from AlphaFold 252 can provide information on protein conformational bias. Perhaps the most critical analyses of condensate components investigate how a protein (or nucleic acid) interacts within the condensate network. Tools that assess hidden functions within disordered regions 253 , 254 or predict short linear motifs in disordered regions 255 provide valuable clues about how flexible regions interact in condensates.
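As a toy illustration of the composition-based reasoning behind such predictors, the script below counts FG dipeptides (the multivalent motif of NUP98-type FG-repeat IDRs) and the fraction of disorder-promoting residues in a sequence. Both the peptide and the residue alphabet are illustrative assumptions, not the scoring scheme of any published predictor.

```python
# Minimal sketch of composition-based flags for condensation-prone regions.
# DISORDER_PRONE is an illustrative set of residues over-represented in IDRs.
DISORDER_PRONE = set("GSPQNAFE")

def fg_motif_count(seq):
    """Count FG dipeptides, the multivalent motif of NUP98-type IDRs."""
    return sum(1 for i in range(len(seq) - 1) if seq[i:i + 2] == "FG")

def disorder_fraction(seq):
    """Fraction of residues drawn from a disorder-promoting alphabet."""
    return sum(r in DISORDER_PRONE for r in seq) / len(seq)

# Made-up FG-repeat-like peptide (not the real NUP98 sequence).
toy_idr = "GGFGQSNFGSPAFGQQFGNKPTGFGSSFG"
print(fg_motif_count(toy_idr), round(disorder_fraction(toy_idr), 2))  # 6 0.93
```

A real predictor would combine many such features (charge patterning, prion-likeness, motif density) learned from curated training data, as the algorithms cited above do.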
This list is far from comprehensive; a wide array of protein property databases and predictors exist. The unifying feature of condensates is the array of distributed interactions with a range of affinities. Any bioinformatic tool that shows how a biomolecule interacts with its environment can aid in understanding condensate structure and the c-mod mechanism of action. Emerging advances in all-atom simulations can also be used to elucidate the binding mode (as was shown for EPI-002 and EPI-7170 to the intrinsically disordered region (IDR) of the androgen receptor) and infer a structural rationale for their differential potencies 256 . All-atom and coarse-grained molecular simulations can be integrated at several points in the drug discovery pipeline, including but not limited to identification of interesting binding features or interfaces in the target biomolecule, obtaining insights into the mechanism of action of a hit compound 195 or guiding the screening strategies and medicinal chemistry efforts to improve potency.

Condensate characterization

To understand the condensate, one must characterize, to the greatest extent possible, the community of biomolecules which comprise it. This information can be leveraged to characterize the condensatopathy, to select a suitable HTS phenotypic assay to identify c-mods and to interrogate the mechanism of action of c-mods. The compositional analysis can be performed via subcellular proteomic and transcriptomic analysis 177 (Box 2 ) or multiplexed imaging methods 178 . Due to the compositional complexity of condensates and the various parallel modes of regulation of phase behaviour, disentangling the contributions of specific components to the phenotype and behaviour of a condensate in living cells remains challenging. Typically, live-cell fluorescence confocal microscopy is used to characterize the localization and emergent properties of condensates (reviewed in ref. 179 ).
Condensates with sizes below the limit of detection of conventional confocal microscopes 136 , 180 , 181 may be visualized, albeit at the expense of speed and throughput, with the advanced techniques 182 described in Box 3 . In-cell phase boundaries of biomolecular condensates can be quantified by correlating the variable levels of expression of a fluorescently tagged marker protein with the formation of condensates 58 . This analysis can measure the effects of disease-associated mutations 183 , or identify co-scaffold interdependencies 64 , 184 . Furthermore, it can be readily implemented into the HTS pipeline to determine the identity of c-mods and obtain mechanistic insights. Complementary to cellular assays, in vitro reconstituted condensates that recapitulate a subset of relevant features of the biological condensate can be used to address more specific questions related to the nature of interactions that drive condensation or are affected by c-mods. For example, monitoring the shift in the phase boundary and changes in emergent biophysical properties (such as number, size, morphology, material properties, dynamics and composition) as a function of various parameters (such as ionic strength, pH, ligand concentration and temperature) could identify the most promising points for therapeutic intervention inside the macromolecular network (reviewed in ref. 179 ). This informs strategies for c-mod design (for example, a protein–ligand interaction, a hydrophobic-driven interaction or an electrostatically driven interaction) and hit optimization. Methods for probing material properties, such as viscosity, surface tension and component dynamics (for example, diffusion and mobile fraction), can be applied in vitro and in live cells 40 , 44 , 185 (Box 4 ); this information can be leveraged as a read-out to detect a change in condensate milieu, and/or to gain insights into the mechanisms driving a condensatopathy or a c-mod. 
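The phase-boundary quantification described above — correlating per-cell marker expression with condensate presence — can be reduced to a very simple estimator. The sketch below, on synthetic data, takes the midpoint between the brightest condensate-free cell and the dimmest condensate-positive cell as an apparent in-cell saturation concentration; a real analysis would fit many cells and account for noise and segmentation error.

```python
# Sketch: crude estimate of an apparent in-cell saturation concentration
# (c_sat) from per-cell marker expression vs. condensate presence.
def apparent_csat(cells):
    """cells: list of (expression, has_condensate) tuples.
    Returns the midpoint between the brightest condensate-free cell and
    the dimmest condensate-positive cell, a crude phase-boundary estimate."""
    neg = [e for e, c in cells if not c]
    pos = [e for e, c in cells if c]
    return (max(neg) + min(pos)) / 2

# Synthetic single-cell data (arbitrary expression units).
cells = [(0.2, False), (0.5, False), (0.9, False),
         (1.4, True), (2.0, True), (3.1, True)]
print(apparent_csat(cells))  # 1.15
```

Shifts in this apparent boundary under drug treatment, or between wild-type and mutant constructs, would read out exactly the mutation effects and co-scaffold interdependencies discussed above.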
For example, the condensate hypothesis for the NUP98–HOXA9 condensatopathy is supported by the correlation between the phenotypic observation of aberrant nuclear puncta and transcriptional reprogramming of the HOX cluster and p53 that leads to leukaemogenesis in primary cells and mouse models 168 , 169 . The composition of these aberrant condensates has been characterized by proximity labelling proteomic assays 186 (Box 2 ), as well as Co-IP and ChIP-seq 169 assays showing an expansive network of interactions with chromatin remodelling factors 168 , 169 . NUP98–HOXA9 within nuclear condensates recovers from photobleaching in the order of seconds 168 , indicating dynamic on/off binding kinetics and rapid diffusion with and within the extended condensate network. Importantly, these disease-relevant functions depend on the ability of NUP98–HOXA9 to form condensates via FG motif multivalency, and the ability to nucleate condensates by binding DNA through the HOXA9 folded domain 168 .

Box 2 Methods for proximity labelling to map condensate composition

Approaches such as affinity purification coupled to mass spectrometry-based proteomics are traditionally used to map stable protein networks 177 , 257 , 258 . However, these methods fail to detect weak and transient interactions that are integral to networks within biomolecular condensates because material losses occur during cell lysis and subsequent washing steps. Chemical cross-linking followed by affinity purification overcomes some of these problems at the expense of increasing the rate of false positives 257 . In proximity labelling techniques, a ‘bait’ protein is fused to an enzyme that covalently modifies ‘prey’ proteins or nucleic acids in its vicinity 177 , thus providing a means to preserve sensitivity to labile interactions, while allowing for stringent washes and minimizing carry-over of non-specific binders. Three main strategies have been developed.
First, the engineered peroxidase system (APEX 259 , APEX2 (ref. 260 ) and HRP 261 ) biotinylates tyrosine residues of prey proteins, as well as nucleic acids, within a radius of about 10–20 nm, upon stimulation with peroxide. Second, biotin ligases (BioID 259 , BioID2 (ref. 262 ), BASU 263 and TurboID 264 ) create activated esters that covalently biotinylate lysine amines of proximal proteins. Last, hybridization-proximity labelling (HyPro) methods use digoxygenin-labelled antisense RNA hybridization probes, which bind to a target RNA in fixed cells, biotinylating proximal proteins and nucleic acids 265 . APEX labelling occurs on short timescales (<1 min) 257 , and therefore is ideal for probing dynamic and transient cellular processes. BioID labelling operates on longer time frames (~18 h), limiting the application to long-lived structures such as the interaction network of TDP43 aggregates 266 . In contrast to APEX, where long-term exposure to peroxide leads to cell toxicity, BioID is non-toxic, enabling in vivo measurements 267 . The HyPro method has the advantage of not requiring expression of an exogenous, modified target protein and, therefore, can be performed on unmodified cells, as close as possible to physiological conditions 265 . In all approaches, biotinylated biomolecules are enriched via affinity binding to streptavidin beads, purified and analysed by mass spectrometry (for proteins) and/or sequencing (for nucleic acids) to provide a comprehensive proteomic and transcriptomic interactome within the condensate. An in-depth comparison of the various protein–nucleic acid affinity approaches is presented in refs. 257 , 268 . Proper controls are essential to map condensate compositions, specifically to decipher whether the detected biomolecules are indeed localized to a specific condensate. 
Ideally, two independent measurements are performed, one with a condensate present and one without; for example, stress-inducing agents can be used to tune the formation of the stress granules. As a cautionary note, transfection of fusion proteins can lead to overexpression artefacts, mislocalizations and composition alterations; therefore, any engineered system should be validated with functional assays and independent confirmation of the identified components (for example, with immunofluorescence).

Box 3 Advanced microscopy techniques to study condensates

Some condensates, such as transcriptional condensates 181 , are very small and therefore hard to detect with conventional confocal microscopes. In recent years, various advanced imaging technologies have been developed to study condensates across scales 182 . Specifically, super-resolution imaging techniques such as time-correlated photoactivated localization microscopy (tcPALM) 181 , live-cell single particle tracking 138 , 269 and stimulated emission depletion (STED) can achieve a spatial resolution of tens of nanometres and were successfully used to gain deep understanding into the transcriptional condensate and heterochromatin biophysics 165 . For transcriptional condensate analysis, it might be of great importance to measure whether a transcription factor such as MYC partitions into a MED1 condensate 136 . For such a co-localization analysis, STED is a well-suited tool, because it offers the optimal combination of spatial super-resolution and optimal alignment of the different colour channels. Structured illumination microscopy bridges the gap between advanced super-resolution techniques such as STED and PALM and conventional confocal microscopy, providing resolutions down to 60 nm (ref. 270 ). Moreover, dynamic live-cell imaging can be performed without the need for dedicated sample modifications.
By trading resolution for speed and reducing phototoxicity, lattice-light sheet microscopy (LLSM) emerged as a powerful tool for long-term imaging of dynamic objects in living cells 271 , 272 and even to visualize condensates in Drosophila embryos 79 . Although these structured illumination microscopy approaches provided important insights on the dynamics of condensates, automated screening remains challenging because minor alignment errors such as a small tilt of the sample can be detrimental to the image quality. Moreover, the reconstruction algorithms can induce imaging artefacts. LLSM is a super-resolution method well poised for the study of the dynamic properties of condensates and the early steps of protein aggregation, due to its high spatial and temporal resolution in combination with low phototoxicity 273 . LLSM revealed that even in extended embryonic systems, the HP1A condensate can grow, fuse and dissolve 79 . Recently, light sheet microscopes have been successfully re-engineered into an inverted configuration 274 and LLSM versions are commercially available, making this exciting technology applicable for multiwell plate configurations, which expands the usability for pharma applications. In general, all of these techniques provide limited applicability for large-scale screening purposes. With dedicated optimizations and custom software development, up to 1,000 compounds can be evaluated. Despite their limited throughput, these super-resolution techniques serve as excellent choices for in-depth characterization of the model systems, hit follow-up and investigation of mechanisms of action. Super-resolution add-ons for confocal systems 275 , 276 (which we refer to as enhanced resolution) are a very good option to improve the resolution for condensate screening applications, as they are commercially available and can be used in combination with multiwell plate formats. 
Additionally, no sample modification is required and software integration for automation is provided. These enhanced resolution systems can be reasonably set up to screen up to tens of thousands of compounds. Phenotypic screening of small condensates, with sizes below the visible light diffraction limit, is challenging due to the requirement for specialized, low(er) throughput instrumentation. Advances in machine learning can be leveraged to compensate for some of these shortcomings 277 , 278 . For example, machine learning algorithms can be trained on high-quality, low-throughput, super-resolution images (such as STED) to enhance the data quality of conventional microscopy images 279 , enabling high-throughput screening (HTS) of small condensates. Machine learning can also be used to optimize and integrate data analysis from large imaging data sets with data from orthogonal validation assays, in order to expedite the drug discovery process.

Box 4 Methods for probing material properties of condensates

Fluorescence recovery after photobleaching (FRAP) measures biomolecular diffusion inside condensates, reporting on a convolution between local viscosity and binding kinetics of the condensate biomarker with the macromolecular network 34 , 280 . FRAP assays are compatible with most cell lines expressing genetically encoded fluorescent tags; limitations include time-dependent changes in material properties (for example, ageing), long acquisition times and limited throughput. Molecular rotors are fluorescent dyes that are sensitive to environmental changes; they can be conjugated to genetically encoded tags (for example, HaloTag) fused to the condensate biomarker 281 to sense local changes inside specific condensates. Theoretically, this method could be adapted for high-throughput screening (HTS) applications. Time-lapse imaging can track condensate fusion and isotropic growth, which reports on the ratio between viscosity and surface tension 12 .
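Assuming a single-exponential recovery model I(t) = A·(1 − exp(−t/τ)), a FRAP curve can be summarized by its plateau (the mobile fraction) and its half-time. The sketch below uses synthetic, noise-free data; real analyses typically use non-linear least-squares fitting and correct for acquisition photobleaching.

```python
import math

# Sketch of a FRAP read-out, assuming single-exponential recovery
# I(t) = A * (1 - exp(-t / tau)), normalized so 0 = just after bleach
# and 1 = pre-bleach intensity. Data below are synthetic and noise-free.
def fit_frap(times, intensities):
    """Estimate mobile fraction A (plateau) and recovery time tau."""
    A = intensities[-1]                 # plateau ~ mobile fraction
    half = A / 2
    # first time point at or above half-recovery
    t_half = next(t for t, i in zip(times, intensities) if i >= half)
    tau = t_half / math.log(2)          # t_half = tau * ln 2
    return A, tau

tau_true, A_true = 2.0, 0.8
times = [i * 0.1 for i in range(200)]   # 0 to 19.9 s
intensities = [A_true * (1 - math.exp(-t / tau_true)) for t in times]
A_est, tau_est = fit_frap(times, intensities)
print(round(A_est, 3), round(tau_est, 2))  # 0.8 2.02
```

Note that, as stated above, the recovered τ convolves local viscosity with on/off binding kinetics, so it reports on the condensate milieu rather than on free diffusion alone.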
All of these methods are compatible with in vitro reconstituted systems and live cells. Fluorescence lifetime imaging (FLIM) can visualize a change in intracellular material properties 282 . A well automatized FLIM set-up (where the acquisition time for a full field of view is ~5 s) could outcompete FRAP in terms of throughput; the non-destructive nature of FLIM is compatible with imaging in live systems (for example, developing or ageing animals) over a much longer time course. Stimulated Raman scattering (SRS) microscopy-based methods are a set of powerful label-free techniques, which have been successfully applied to detect peripheral nerve degeneration and tissue degeneration of the spinal cord in mouse models of amyotrophic lateral sclerosis (ALS) 283 , 284 . They were also used in the quantification and spectral analysis of native polyQ aggregates with subcellular resolution in live cells 285 . SRS microscopy is commercially available and can be coupled to other imaging modalities such as confocal fluorescence microscopy and non-linear imaging, enabling different cellular components such as aggregates and lipid droplets to be separated spectrally. Active physical perturbations such as defined temperature changes 286 , 287 or advanced laser-induced hydrodynamic flow changes 49 , 288 provided detailed insights into the biophysical nature of condensates in living organisms. Chemical perturbations that globally affect certain classes of interactions, such as 1,6-hexanediol (hydrophobic interactions) 79 , 289 , salt (electrostatic interactions) 286 and disassembly drugs 290 , can be used in parallel to a screen; these provide insights into the extent to which different types of interactions contribute to condensate stability, and a means of detecting changes in material properties of condensates as a function of drug treatment. 
As for practical considerations when applying global perturbations, short incubation times are recommended so that instantaneous physicochemical responses can be probed before toxic side effects develop. The latter should be monitored by using cell viability or toxicity assays. Although some of the biophysical perturbations described above might not be suited for large-scale screening campaigns, they can be utilized to distinguish between aggregates (irreversible) and condensates (reversible), which is essential for the characterization of the target before starting an HTS campaign.

Primary screens

At the onset of the HTS campaign for potential c-mods, one should have the following: a validated condensate hypothesis; a robust and scalable set of phenotypic and disease-relevant functional assays; and an assay that enables investigation of structure–activity relationships for hit-to-lead optimization. C-mods are selected based on phenotypic HTS, which monitors a combination of emergent properties (for example, size, number and morphology) and/or co-localization of selected markers. Considerations for the selection of the appropriate HTS set-up that balances speed, throughput and resolution/sensitivity as appropriate for the target condensate are discussed below. High-content imaging assays can be optimized for screening of large libraries (~10^6 compounds), while monitoring the optical phenotype of condensates in live or fixed cells 31 , 32 , 187 , and in vitro reconstituted condensates 188 with sizes above the diffraction limit. This approach has been used to identify hits that inhibit stress-induced aggregation of TDP43 (ref. 187 ), stress granule formation 31 , 32 and p53–Mdm2 interaction 188 . The imaging technique depends on the size of the condensate. Generally, increasing the optical resolution and signal sensitivity comes at the expense of throughput.
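At its core, the optical phenotype read-out of such a screen reduces to counting and sizing segmented objects per cell. The sketch below counts "puncta" as 4-connected components in a toy thresholded mask; a production HTS pipeline would instead use a dedicated image-analysis stack (e.g. CellProfiler or scikit-image) with per-well statistics.

```python
# Sketch of the core of a phenotypic image read-out: count "puncta"
# (connected foreground components) in a thresholded binary image.
# The tiny grid and 4-connectivity are for illustration only.
def count_puncta(mask):
    """Count 4-connected components of 1s in a 2D binary grid."""
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    n = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                n += 1                      # new component found
                stack = [(r, c)]
                while stack:                # flood fill its pixels
                    y, x = stack.pop()
                    if 0 <= y < rows and 0 <= x < cols and mask[y][x] and not seen[y][x]:
                        seen[y][x] = True
                        stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return n

mask = [
    [1, 1, 0, 0, 0],
    [0, 0, 0, 1, 1],
    [0, 0, 0, 0, 0],
    [1, 0, 0, 0, 1],
]
print(count_puncta(mask))  # 4
```

A punctate-to-diffuse shift under treatment would show up as this count dropping towards zero per cell, whereas condensate coarsening would show fewer but larger components.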
Initial target assessment and hit follow-up studies of 1,000 compounds might be achievable with the advanced techniques presented in Box 3 . Alternatively, the model system can be altered to artificially increase the size of the condensate, by using optogenetics 58 , 189 , 190 and repeat operon arrays 30 , 136 , 138 , 191 , 192 . These engineered systems and practical considerations for their selection are reviewed in ref. 179 . Monitoring the material properties of condensates can also identify c-mods (Box 4 ). These methods are amenable to multiplexing and HTS applications, with libraries of up to 10^4–10^5 compounds, depending on the technique. For the example of repairing the NUP98–HOXA9 condensatopathy, a phenotypic primary HTS would identify compounds that change the optical phenotype of the NUP98–HOXA9 nuclear puncta (Fig. 4b ). A change from punctate to diffuse staining of NUP98–HOXA9 could indicate c-mods that dissolve the condensates, whereas a change to fewer, larger condensates could indicate c-mods that inhibit binding to chromatin 168 . Similar phenotypic screens have been performed to identify small molecules that prevent oxidative stress-induced TDP43 and G3BP1 cytoplasmic puncta formation in PC12 (ref. 187 ), as well as HEK293xT and neural precursor cells 31 , respectively; these serve as model systems for ALS.

Secondary screens and hit optimization

The primary hits from the HTS are filtered based on cytotoxicity and condensate selectivity (for example, using a phenotypic screen against a panel of unrelated condensates), validated based on disease-relevant functional assays (for example, induced pluripotent stem cell-derived or patient-derived cells) and their mechanism of action characterized via biophysical measurements. Optimization of drug partitioning inside a target condensate (Box 5 ) provides the opportunity to improve the therapeutic index by increasing exposure of a drug to its target and minimizing off-target effects.
In our NUP98–HOXA9 condensatopathy example (Fig. 4b ), the primary screen hits would be evaluated and further optimized in cell proliferation/transformation (for example, proliferation rates and colony formation) and/or transcriptional reprogramming (for example, qRT-PCR and ChIP-seq) assays, followed by validation in animal models (for example, tumour growth and survival) 169 . Leptomycin B (Table 3 ), a well-characterized inhibitor of the nucleocytoplasmic transporter CRM1, exhibited c-mod properties when it inhibited formation of NUP98–HOXA9 aberrant condensates, and transcriptional reprogramming 193 . We hypothesize that the c-mod acts by inhibiting CRM1-dependent nucleation of NUP98–HOXA9 condensates on chromatin 168 , 193 , 194 .

Box 5 Methods for probing small-molecule partitioning in cells

Detection of small molecules inside condensates remains challenging. Fluorescence confocal microscopy allows for visualization of small molecules that are inherently fluorescent, such as mitoxantrone, or require custom modifications as applied for cisplatin 291 and tamoxifen 292 , but is not generally applicable, as most drug-like molecules are not inherently fluorescent. In vitro reconstituted systems are amenable to physical separation of the two phases (light and dense) by centrifugation, providing direct access to measuring concentrations of the small molecule using well-established, scalable analytics methods to extract partition coefficients. Reconstitution of condensates from whole cell lysates 293 derived from disease-relevant cell models more closely mimics the complexity of cellular condensates and could be used to directly measure condensate-modifying therapeutic (c-mod) partitioning as described above. Measurements of compound partitioning into in vitro reconstituted and/or live-cell condensates can be integrated in an iterative fashion in the medicinal chemistry optimization pipeline, in combination with the functional assays described above.
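Given dense- and light-phase concentrations measured after centrifugal separation, the partition coefficient, and from it the fraction of compound sequestered in the condensed phase, follow directly. The concentrations and volume fraction below are illustrative assumptions, not measured values.

```python
# Sketch: partition coefficient of a compound between the dense and light
# phases of an in vitro reconstituted condensate after centrifugation.
def partition_coefficient(c_dense, c_light):
    """K_p = [drug]_dense / [drug]_light; K_p > 1 means enrichment
    inside the condensate."""
    return c_dense / c_light

def fraction_in_dense(kp, vol_frac_dense):
    """Fraction of total compound residing in the dense phase, by mass
    balance: K_p * phi / (K_p * phi + (1 - phi))."""
    return kp * vol_frac_dense / (kp * vol_frac_dense + (1 - vol_frac_dense))

# e.g. 50 uM measured in the dense phase vs. 2 uM in the light phase,
# with the dense phase occupying 1% of the sample volume (illustrative).
kp = partition_coefficient(50.0, 2.0)
print(kp, round(fraction_in_dense(kp, 0.01), 3))  # 25.0 0.202
```

The mass-balance step makes a practical point: even a 25-fold enrichment places only a fifth of the compound inside a condensate occupying 1% of the volume, which is why partitioning must be optimized jointly with potency.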
Stimulated Raman scattering (SRS) microscopy is an imaging method based on contrast generated by Raman-active vibrational frequency of a given chemical bond that allows visualization of label-free small molecules, as well as distinguishing between specific classes of biomolecules and types of protein secondary structure 294 . This imaging technique can be leveraged to quantify drug partitioning in condensates inside living cells. However, the generated scattering signals are inherently weak, and so only a well-tuned system in combination with Raman-active compounds, such as drugs containing an alkyne moiety (for example, ponatinib), might be detected with resolutions and sensitivities 295 that are required to localize the compound within a condensate inside a cell. Nanoscopic subcellular ion beam imaging provides a completely new avenue to visualize the 3D volumetric distributions of genomic regions, RNA transcripts and proteins with 5-nm axial resolution 296 . It was used to monitor the cisplatin distribution with subcellular resolution in cancer cells 297 . The technology requires sample fixation, which locks biological molecules in space; antibody staining is used to visualize co-localizing proteins. However, caution must be exercised as fixation might cause partitioning artefacts by altering the biophysical properties of the condensates. Therefore, either partition experiments should be performed in a live-cell setting or all biological and drug-like molecules need to be (chemically) fixed in space.

Outlook

Biomolecular condensates are emerging as attractive novel targets for drug discovery. Many proteins and nucleic acids of high therapeutic interest, including numerous targets previously considered ‘undruggable’, operate within condensates. Importantly, there is emerging evidence that condensates are ‘druggable’. First, some approved drugs have been shown to partition into condensates 30 .
Second, high-content cellular screening has identified drug-like molecules that modulate condensate behaviours in a selective manner 31 , 148 , 187 . Third, it is now understood that PTMs can strongly regulate the formation, behaviour and dissolution of condensates. Taken together, it is tempting to speculate that many approved drugs may, in part, be acting as c-mods, for example by exerting a portion of their pharmacological benefit through the modification of disease-relevant condensates. We hypothesize that condensates could represent nodes of misregulation in polygenic diseases. For example, mutations in binding motifs within IDRs (such as degrons, nuclear export and import signals) can alter regions of transient structure 120 and/or the interactome of the affected protein 126 . Consequently, changes in the IDR interactome can lead to alterations in the condensate scaffolding, composition, dynamics, material properties and functional output. Alternatively, the mutations can lie outside canonical binding regions, where they might affect condensation by changing the physicochemical properties and valency. This paradigm might explain the pathophysiology of certain diseases that exhibit stereotyped phenotypes but complex genetic and environmental causes. Each individual type of ALS/FTD-associated genetic mutation accounts for a relatively small number of patients. However, despite differences in the cause of onset, all ALS subtypes share condensate dysfunction as the common denominator, namely formation and persistence of cytoplasmic TDP43 granules in affected neurons 23 . Targeting the condensate rather than individual mutations within that molecular community might provide an avenue to deliver broader therapies to a larger patient population. It is also possible that the dysfunctional cellular processes observed in some cancers are driven by mutations in diverse genes, all of which form a single condensate. 
This condensate may integrate oncogenic signals into a single output, such as a high proliferative capacity or sustained signalling. The high complexity and dynamic nature of condensates raise several unique challenges and opportunities for c-mod discovery. For example, c-mods can exhibit unusual dose–response behaviour, which can vary depending on the experimental conditions. This behaviour, however, could provide insights into the mechanism of action of the c-mod, such as preferential engagement with one of the phases 195 or engagements of multiple targets 196 . For this reason, a range of biologically relevant assay time frames, windows of drug treatment and phenotypic responses must be measured. In addition, tight control over assay conditions must be maintained to achieve the necessary assay reproducibility required for HTS and medicinal chemistry. Appropriate biochemical and disease-relevant functional read-outs are required to demonstrate clear correlations with the observed condensate phenotypes. Because of the complexity of condensates, and the nature of the forces that lead to condensate formation, combinations of drugs that engage multiple components of the molecular community may be of particular importance. Furthermore, a c-mod may be envisioned that binds weakly to multiple sites on one protein or to many related proteins; such monovalent compounds, binding in a super-stoichiometric fashion, are expected to reduce the valency on the scaffolds, thereby destabilizing the condensate. To stabilize a conformational state that promotes interaction, one could adopt a molecular glue strategy to force biomolecules to remain in an associated or proximal state. Many questions and challenges are topics of active investigation by the community. For example, how do we demonstrate causality between disease and condensatopathies? How do we identify ‘hub’ condensatopathies for polygenic diseases? 
What are the different signalling and regulatory pathways that are dysregulated via any one target condensate? What are the most informative components for understanding the function of the condensate and the effects of c-mods? To address each of these challenges, the drug-hunter must understand the individual components of a target condensate as well as the collective behaviour of the molecular community. However, this remains challenging, both due to technological limitations of spatial and temporal resolution as well as biological complexity (for example, fluctuations in composition due to stochastic variability in protein expression or differences in cell-cycle state). The more complete the condensate map, the more opportunities for a successful drug discovery programme. The complexity of the condensate environment requires creative medicinal chemistry approaches to develop c-mods. For example, knowledge about drugging individual targets that localize to or regulate a condensate can be leveraged to create combination therapies or multifunctional drugs. This information may, in turn, address other challenges, including how to mitigate toxicity by, for example, avoiding inhibition of components that do not exclusively function within the target condensate, or overcoming drug resistance. A promising result in this direction has been reported for multiple myeloma, where patients with high expression of the protein SRC3 experience poor outcomes. Liu et al. 197 discovered that resistance to the proteasome inhibitor bortezomib results from interactions between steroid receptor coactivator SRC3 and the histone methyltransferase NSD2, leading to the stabilization and phase separation of SRC3. A small-molecule compound, SI-2 (Table 3 ), disrupts the interaction between SRC3 and NSD2, eliminating the condensate and restoring the activity of bortezomib 197 . 
Incorporating a ‘condensate perspective’ into the drug discovery process holds significant potential to create medicines that operate through fundamentally different mechanisms. However, to capitalize on the insights that are emerging in the condensate field, it is clear that a novel approach is required. We suggest that successful discovery of c-mods will result from integrating deep understanding of condensate properties and function, pragmatic drug discovery expertise, and robust commitment to the development and application of suitable technologies to measure emergent properties of condensates, to characterize the broad effects of c-mods on condensate behaviour and function, and to further understand the thermodynamics and kinetics of these interactions at a molecular level. Synergy between efforts in the biotechnology and pharmaceutical industries and academia, and expertise from disparate fields, has been and will continue to be the key for success in developing new medicines by targeting biomolecular condensates.
A new perspective published in Nature Reviews Drug Discovery examines the potential of biomolecular condensates to transform drug discovery. Condensates are membrane-less organelles that form dynamically throughout the cell via a process called phase separation. Over the last decade scientists have recognized the role of biomolecular condensates in cellular organization and disease, marking one of the most revolutionary areas of biology. In "Modulating biomolecular condensates: a novel approach to drug discovery," a Dewpoint Therapeutics perspective, the authors discuss the largely untapped opportunities for targeting biomolecular condensates to develop therapeutic agents for various diseases. "To our knowledge, this is the first time that a cohesive logic has been assembled outlining how a deep understanding of condensate biology can revolutionize the drug discovery process across therapeutic areas," commented Dr. Isaac Klein, Chief Scientific Officer at Dewpoint Therapeutics and corresponding author. The authors propose that condensate dysregulation may represent a node of disease origination in patients with different genetic background and environmental exposures, and that these nodes can be leveraged as drug targets. Classical drug discovery focuses on modifying the function of a single target biomolecule. By reimagining the drug target as the molecular community that resides within a condensate, researchers can modify the function of biological pathways and biomolecules that were previously considered "undruggable." Another promising aspect of condensate drug discovery is that by targeting a disease node, a single therapeutic agent might help treat a larger patient population. "Condensates are unlike anything seen before in drug discovery. Dewpoint is leading the understanding of condensates and the diverse ways new medicines could be developed to restore aberrant condensate function. 
"No one has previously published the potential of biomolecular condensates from a drug discovery perspective, and Dewpoint scientists are bringing forward a groundbreaking perspective to the field," commented Dr. Mark Murcko, Dewpoint Therapeutics Board and Scientific Advisory Board member, and co-corresponding author. The perspective summarizes the rules that underlie the formation, dissolution, and regulation of biomolecular condensates, which have emerged in the last decade. Based on these rules, the authors discuss how condensate dysfunction drives various diseases, including neurodegeneration, cancer, cardiomyopathy and viral infection. Klein comments, "Dewpoint has developed a platform and drug discovery pipeline that exploits cutting-edge technology and a deep understanding of condensate biology to discover novel condensate modifiers, or c-mods. These molecules have the ability to tackle the root cause of complex diseases and address previously undruggable targets." Condensate-targeted drug discovery was pioneered by Dewpoint Therapeutics, which became the first biotech company to enter the field in 2018. New rules for the design of condensate-modifying therapeutics continue to evolve alongside new discoveries in the field. Similarly, discovery pipelines are actively under development, bringing hope for new life-saving treatments for patients suffering from debilitating, incurable diseases.
10.1038/s41573-022-00505-4
Nano
Researchers improve the nanopore-based technology for detecting DNA molecules
Nature Nanotechnology DOI: 10.1038/nnano.2013.240 Journal information: Nature Nanotechnology
http://dx.doi.org/10.1038/nnano.2013.240
https://phys.org/news/2013-11-nanopore-based-technology-dna-molecules.html
Abstract Solid-state nanopores can act as single-molecule sensors and could potentially be used to rapidly sequence DNA molecules. However, nanopores are typically fabricated in insulating membranes that are as thick as 15 bases, which makes it difficult for the devices to read individual bases. Graphene is only 0.335 nm thick (equivalent to the spacing between two bases in a DNA chain) and could therefore provide a suitable membrane for sequencing applications. Here, we show that a solid-state nanopore can be integrated with a graphene nanoribbon transistor to create a sensor for DNA translocation. As DNA molecules move through the pore, the device can simultaneously measure drops in ionic current and changes in local voltage in the transistor, which can both be used to detect the molecules. We examine the correlation between these two signals and use the ionic current measurements as a real-time control of the graphene-based sensing device. Main Solid-state nanopores 1 can be used to sense 2 , 3 , 4 , 5 , 6 , 7 , 8 and manipulate 9 , 10 DNA molecules by threading the molecules through the pore under an applied potential and monitoring the ionic current passing through the pore. Although the size of the pore can be tuned 11 and the devices can be integrated with other single-molecule techniques 9 , 10 , 12 and detection mechanisms 13 , 14 , 15 , 16 , they have yet to deliver sequencing data. One reason for this is that the nanopores are typically fabricated in SiN x or SiO 2 membranes, which have thicknesses that are large compared to the size of a DNA base. Graphene 17 nanopores have recently been used as an alternative to traditional solid-state nanopores 18 , 19 , 20 . In these devices, graphene acts as the membrane material, which allows, in principle, single-base resolution in the ionic current due to its single-atom thickness. 
However, the relatively large dimensions of the graphene layer in this proposed geometry mean it cannot be used as a sensing electrode, and the reliance on ionic current measurements precludes multiplexing. A complementary approach is to integrate the standard, relatively thick SiN x membrane with a sensing electrode. A variety of sensing electrodes have been theoretically proposed for this purpose, including metallic tunnelling electrodes 21 , graphene nanogaps 22 and graphene nanoribbons (GNRs) 23 . The detection of single translocation events has also been demonstrated experimentally using metallic nanogaps and electrical tunnelling measurements 24 . Although tunnelling detection could in principle be used to achieve single base resolution, the alignment of the active tunnelling area with the nanopore remains a significant issue. Alternatively, silicon nanowire field-effect transistors have previously been combined with a solid-state nanopore to detect DNA translocation 16 . Nanopore with a GNR device To fabricate our devices ( Fig. 1 ) we began by creating a 20-nm-thick SiN x membrane ( ∼ 20 µm × 20 µm in size), followed by the transfer of a chemical vapour deposition (CVD)-grown graphene monolayer ( Fig. 2b,e ). Graphene nanoribbons were defined using electron-beam lithography (EBL) and oxygen reactive ion etching (RIE). Next, electrical contacts were fabricated by EBL, electron-beam evaporation of a Cr (5 nm)/Au (50 nm) metal double layer and lift-off ( Fig. 2c,f ). To minimize the ionic cross-conductance we carried out atomic layer deposition (ALD) of 5 nm Al 2 O 3 , which was expected to isolate the electrodes from the electrolyte solution 16 . Finally, nanopore drilling was performed in a transmission electron microscope (TEM) working in scanning mode 25 , as shown in Fig. 2d,h (for more details see ‘Device fabrication’ section of the Methods ). The resulting structure is a GNR defined on the SiN x membrane with a nanopore located in its centre. 
Figure 1: Schematics and characterization of the GNR transistor–nanopore measuring set-up. a, Schematic of the set-up (side view). A single DNA molecule is translocating through a nanopore fabricated in a SiN x membrane. b, Artistic representation of the device. c, Photograph of the fluidic cell. d, I–V G characteristics of a liquid-gated GNR with a nanopore in 10 mM KCl. Gating is performed by changing the transmembrane voltage while V sd = 5 mV. e, I–V characteristic of a 10 nm nanopore in 10 mM KCl buffer after connecting the GNR. Dots are experimental points, and the continuous line is the fit from which the value of the conductance is extrapolated. f, I–V characteristic of a GNR in 10 mM KCl (same device as in e). Dots are experimental points, and the continuous line is the fit from which the value of the resistance is extrapolated. R wet means in buffer conditions as opposed to dry conditions. Figure 2: Fabrication of a solid-state nanopore with a GNR transistor. a–d, Schematics of device fabrication steps: fabrication of SiN x membrane (a); transfer of CVD graphene monolayer (b); graphene nanoribbon patterning and electrode fabrication (c); pore drilling (d). e, Optical micrograph of a SiN x membrane with a transferred CVD graphene monolayer on top. The dashed line highlights the transferred CVD graphene monolayer. Image dimensions, 250 µm × 250 µm. f, Optical micrograph of Cr/Au electrodes contacting four GNRs on the SiN x membrane. From four GNRs we select one for drilling. Image dimensions, 120 µm × 120 µm. g, TEM micrographs of a GNR before (left) and after (right) drilling. h, TEM image of a nanopore (same device as in Fig. 1e). To perform device characterization and experiments, we placed our chips in a custom-made microfluidic chamber ( Fig.
1c ) in order to carry out simultaneous measurements of the current flowing through the GNR and the ionic current flowing through the nanopore (for more details see ‘Experimental set-up’ section of the Methods ). We performed our experiments in three different solutions: buffered 1 M KCl (10 mM Tris, 1 mM EDTA, pH 7.4), buffered 10 mM KCl (10 mM Tris, 1 mM EDTA, pH 7.4) and salt gradient conditions (10 mM KCl in the cis chamber, 100 mM KCl, 10 mM Tris, 1 mM EDTA, pH 7.4 in the trans chamber). The current–voltage ( I–V ) characteristics of the nanopores and GNRs were acquired before the translocation experiments. To evaluate the influence of the presence of the GNR on the ionic I–V characteristics of the nanopore, we first characterized the pore with the GNR disconnected from the instrumentation. We then connected the GNR and repeated the ionic current measurement. We proceeded with measurements only using pores showing linear I–V characteristics in both cases, which indicated that the integration of the GNR with the pore had not resulted in rectifying behaviour. In Fig. 1e we show a typical nanopore I–V characteristic in 10 mM KCl with the GNR connected to the instrumentation. The total noise level usually increased considerably when connecting the GNR to the instrumentation, from typically 40 pA (root mean square, r.m.s.) to 100 pA r.m.s. in 10 mM KCl ( Supplementary Fig. 5 ). In the next characterization step, we measured the resistance of the GNR. Figure 1f presents measurements performed on a GNR (with a resistance of 122 kΩ) obtained in a 10 mM KCl solution. The resistance of the same GNR in air was 275 kΩ. We attribute this decrease in resistance to doping due to the high ionic strength of our solution 26 . To exclude the possibility of leakage through the ionic solution and to verify the quality of the passivation layer, we analysed the ionic current measured as a function of the ionic bias in samples without a drilled pore ( Supplementary Fig. 6 ). 
This allowed us to quantify the current (measured by an Axopatch patch clamp amplifier) originating from sources other than the pore, that is, the current leaking from the graphene. In the measurement displayed in Supplementary Fig. 6 , V p was swept from −400 mV to 400 mV, a range that corresponds to the range used in translocation experiments. A maximum current of 20 pA was measured for V p = 400 mV, setting a minimum value for the oxide resistance ( R oxide of Fig. 5 and Supplementary Fig. 7 ) of more than 10 GΩ. This means that the Al 2 O 3 passivation layer on the GNRs and the electrodes isolates the GNR device from the solution. To exclude the possibility of leakage through the ionic solution in devices with a pore (the ones used in the experiments) and to verify the quality of the passivation layer, we performed a resistance measurement between two control electrodes placed in the vicinity of the GNR. In most samples we obtained resistances in the range of 1–10 GΩ, consistent with the results in Supplementary Fig. 6 . Devices with resistance lower than 100 MΩ measured between disconnected electrodes were discarded at this point. We finally added DNA into the cis chamber. For translocation experiments, a bias voltage in the 100–400 mV range was applied to the Ag/AgCl electrodes with the bias voltage across the GNR set to 20–100 mV. After some time, we began to observe events ( Fig. 3a ). The applied bias was always positive for devices with the graphene transistor placed on the trans side of the membrane. A gallery of typical events is shown in Supplementary Fig. 4 . In our experiments, the passage of the molecule through the pore results in a drop in ionic current. This is considered usual in SiN x at high ionic strengths (1 M KCl), although for lower salt concentrations current increases are expected 27 , 28 . The pores used in our experiments were drilled in a SiN x membrane onto which a layer of Al 2 O 3 had been deposited. 
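The quoted lower bound on the oxide resistance follows from Ohm's law applied at the worst-case point of this control sweep. A quick numerical check with the values above (the mV-scale potential change and nA-scale event amplitudes used for comparison are figures quoted elsewhere in the text):

```python
# Ohm's-law bound on the passivation resistance, from the no-pore control:
V_p = 0.400        # volts, largest applied bias in the sweep
I_leak = 20e-12    # amps, largest current measured without a pore
R_oxide_min = V_p / I_leak
print(f"R_oxide >= {R_oxide_min / 1e9:.0f} GOhm")  # comfortably above 10 GOhm

# With ~16 mV potential changes during translocation, direct leakage through
# such a resistance is sub-pA -- far below the nA-scale graphene events, so
# conduction through the oxide cannot account for them.
print(f"leakage at 16 mV: {0.016 / R_oxide_min * 1e12:.1f} pA")
```
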
The presence of the thin alumina layer on the walls of the pore changes the surface charge from negative to positive, allowing current drops to be observed even at very low ionic strengths (10 mM KCl) 29 , 30 . Interestingly, we observed both dips and spikes in the electrical current flowing through the GNR. This behaviour is compatible with the ambipolar nature of graphene. Figure 3: Simultaneous detection of DNA translocations in ionic and graphene current. a, Simultaneously recorded ionic current and electrical current flowing through the GNR during translocations of pNEB DNA in 10 mM KCl (transmembrane voltage, 200 mV; graphene source–drain voltage, 20 mV). Ionic current is displayed in red and graphene current in blue. Green lines indicate the position of the correlated events in the ionic and graphene current. b, Zoom-in view of a single correlated event. c, Scatter plot of the events detected in the ionic current. d, Scatter plot of the events detected in the graphene current. Correlated events are represented by filled coloured circles, and uncorrelated events by partially transparent circles. DNA detection through a nanopore using GNRs In experiments using a 10 mM KCl buffer, we translocated circular plasmid pNEB, a 2,713-bp-long derivative of the pUC19 plasmid. The initial concentration in the cis chamber was 28 µM. Figure 3a shows part of the signal acquired in these experimental conditions. A total of 125 ionic events were detected and analysed in this experiment, 70 of which show correlations with changes in the graphene current. Figure 3b presents a magnified view of a single correlated event, an event where the translocation of the DNA molecule has generated a drop in the ionic current, together with a change in the graphene current. In this particular data set, events correspond to spikes in the graphene current.
Other experimental runs, performed in 10 mM and 1 M KCl buffer solutions, can also show current dips (for details see Supplementary Fig. 3 ). Translocation events in the ionic and graphene current were detected using custom event detection and classification software written in MATLAB. Event detection was run separately for the ionic current and the graphene current data, and events were identified by an abrupt change detection algorithm 31 , 32 . The dwell times and current drops were computed automatically and are represented in the scatter plots shown in Fig. 3c and d for ionic current and graphene current, respectively. A correlation analysis was then performed on the two event sets. To isolate the subset of correlated events, a cut-off delay time of 3 ms was chosen. This interval is arbitrary and was chosen to ensure the identification of all correlated events. A quantitative time delay analysis showed that all of the events considered as correlated have an actual delay time lower than 250 µs. We can isolate two types of events in the scatter plot of the events detected in the ionic current. One cluster of events has dwell times of between 200 µs and 2 ms and current drops between 0.3 nA and 0.5 nA. The second event cluster has dwell times between 40 µs and 200 µs and current drops between 0.1 nA and 0.5 nA. Most of the events belonging to the first cluster (there are only five exceptions in the discussed experiment) are correlated with an event detected in the graphene current. In contrast, almost no events belonging to the second cluster are correlated with events detected in the graphene current. We find two similar clusters in the scatter plot of the events detected in the graphene current. We can identify a cluster of mostly correlated events with dwell times between 200 µs and 3 ms and current increases between 5 nA and 10 nA. The uncorrelated events have much shorter dwell times (less than 50 µs). 
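The pipeline just described (independent abrupt-change detection in each channel, then pairing of events whose onsets fall within the 3 ms cut-off) can be sketched as follows. This is an illustrative threshold detector in Python, not the authors' MATLAB change-detection code; the 6σ threshold, function names, and toy traces are assumptions:

```python
import numpy as np

def detect_events(trace, fs, n_sigma=6.0):
    """Return onset times (s) of excursions beyond n_sigma robust
    standard deviations from the baseline."""
    baseline = np.median(trace)
    sigma = 1.4826 * np.median(np.abs(trace - baseline)) + 1e-12  # MAD estimate
    above = np.abs(trace - baseline) > n_sigma * sigma
    onsets = np.flatnonzero(np.diff(above.astype(int)) == 1) + 1
    return onsets / fs

def correlate_events(t_a, t_b, cutoff=3e-3):
    """Pair events from two channels whose onsets differ by less than cutoff."""
    pairs = []
    for t in t_a:
        if len(t_b):
            j = int(np.argmin(np.abs(t_b - t)))
            if abs(t_b[j] - t) < cutoff:
                pairs.append((t, t_b[j]))
    return pairs

# Toy traces at the experimental 100 kHz sampling rate: one translocation
# (correlated dip + spike, 100 us apart) and one ionic-only bumping event.
fs = 100e3
t = np.arange(int(0.1 * fs)) / fs
ionic = np.random.default_rng(1).normal(0.0, 0.01, t.size)  # nA-scale noise
graphene = np.zeros(t.size)
ionic[(t > 0.020) & (t < 0.021)] -= 0.4        # ionic dip...
graphene[(t > 0.0201) & (t < 0.0211)] += 8.0   # ...with matching graphene spike
ionic[(t > 0.060) & (t < 0.0601)] -= 0.2       # fast bumping event, ionic only

pairs = correlate_events(detect_events(ionic, fs), detect_events(graphene, fs))
print(f"{len(pairs)} correlated event(s)")
```
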
Under our experimental conditions, with a SiN x thickness of ∼ 20 nm and electric field strength inside the nanopore reaching 1 × 10 6 V m −1 , one can assume that the segment of the dsDNA with a persistence length of 50 nm translocates the pore in a fully extended form 33 . Although we used circular DNA plasmid in the supercoiled form, we assume that the supercoiled DNA form is (at least in the pore region) locally underwound, meaning that two dsDNA molecules are translocating the pore at the same time, fully extended. We attribute the correlated events detected in both channels to this type of translocation. Events in the graphene current can be explained by the model proposed in a recent work by Xie and colleagues 16 . In their publication they propose a DNA sensing device similar to the one presented in this Article, but a p-doped silicon nanowire field-effect transistor (FET) was used to sense the translocation of the DNA molecules 16 . Their proposed sensing mechanism does not rely on a direct interaction between the DNA and the sensor. Instead, the FET senses changes in the local electric potential in the proximity of the pore. In our case, this type of gating is compatible with both types of event because of the ambipolar nature of the charge carriers in single-layer graphene. In Fig. 1d we show the characterization of the conductivity of the GNR as a function of the voltage applied across the pore. Because the GNR is located in the trans chamber, the positive ionic electrode acts as the gate for the graphene device 34 . In the data presented here, the graphene current shows a minimum close to 0 V, corresponding to the Dirac point in graphene ( V D ). For voltages lower than V D , current is carried by holes and the material has p-type behaviour, whereas for higher voltages the current is carried by electrons and the material shows n-type behaviour. 
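The sign and size of the graphene events follow from this ambipolar picture together with the gating calibration of Fig. 1d. A toy sketch of both (the |V_G − V_D| conductance model, the example bias points, and the linear-response arithmetic are illustrative assumptions, not the authors' model):

```python
def dI_sign(v_gate, v_dirac, dv):
    """Sign of the graphene current change in a toy ambipolar model where
    conductance scales as |V_G - V_D|."""
    return 1 if abs(v_gate + dv - v_dirac) > abs(v_gate - v_dirac) else -1

# DNA entering the pore attenuates the local potential (dv < 0):
for label, v_gate in (("p-type", -0.05), ("n-type", 0.20)):
    s = dI_sign(v_gate, v_dirac=0.0, dv=-0.016)
    print(label, "spike" if s > 0 else "dip")

# Magnitude, linear response: ~10% current change per ~20 mV of gating
# (Fig. 1d), and 8 mV simulated attenuation per extended dsDNA segment,
# i.e. 16 mV for the two segments of the locally underwound plasmid.
gain = 0.10 / 0.020   # fractional current change per volt of gating
dV = 2 * 0.008        # volts
print(f"expected event amplitude: ~{gain * dV:.0%} of the baseline current")
```
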
Device-to-device variability can result in devices operating in one of these regimes for positive voltages between 100 mV and 400 mV, values that are required to obtain DNA translocations through the pore. A change in the electric field near the pore due to the presence of the DNA molecule would generate current increases in the graphene current when operating the device in the p-type regime and current decreases for n-type devices. Our results are quantitatively consistent with the results of the simulation in ref. 16 . Specifically, Fig. 3 shows increases in the graphene current of the order of 10%. This is compatible with a change in the gating potential of the order of 20 mV, as shown in Fig. 1d . To better understand the magnitude of the potential change due to the presence of DNA, we performed numerical simulations (detailed in the Supplementary Fig. 4 and shown in Supplementary Fig. 12c ). To simplify the estimation of the electric potential change due to the presence of DNA in the nanopore under our experimental conditions, we adopted the approach proposed by van Dorp and colleagues 33 and assumed an additive contribution from the two DNA segments. We found that one 50-nm-long linear dsDNA segment attenuates the potential at the position of the graphene device by 8 mV ( Supplementary Fig. 12 ), resulting in a potential change of Δ V = 16 mV due to translocation of the complete DNA plasmid. Because both the pore current and the change in electrostatic potential in the proximity of the pore are proportional to the effective cross-sectional area of the DNA present in the pore, one should expect a correlation between the amplitudes of ionic and graphene events. Event correlation graphs of current drops and dwell times of correlated events are shown in Fig. 4a and b, respectively.
In both cases, the observed relationship between the two quantities is almost linear; that is, long and deep events in the ionic current are correlated to long and large increases in the graphene current. Figure 4: Event correlation graphs display events detected both in ionic and graphene current. a, b, Event correlation graph of the amplitudes (a) and dwell times (b) of correlated events displayed in Fig. 3a,b. Non-correlated events, characterized by fast and shallow dips in the ionic current, could be attributed to DNA molecules bumping against the pore instead of translocating through it 35 . During such bumping events, the clogging of the pore is too weak to effectively change the electrostatic potential in the proximity of the pore and gate the GNR. Very fast events detected in the graphene current but not visible in the ionic current could be attributed to local changes in the electrical conductivity of the GNR due to the presence of charged molecules in the solution far from the pore. An electrical model of the entire device has been developed in Advanced Design Systems (ADS, Agilent Technologies). The full model is shown in Supplementary Fig. 7 , and the equivalent circuit is presented in Fig. 5 together with the circuit adapted from ref. 16 . The key block of the model is the parallel circuit composed of the resistance and the capacitance associated with the aluminium oxide deposited on the graphene ( R oxide and C oxide respectively). R oxide , in particular, needs to be as high as possible (in our case 10 GΩ) to ensure a good isolation of the graphene constriction from the electrolyte. The values of the various lumped elements governing the model for both ionic strengths of electrolyte are listed in Supplementary Table 1 . To exclude crosstalk as a possible cause for the changes in the graphene current we performed simulations as detailed in Supplementary Fig. 7 .
We found that simulated sudden changes in ionic current did not result in changes in the graphene current, leaving electrostatic gating as the only physical reason for the current drops (or increases) in I g ( Supplementary Fig. 8 ). Figure 5: Equivalent circuit diagrams for a GNR nanopore device. a, Circuit diagram of the device. b, Circuit diagram used for the crosstalk analysis performed in ADS software ( Supplementary Figs 7 and 8 ). The effects of gating have been neglected and the GNR is modelled with resistors. The passivation layer is represented by a parallel resistor ( R oxide ) and capacitance ( C oxide ). Conclusions Our experiments show a relatively low translocation rate, which can be attributed to several factors such as low analyte concentration, low transmembrane voltage, potential distribution around the pore ( Supplementary Fig. 11 ) and restricted options for nanopore cleaning and storage treatments. To increase the translocation rate, we adopted two published strategies. We operated the device under gradient conditions (as suggested by Wanunu et al. 36 ) and increased the transmembrane voltage bias from 200 mV to 400 mV. To preserve the quality of the Al 2 O 3 passivation layer, we maintained a low ionic strength of 10 mM on the membrane side housing the GNR device, now on the cis side, as illustrated in the schematics in Supplementary Fig. 10 . Note that these gradient conditions are opposite to the gradient conditions presented in the work by Xie et al. 16 , where the ionic strength gradient is used not to increase the capture rate but to enhance the changes in the local electric potential in the proximity of the pore. Indeed, under these conditions we obtained a fourfold increase in the translocation rate. At the same time, application of the higher transmembrane potential increased the current blockage amplitude and decreased the average translocation time, as shown in Supplementary Fig. 10c,d , respectively.
We observed 923 events in the graphene channel (41% correlated) and 532 events in the ionic channel (71% correlated), with average amplitudes of 5 nA and 1 nA, respectively. Although we achieved a lower translocation rate at 200 mV transmembrane bias, the DNA translocation time was longer, facilitating event analysis. We have demonstrated the operation of a novel device based on the integration of a GNR with a solid-state SiN x nanopore. The device was used to detect the translocation of pNEB DNA and λ-DNA molecules through a solid-state nanopore drilled in graphene and thin SiN x membrane. Experiments were carried out at different ionic strengths between 10 mM KCl and 1 M KCl, showing that the device can operate in a broad range of ionic strength conditions. Ionic current and graphene current were recorded simultaneously, and translocation events were detected in both channels. Events show clear correlation with the translocation events detected in the ionic pore current. This is the first time that translocation events of single DNA molecules have been detected by electrical means other than the ionic current itself, using a graphene-based device integrated on a solid-state nanopore. The use of a two-dimensional material, such as graphene, in the sensor device opens up the possibility of increasing the resolution of solid-state nanopore-based devices and the integration of more than one detector per chip. Moreover, graphene is not the only suitable material for this kind of sensor. Other two-dimensional materials, such as the recently investigated MoS 2 (refs 37 , 38 ), with semiconducting properties and a stronger field effect with respect to graphene are particularly interesting from this point of view. Methods Device fabrication We prepared our samples starting from boron-doped 380-µm-thick silicon chips. Chips were coated on both sides with a 60-nm-thick layer of SiO 2 and a 20 nm top layer of low-stress SiN x . 
The thickness of the SiN x was chosen for structural reasons, and the thickness of the SiO 2 was chosen to optimize the visibility of the graphene, enhancing the optical contrast with the bare substrate 39 ( Supplementary Fig. 1a ). A square window ( ∼ 500 µm × 500 µm) was opened in the SiO 2 /SiN x layer on the back side by EBL and RIE. Chips were then wet etched in KOH to remove the silicon and the frontside SiO 2 layer, resulting in a square SiN x membrane ( ∼ 20 µm × 20 µm, Fig. 2a ). Large-area graphene films were grown on copper foils 40 . The growth took place under the flow of a methane/argon/hydrogen reaction gas mixture at a temperature of 1,000 °C. At the end of the growth, the temperature was decreased rapidly and the gas flow turned off. The copper foils were then coated with poly(methylmethacrylate) (PMMA) and the copper etched away, resulting in a centimetre-scale graphene film ready to be transferred onto the chips with membranes ( Fig. 2b,e ). This graphene was single layer, continuous and had good electronic properties. We measured a room-temperature mobility on SiO 2 of µ = 2,700 cm 2 V −1 s −1 ( Supplementary Fig. 1b ). Before depositing graphene on the chip, the substrate was prepatterned using EBL, opening an ∼ 200 µm × 200 µm square in a methyl methacrylate (MMA)/PMMA electron-beam resist double layer. A subsequent liftoff process was used to remove the graphene layer everywhere but on the opening, that is, the region over and around the membrane, as shown in Fig. 2b 41 . GNRs were then patterned by EBL and oxygen RIE and contacted by EBL, followed by evaporation of a Cr 5 nm/Au 50 nm metal double layer and liftoff ( Fig. 2c,f ). Chips were then cleaved to a size of 8 mm × 4 mm to fit the TEM holder. A 15-min-long immersion in N -methyl-2-pyrrolidone (NMP) at 75 °C was carried out before Al 2 O 3 ALD deposition in order to functionalize the graphene surface and allow uniform adhesion of the thin oxide layer on it.
ALD was performed by cyclically pumping trimethyl aluminium (TMA) and water vapour (H 2 O) at a temperature of 200 °C. Precursors were diluted in a constant N 2 flux at a base pressure of 60 mbar. Al 2 O 3 was typically deposited up to a thickness of 5 nm before pore drilling. Electron beam drilling 25 was performed using a Philips JEOL 2200FS TEM. We found drilling in the STEM mode to be more suitable for our purposes. We operated the microscope at an acceleration voltage of 200 kV. Before being loaded into the microscope, the samples were cleaned at 400 °C under H 2 /Ar flux to remove any residual organic material left on the surface by the microfabrication process. Samples were first imaged in TEM mode to identify the GNR with the best characteristics. Nanoribbons with a width of ∼ 100 nm and well-defined edges were usually chosen for drilling (smaller nanoribbons are usually very highly resistive after drilling). After imaging, the microscope operating mode was switched to STEM. A mechanical stabilization period of at least 30 min was mandatory to avoid drifting of the sample during drilling and to drill round pores. A 2 nm beam in analytical magnification mode was chosen for drilling. To minimize the sample exposure time at the highly energetic beam, only a very fast (low resolution) image was taken in STEM mode. The raster scan was then stopped, and the beam blanked and driven via software to the place where the hole was to be drilled. We observed a typical drilling time of 3 min for a 20 nm, low-stress SiN x membrane, with graphene and 5 nm of Al 2 O 3 on top. The opening of the pore could be monitored in real time by recording the changes occurring in the Ronchigram displayed on the fluorescent screen. This allowed the user to stop the drilling as soon as the pore was open, avoiding additional exposure of the GNR to the electron beam. A TEM image of a GNR before and after drilling is shown in Fig. 
2g , and a magnified image of a typical pore is shown in Fig. 2h . Properties of working devices are listed in Supplementary Table 2 . Experimental set-up The chip was sealed by silicone o-rings between two PMMA chambers that work as reservoirs. The microfluidics support was equipped with six metallic connectors, connected to the instrumentation through a set of pins and cables. Ohmic contact between the metallic connectors on the microfluidics support and the device was obtained using drops of silver paste deposited on the gold pads on the chip. No particular chemical cleaning was performed on these nanopores. Standard wetting treatments like cleaning in piranha solution or soft O 2 plasma cleaning were not applicable on our samples because they would remove graphene. This could account for the observed low DNA molecule capture rate. After mounting the sample in the microfluidic set-up, the wetting of the pore was promoted by flushing with 50% water–50% ethanol solution for 8–12 h. We used a FEMTO 400 kHz current amplifier to preamplify the graphene current and an Axopatch 200B patch clamp amplifier (100 kHz acquisition rate, 10 kHz Bessel filter) for the ionic current. We used an NI PXI-4461 card for data digitalization and custom-made LabView software for data acquisition. We found a good signal-to-noise ratio by filtering at 10 kHz and sampling at 100 kHz. Chlorinated Ag/AgCl electrodes were inserted into both reservoirs and connected to the Axopatch 200B. COMSOL modelling Finite-element simulations of the electric potential distributions were performed using COMSOL Multiphysics v. 4.2. To simulate our experimental situation, we used the full Nernst–Planck equations for the ionic concentrations and Poisson's equation for the electrostatic potential. The system was analysed in the steady state by placing each chamber in contact with a bath maintained at specified concentrations and under different surface charge conditions (zero, −8 mC m −2 and 50 mC m −2 ). 
Translocating and trapped DNA were modelled as previously described 33 , 42 .
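As a purely illustrative stand-in for the boundary-value problem solved in COMSOL, a one-dimensional Laplace equation with fixed electrode potentials can be relaxed numerically. The grid size and boundary values below are arbitrary, and the sketch omits the Nernst–Planck ion transport and surface-charge conditions of the actual model entirely:

```python
import numpy as np

# 1D toy: relax d2V/dx2 = 0 between two electrodes by Jacobi iteration.
n = 101
V = np.zeros(n)
V[0], V[-1] = 0.0, 0.2          # volts: cis electrode grounded, trans at 200 mV
for _ in range(20000):          # iterate until the profile is effectively linear
    V[1:-1] = 0.5 * (V[:-2] + V[2:])
print(f"mid-plane potential: {V[n // 2]:.3f} V")
```

For Laplace's equation the converged solution is simply a linear ramp between the electrodes; the interesting structure in the real calculation comes from the pore geometry, the ion concentrations, and the surface charges, which this toy leaves out.
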
If we wanted to count the number of people in a crowd, we could make on the fly estimates, very likely to be imprecise, or we could ask each person to pass through a turnstile. The latter resembles the model that EPFL researchers have used for creating a "DNA reader" that is able to detect the passage of individual DNA molecules through a tiny hole: a nanopore with integrated graphene transistor. The DNA molecules are diluted in a solution containing ions and are driven by an electric field through a membrane with a nanopore. When the molecule goes through the orifice, it provokes a slight perturbation to the field, detectable not only by the modulations in ionic current but also by concomitant modulation in the graphene transistor current. Based on this information, it is possible to determine whether a DNA molecule has passed through the membrane or not. This system is based on a method that has been known for over a dozen years. The original technique was not as reliable since it presented a number of shortcomings such as clogging pores and lack of precision, among others. "We thought that we would be able to solve these problems by creating a membrane as thin as possible while maintaining the orifice's strength", said Aleksandra Radenovic from the Laboratory of Nanoscale Biology at EPFL. Together with Floriano Traversi, postdoctoral student, and colleagues from the Laboratory of Nanoscale Electronics and Structures, she came across the material that turned out to be both the strongest and most resilient: graphene, which consists of a single layer of carbon molecules. The strips of graphene or nanoribbons used in the experiment were produced at EPFL, thanks to the work carried out at the Center for Micro Nanotechnology (CMI) and the Center for Electron Microscopy (CIME). 
"Through an amazing coincidence, continued the researcher, the graphene layer's thickness measures 0.335 nm, which exactly fits the gap existing between two DNA bases, whereas in the materials used so far there was a 15 nm thickness." As a result, while previously it was not possible to individually analyze the passage of DNA bases through these "long" tunnels – at a molecular scale –, the new method is likely to provide a much higher precision. Eventually, it could be used for DNA sequencing. However they are not there yet. In only 5 milliseconds, up to 50'000 DNA bases can pass through the pores. The electric output signal is not clear enough for "reading" the live sequence of the DNA strand passage. "However, the possibility of detecting the passage of DNA with graphene nanoribbons is a breakthrough as well as a significant opportunity", said Aleksandra Radenovic. She noted that, for example, the device is also able to detect the passage of other kinds of proteins and provide information on their size and/or shape. This crucial step towards new methods of molecular analysis has received an ERC grant and is featured in an article published in Nature Nanotechnology.
10.1038/nnano.2013.240
Biology
The tale teeth tell about the legendary man-eating lions of Tsavo
Larisa R. G. DeSantis et al, Dietary behaviour of man-eating lions as revealed by dental microwear textures, Scientific Reports (2017). DOI: 10.1038/s41598-017-00948-5 Journal information: Scientific Reports
http://dx.doi.org/10.1038/s41598-017-00948-5
https://phys.org/news/2017-04-tale-teeth-legendary-man-eating-lions.html
Abstract Lions ( Panthera leo ) feed on diverse prey species, a range that is broadened by their cooperative hunting. Although humans are not typical prey, habitual man-eating by lions is well documented. Fathoming the motivations of the Tsavo and Mfuwe man-eaters (killed in 1898 in Kenya and 1991 in Zambia, respectively) may be elusive, but we can clarify aspects of their behaviour using dental microwear texture analysis. Specifically, we analysed the surface textures of lion teeth to assess whether these notorious man-eating lions scavenged carcasses during their depredations. Compared to wild-caught lions elsewhere in Africa and other large feliforms, including cheetahs and hyenas, dental microwear textures of the man-eaters do not suggest extreme durophagy (e.g. bone processing) shortly before death. Dental injuries to two of the three man-eaters examined may have induced shifts in feeding onto softer foods. Further, prompt carcass reclamation by humans likely limited the man-eaters’ access to bones. Man-eating was likely a viable alternative to hunting and/or scavenging ungulates due to dental disease and/or limited prey availability. Introduction Lions ( Panthera leo ) once inhabited much of Africa, southeastern Europe, and southwestern Asia 1 . Currently, lions ( Panthera leo ) occupy savannas and deserts in sub-Saharan Africa (excluding rainforests and the Sahara), with an isolated population located in the Gir Forest of India. They are highly social, and males and females each live in persistent bonded groups 2 . Lion behaviours, diets, and social groupings all vary enormously in response to spatial or temporal shifts in prey availability and habitat structure 3 . Smaller groups and females prey mainly on zebra and wildebeest whereas larger groups and males feed differentially on buffalo 4 , 5 . 
Lions are known to consume a diverse suite of prey with preferences for gemsbok, buffalo, wildebeest, giraffe, zebra, Thomson’s gazelle, warthog, kongoni, and topi 4 , 6 . Further, habitat and droughts can affect lion preferences and prey vulnerability to predation, with lions increasing the proportion of elephant calves consumed during droughts 7 . Their current range collapse, to 20% of historic values, is driven not by limited adaptability but rather by habitat loss and fragmentation, prey depletion, and direct persecution 8 . Man-eating, or the consumption of humans (women and children are often the victims), has occasionally been a dietary strategy of lions and other pantherines 9 , 10 . Two notorious lions (popularized in the 1996 film The Ghost and the Darkness ) terrorized people near Tsavo by repeatedly killing and consuming railway workers in 1898, and a lion from Mfuwe, Zambia, consumed six people as recently as 1991 11 . Colonel J. H. Patterson, who eventually killed the Tsavo man-eaters in December 1898, estimated that they had killed and eaten 135 people 12 . However, stable isotope analysis of their hair and bone collagen suggests that they had consumed ~35 people, representing roughly 30% of the first man-eater’s diet (FMNH 23970) and ~13% of the second man-eater’s diet (FMNH 23969) 13 . The reasons for the lions’ differential reliance on humans, and for man-eating by lions in general, remain unclear. Many hypotheses have been proposed regarding the motivations of the man-eating lions, including an extended drought, an 1898 rinderpest outbreak that ravaged prey populations, various cultural causes, and/or dental disease 11 , 14 , 15 . Evidence of dental disease is quite clear in two of the three man-eating lions. One lion (the first Tsavo man-eater), with a broken canine, developed a periapical abscess and lost three lower right incisors 15 (see Fig. 1 ).
The pronounced tooth wear and extensive cranial remodeling suggest that the lion had broken his canine several years earlier 15 . The second Tsavo man-eater had minor injuries including a fractured upper left carnassial and subsequent pulp exposure, although these types of injuries are fairly common and were unaccompanied by disease 16 . Similarly, the man-eater from Mfuwe had fractured its right mandibular ramus. These injuries may have been decisive factors influencing their consumption of humans. While it is difficult to assess the motivations of the man-eating lions, we can clarify aspects of their behaviour prior to their death. Most notably, we can assess if their circumstances caused them to rely more heavily on scavenging carcasses shortly before they were killed, as suggested by contemporary reports of the sounds of bone-crunching on the edge of camp 12 . Dental microwear texture analysis (DMTA) can clarify the textural properties of consumed food, including durophagy in carnivorans, and clearly distinguishes feliforms that eat primarily flesh (the cheetah, Acinonyx jubatus ) from generalists ( P. leo ) and various hyenas, which are known to fully consume carcasses, including bone 17 , 18 . Figure 1 Images of injuries to Tsavo’s 1st man-eater ( a ), FMNH 23970 and the Mfuwe man-eater ( b ), FMNH 163109. Image ( a ), Field Museum of Natural History image Z-94320_11c by John Weinstein, documents a broken lower right canine (which had a periapical abscess) and loss of the three lower right incisors, presumably from the kick of struggling prey, and subsequent over-eruption of the upper right incisors and rotation of the upper right canine both labially and mesially in the absence of the interlocking lower canine. In ( b ), multiple oval-shaped intraosseous lesions are visible on the right mandible, superficial to an occluded mandibular canal and associated with a chronically draining fistula 15 .
Again, these injuries are consistent with blunt trauma from a powerful ungulate kick. Full size image In contrast to two-dimensional dental microwear, which relies on human identification and counting of microscopic wear features such as pits and scratches, 3D DMTA quantifies surfaces using scale-sensitive fractal analysis 19 , 20 , 21 , 22 . Complexity ( Asfc ) distinguishes taxa that consume brittle foods from taxa that consume softer ones 19 , 20 , 21 , 22 , 23 . Anisotropy ( epLsar ), the degree to which features share similar orientations, instead indicates tough food consumption when values are high 19 , 20 , 21 , 22 , 23 . Textural fill volume ( Tfv ) is a measure of the difference in volume filled by large (10 µm) and small (2 µm) diameter square cuboids; high values indicate many deep features between these sizes 21 , 23 . For extant carnivorous taxa, increased complexity and increased textural fill volume are associated with increased durophagy 17 , 18 , 23 , 24 , 25 , 26 . Here, we compare dental microwear attributes of man-eating lions from Tsavo (from 1898) and from Mfuwe (from 1991) to wild-caught lions from throughout their range to assess if the dental microwear textures of man-eaters suggest extreme durophagy shortly before death. A secondary aim is to improve our understanding of how age, sex, and body size may influence access to carcasses in wild-caught lions. We test the hypothesis that the diets of man-eating lions consisted primarily of hard objects (e.g. bone), and that these lions mainly engaged in scavenging (perhaps out of desperation) prior to their death. Results Results are illustrated in Figs 2 and 3 and summarized in Table 1 (all primary data are included in electronic supplementary materials, Supplemental Tables 1 and 2 ). As previously documented, complexity values of A. jubatus (median = 2.180) are significantly lower than for P. leo (p < 0.001) and all hyenas (p-values are all < 0.0001) 18 . P. 
leo (excluding the man-eaters) have complexity values that range from 0.258 to 11.096 (median = 3.669; Table 1 ) and are significantly lower than all hyenas (median values range from 5.316 in Parahyaena brunnea , 6.328 in Hyaena hyaena , to 7.354 in Crocuta crocuta ; p-values are all < 0.05). Anisotropy values are indistinguishable between all feliform species here examined (as noted in prior work 18 ). Textural fill volume of wild-caught feliforms is lowest in A. jubatus , followed by P. leo , with A. jubatus and P. leo having significantly lower Tfv values than all hyenas (p < 0.05). The captive lions have the lowest Tfv values (median = 3.486). Figure 2 Digital elevation models of microwear surfaces of ( a , b ) wild-caught lions (FMNH 20762; FMNH 33479), ( c ) a captive lion (FMNH 54639), and ( d – f ) man-eating lions (Tsavo 1 st man-eater, FMNH 23970; Tsavo 2 nd man-eater, FMNH 23969; and Mfuwe man-eater, FMNH 163109). All models noted here represent 204 × 274 μm in area with relevant z-scale bars noted for each image (μm). Full size image Figure 3 Bivariate plot of anisotropy ( epLsar ) and complexity ( Asfc ) of cheetahs, hyenas (multiple species), captive lions, man-eating lions, and wild-caught lions. Full size image Table 1 Descriptive statistics for each DMTA variable of Panthera leo by category (captive, man-eater, wild-caught). Full size table The man-eating lions have DMTA values ( Asfc , epLsar , and Tfv ) indistinguishable from other wild-caught P. leo (all p-values > 0.69; Tables 1 and S1 ; Fig. 2 ), and captive and man-eating lions have nearly identical mean Asfc values (3.266 and 3.097, respectively). All man-eater Asfc values (ranging from 2.581 to 3.403) fall below the mean and median values for all hyenas 18 . Man-eaters are indistinguishable from all extant taxa in all DMTA attributes (likely due to their limited sample size, n = 3); however, statistical comparisons of Asf c values of man-eating lions to C. crocuta and H. 
hyaena (p = 0.062, p = 0.102, respectively) identify marginally significant differences in dietary behavior (all p-values ≤ 0.10). Further, man-eating lion Asfc values range from 2.581 to 3.403 while 83% of all hyena Asfc values (all three species combined) and 88% of C. crocuta Asfc values exceed the highest man-eating lion Asfc value of 3.403. In contrast, 83% of A. jubatus specimens have Asfc values less than 3.403 (and all are < 4.6). Wild-caught male and female lions are indistinguishable in all DMTA attributes (including and excluding the male man-eaters), although females have significantly greater variance of epLsar (Levene’s median test, p = 0.016, p = 0.042, respectively; Supplemental Table 1 ). Correlations between all DMTA attributes and skull measurements (indicative of body size) in a subset of lions were not significant (Supplemental Table 2 ). However, age was negatively correlated with Tfv in wild-caught lions (only when excluding the man-eaters from analyses; Pearson’s correlation coefficient = −0.512; p = 0.036). Discussion Wild-caught lions not known to hunt humans have highly variable complexity values (total range of 10.838; Table 1 ). Females actively hunt and take down prey while coalition males often gain preferential access to fresh kills 2 . While males and females do not differ in mean values for any DMTA attributes here examined, females do have significantly greater variance of anisotropy (with values ranging from 0.0006 to 0.0072 in females as compared to 0.0006 to 0.0048 in males). This suggests that some females eat a mixture of flesh and bones while others have access to fresh kills (eating a greater proportion of tougher flesh), consistent with previous work documenting highly variable feeding behavior in lions 3 . Interestingly, body size and age do not appear to dictate durophagous behaviour as inferred from dental microwear textures.
In contrast, DMTA data (most notably low Asfc values) suggest that the man-eating lions examined were not fully consuming carcasses prior to their death. Despite Patterson’s colorful accounts of bone-crunching outside the camp at Tsavo 12 , such behaviour of the Tsavo man-eaters is not supported by DMTA. DMTA attribute values of man-eating lions appear not only typical but overlap in ‘ Asfc - eplsar ’ space with those of captive lions, which are typically fed softer foods (e.g. horsemeat, beef 27 ). These similarities, in addition to the absence of Asfc values close to or exceeding mean and median hyena values (7.946 and 6.474, respectively, when combining all hyena species included in previous work 18 ), suggest that, in the final weeks or months of their lives, the man-eating lions consumed softer parts of humans and other prey and did not fully consume carcasses. The absence of bone consumption/processing, as inferred from low man-eating lion Asfc values, may have been due to their own preferences or limitations (potentially stemming from injury). Further, the reclamation of human carcasses at daybreak, before lions could completely consume them, may have played a role in limiting durophagy. However, isotopic studies of the Tsavo man-eaters document that humans comprised a minor component of their prey consumption, so human carcass recovery could only have played an ancillary role 13 . Two of the three man-eaters (FMNH 23970, FMNH 163109; see Fig. 1 ) had serious infirmities to their jaws and/or canines, potentially hindering consumption of hard food items and/or reducing prey-handling ability (prey are seized and held with teeth and jaws). Tooth breakage per se does not produce dietary shifts, as most older lions display some sort of wear or breakage to their dentition 28 . However, dental disease is another matter, and incapacitation via an abscessed canine or a fractured mandible may have prompted the Tsavo and Mfuwe lions to seek more easily subdued prey.
Infirmities such as these were frequently associated with man-eating incidents by tigers and leopards in colonial India 9 , 29 , 30 . The second Tsavo lion had less pronounced injuries, consistent with mandibular damage sustained during normal feeding behavior (i.e. a fractured upper carnassial tooth with pulp exposure) 15 . The second man-eating Tsavo lion also consumed a smaller percentage of humans (~13%) than the first man-eating Tsavo lion (~30%) during the last few months of its life (as inferred from stable isotopes in hair) 13 . However, dental injury is fairly common in lions 16 , 28 , with 40% of lions from one study 28 having damaged dentitions. Contrary to expectations, only 23% of “problem lions” in Tsavo East National Park (lions killed by park rangers for attacking people or livestock) had dental damage 28 , suggesting that minor to moderate levels of damage unaccompanied by disease do not trigger man-eating or marauding. The second Tsavo lion may have simply shared meals through his social bonds with the first man-eater. Further, it should be noted that bone collagen values (which reflect diet over multiple years) and hair tufts (which reflect consumption during the final months of life) suggest that man-eating behavior varied over the life of the individual lions and that both lions may have consumed similar amounts of human prey earlier in their lives 13 . DMTA data here suggest that man-eating lions did not completely consume carcasses of humans or ungulates. Instead, humans likely supplemented an already diverse diet 31 . Anthropological evidence suggests that humans were a frequent prey item of leopards and other large felids, which dragged their victims up into trees or down into caves for later consumption 32 , 33 , 34 , 35 . Further, evidence of man-eating by pantherines continues, with more than 563 humans killed between January 1990 and September 2004 by lions in Tanzania 10 .
Although lions today seldom hunt humans as compared to other prey species 6 , increasing human populations and declining prey numbers may cause man-eating to become a viable option for lions. Materials and Methods The man-eating lions from Tsavo, Kenya (FMNH 23969, FMNH 23970) and Mfuwe, Zambia (FMNH 163109) were here analysed and compared to extant wild-caught P. leo specimens from throughout Africa, with two individuals (AMNH 54995, AMNH 54996) from Gir Forest, India (n = 55; 26 here examined and 29 from previously published work 18 ). We also included five captive zoo lions in the analysis, as a separate group, and compared lion groups (captive, man-eaters, and wild-caught) with the following extant feliforms: Acinonyx jubatus (cheetah, n = 36), Crocuta crocuta (spotted hyena, n = 26), Hyaena hyaena (striped hyena, n = 35), and Parahyaena brunnea (brown hyena, n = 11) from previously published work 18 . The enamel region of the lower carnassial shearing facet of the m1 trigonid was examined on all specimens as described in prior work 17 , 18 , 24 , 25 . This tooth is used by carnivores both to slice meat and to crush bone. All specimens were scanned in three dimensions on a Sensofar PLu neox optical profiler (at Vanderbilt University) in four adjacent fields of view, for a total sampled area of 204 × 276 µm², and subsequently analysed using SSFA software (ToothFrax and SFrax, Surfract Corp.) to characterise tooth surfaces according to the following variables: (i) complexity ( Asfc ); (ii) anisotropy ( epLsar ); and (iii) textural fill volume ( Tfv ) 19 , 20 , 21 , 22 , 23 . As most DMTA variables are non-normally distributed, we used non-parametric statistical tests (Kruskal–Wallis and Dunn’s procedure) to compare differences between groups. DMTA attribute values were compared between male and female P. leo specimens (Supplemental Table 1 ) using Mann-Whitney tests.
Correlations were also assessed between DMTA values and body size proxies (greatest length of skull, zygomatic width) and age (based on toothwear and suture closure criteria from prior work 36 ) in a subset of African lions from which these data were available (Supplemental Table 2 ).
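The non-parametric comparisons described above (e.g. the Mann-Whitney tests for male versus female lions) rest on rank sums rather than raw values, which suits the skewed DMTA distributions. Below is a minimal, dependency-free sketch of the Mann-Whitney U statistic with average ranks for ties; it is illustrative only, not the authors' actual analysis pipeline, which would normally use a standard statistics package.

```python
def mann_whitney_u(x, y):
    """Return (U1, U2) for samples x and y, using midranks for tied values.

    The conventional test statistic is min(U1, U2); U1 + U2 = len(x) * len(y)."""
    pooled = [(v, 0) for v in x] + [(v, 1) for v in y]
    pooled.sort(key=lambda t: t[0])
    n = len(pooled)
    ranks = [0.0] * n
    i = 0
    while i < n:
        # Find the run of tied values and give each the average of its ranks.
        j = i
        while j < n and pooled[j][0] == pooled[i][0]:
            j += 1
        midrank = (i + 1 + j) / 2.0   # average of ranks i+1 .. j
        for k in range(i, j):
            ranks[k] = midrank
        i = j
    r1 = sum(r for r, (_, grp) in zip(ranks, pooled) if grp == 0)
    n1, n2 = len(x), len(y)
    u1 = r1 - n1 * (n1 + 1) / 2.0
    return u1, n1 * n2 - u1

# Completely separated samples give the extreme value U = 0:
print(mann_whitney_u([1, 2, 3], [4, 5, 6]))   # (0.0, 9.0)
```

Converting U to a p-value then uses either the exact null distribution (small samples) or a normal approximation with a tie correction.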
An analysis of the microscopic wear on the teeth of the legendary "man-eating lions of Tsavo" reveals that it wasn't desperation that drove them to terrorize a railroad camp in Kenya more than a century ago. "Our results suggest that preying on people was not the lions' last resort; rather, it was simply the easiest solution to a problem that they confronted," said Larisa DeSantis, assistant professor of earth and environmental studies at Vanderbilt University. The study, which she performed with Bruce Patterson, MacArthur Curator of Mammals at The Field Museum of Natural History in Chicago, is described in a paper titled "Dietary behavior of man-eating lions as revealed by dental microwear textures" published online Apr. 19 by the journal Scientific Reports. "It's hard to fathom the motivations of animals that lived over a hundred years ago, but scientific specimens allow us to do just that," said Patterson, who has studied the Tsavo lions extensively. "Since The Field Museum preserves these lions' remains, we can study them using techniques that would have been unimaginable a hundred years ago." In order to shed light on the lions' motivations, DeSantis employed state-of-the-art dental microwear analysis on the teeth of three man-eating lions from the Field Museum's collection: the two Tsavo lions and a lion from Mfuwe, Zambia, which consumed at least six people in 1991. The analysis can provide valuable information about the nature of an animal's diet in the days and weeks before its death. DeSantis and Patterson undertook the study to investigate the theory that prey shortages may have driven the lions to man-eating. At the time, the Tsavo region was in the midst of a two-year drought and a rinderpest epidemic that had ravaged the local wildlife. If the lions were desperate for food and scavenging carcasses, the man-eating lions should have dental microwear similar to hyenas, which routinely chew and digest the bones of their prey.
The skull of one of the Tsavo man-eaters shows evidence of dental disease. Credit: Bruce Patterson and JP Brown, The Field Museum "Despite contemporary reports of the sound of the lions' crunching on the bones of their victims at the edge of the camp, the Tsavo lions' teeth do not show wear patterns consistent with eating bones," said DeSantis. "In fact, the wear patterns on their teeth are strikingly similar to those of zoo lions that are typically provisioned with soft foods like beef and horsemeat." The study provides new support for the proposition that dental disease and injury may play a determining role in turning individual lions into habitual man-eaters. The Tsavo lion that did the most man-eating, as established through chemical analysis of the lions' bones and fur in a previous study, had severe dental disease. It had a root-tip abscess in one of its canines—a painful infection at the root of the tooth that would have made normal hunting impossible. "Lions normally use their jaws to grab prey like zebras and buffalos and suffocate them," Patterson explained. "This lion would have been challenged to subdue and kill large struggling prey; humans are so much easier to catch." Lt. Colonel John Patterson in 1898 with one of the Tsavo man-eaters that he shot. Credit: The Field Museum The diseased lion's partner, on the other hand, had less pronounced injuries to its teeth and jaw—injuries that are fairly common in lions that are not man-eaters. According to the same chemical analysis, it consumed a lot more zebras and buffalos, and far fewer people, than its hunting companion. The fact that the Mfuwe lion also had severe structural damage to its jaw provides additional support for the role of dental problems in triggering man-eating behavior, as do a number of reports of man-eating incidents by tigers and leopards in colonial India that cite similar infirmities, the researchers pointed out.
The man-eating Tsavo lions are currently on display at The Field Museum in Chicago. Credit: John Weinstein, The Field Museum "Our data suggests that these man-eating lions didn't completely consume the carcasses of their human or animal prey," said DeSantis. "Instead, people appear to have supplemented their already diverse diet. Anthropological evidence suggests that humans have been a regular item on the menu of not only lions, but also leopards and the other great cats. Today, lions seldom hunt people but as human populations continue to grow and the numbers of prey species decline, man-eating may increasingly become a viable option for many lions."
10.1038/s41598-017-00948-5
Nano
Graphene nanoscrolls are formed by decoration of magnetic nanoparticles
Sharifi, T. et al. Formation of nitrogen-doped graphene nanoscrolls by adsorption of magnetic γ-Fe2O3 nanoparticles, Nature Communications (2013), DOI: 10.1038/ncomms3319 Journal information: Nature Communications
http://dx.doi.org/10.1038/ncomms3319
https://phys.org/news/2013-08-graphene-nanoscrolls-magnetic-nanoparticles.html
Abstract Graphene nanoscrolls are Archimedean-type spirals formed by rolling single-layer graphene sheets. Their unique structure makes them conceptually interesting and understanding their formation gives important information on the manipulation and characteristics of various carbon nanostructures. Here we report a 100% efficient process to transform nitrogen-doped reduced graphene oxide sheets into homogeneous nanoscrolls by decoration with magnetic γ-Fe 2 O 3 nanoparticles. Through a large number of control experiments, magnetic characterization of the decorated nanoparticles, and ab initio calculations, we conclude that the rolling is initiated by the strong adsorption of maghemite nanoparticles at nitrogen defects in the graphene lattice and their mutual magnetic interaction. The nanoscroll formation is fully reversible and upon removal of the maghemite nanoparticles, the nanoscrolls return to open sheets. Besides supplying information on the rolling mechanism of graphene nanoscrolls, our results also provide important information on the stabilization of iron oxide nanoparticles. Introduction Graphene-based materials have emerged as a key material in nanotechnology and continuously find new areas of applications 1 , 2 , 3 , 4 , 5 . Reduced graphene oxide (rGOx) nanoscrolls are new members in the graphene family, formed by rolling single graphene sheets from one side or from a corner to form Archimedean-type spirals, more commonly known as Swiss rolls. Owing to their unique structure, graphene nanoscrolls are predicted to retain the excellent high conductance of single graphene sheets 6 . This is in contrast to single-walled carbon nanotubes where structural constraints on the electron wavefunctions make them either metallic or semiconducting, depending on their diameter and chirality 7 . Concurrent with unique electronic properties, the appropriate layer spacing of nanoscrolls might allow for energy storage applications 8 , 9 . 
Despite fascinating properties and a handful of theoretical studies of graphene nanoscrolls 10 , 11 , 12 , 13 , 14 , only a few experimental studies 15 , 16 , 17 , 18 are reported for rGOx nanoscrolls. Viculis et al . 17 showed that exfoliation of graphite by alkali metals followed by very strong sonication led to the formation of nanoscrolls, whereas Wang et al . 18 showed that the aggregation of Ag particles on graphene sheets together with high-power sonication initiated the rolling of the rGOx nanoscrolls. However, none of these studies gives a thorough explanation of how and why the graphene sheets form rolled structures, and with few exceptions 15 the efficiency is low. Here we describe a simple and efficient solution-based process, complemented by a detailed analysis of the rolling mechanism. We show that symmetric nitrogen-doped (13%) rGOx (N-rGOx) nanoscrolls are formed by decorating N-rGOx with magnetic iron oxide particles in the form of maghemite γ-Fe 2 O 3 . In contrast, we show that a similar decoration by less magnetic hematite (α-Fe 2 O 3 ) or non-magnetic palladium oxide particles never leads to any rolling. Similarly, control experiments with undoped or only lightly nitrogen-doped rGOx decorated by maghemite particles show no rolling, indicating that the nitrogen defects are largely involved in the rolling mechanism. In contrast to earlier reports 18 and to theoretical predictions 14 , we show that the rolling process is fully reversible and that the N-rGOx can be fully transformed back to open graphene sheets by the removal of iron oxide particles through acid washing. We conclude that the rolling is initiated by the strong adsorption of maghemite nanoparticles at nitrogen defects in the graphene lattice and their mutual magnetic interaction. Our experimental observations are supported by ab initio density functional theory calculations.
Results Physical properties of the nanoscrolls The basis of our study relies on attempts to decorate N-rGOx sheets by hematite and tungsten oxide nanoparticles to obtain materials appropriate for water oxidation. In the context of this work, we have come across a very peculiar effect. When the N-rGOx sheets unintentionally were decorated by maghemite nanoparticles, a metastable form of hematite that stabilizes under slightly different conditions, the micrographs of transmission electron microscopy (TEM) consistently revealed the presence of almost 100% of N-rGOx nanoscrolls instead of the commonly open N-rGOx sheets obtained by hematite decoration. Both hematite and maghemite have technologically interesting properties: hematite is used as catalyst for water oxidation 19 , 20 or as paint pigment, whereas maghemite is used as a contrast media in magnetic resonance or as magnetic recording media 21 . Despite many similarities, the two materials also display significant differences. Hematite crystallizes in a rhombohedral structure, whereas maghemite forms a cubic structure 21 , 22 . The difference in crystal structure rationalizes the magnetic properties of maghemite and hematite, where maghemite is ferrimagnetic and displays a much stronger magnetic moment (~540 mT) than the weakly ferromagnetic hematite (~1 mT). However, nanoparticles of both maghemite and hematite exhibit superparamagnetic behaviour, as each nanoparticle forms single magnetic domains 23 , 24 . We show in our study that the magnetically interacting maghemite particles and their strong adsorption on the N-rGOx sheets seem to be the main reasons for the formation of the N-rGOx nanoscrolls. The rolling mechanism of N-rGOx nanoscrolls is revealed by conducting a large number of control experiments to tune and test the synthesis conditions. 
The conditions leading to the highest ratio of, and most symmetric, nanoscrolls were achieved by adding iron(III) chloride, tungsten hexacarbonyl, polyvinylpyrrolidone (PVP), and hydrazine into an N-rGOx methanol dispersion, followed by heating under reflux for 24 h (see Methods ). The scheme for the optimal conditions leading to nearly 100% N-rGOx nanoscrolls (the only exception being a few very large graphene sheets, >5 μm) is shown in Fig. 1 (route a), together with the conditions used when hematite is formed and the decorated N-rGOx remains as open flat sheets ( Fig. 1 , route b). The formation of maghemite and hematite can be controlled by slight changes in the solution. When using a pure methanol solution, formation of maghemite particles is observed, whereas replacing a part of the methanol by water leads to the formation of hematite. Figure 1: Schematic process for the adsorption of iron oxide nanoparticles. (route a) Maghemite-decorated nitrogen-doped reduced graphene oxide (N-rGOx) nanoscrolls, and (route b) hematite-decorated N-rGOx sheets. Full size image The TEM micrograph in Fig. 1 (route a) shows N-rGOx nanoscrolls with diameters of about 38 nm decorated by small maghemite nanoparticles, in the range of 3–5 nm, whereas the hematite nanoparticles on the N-rGOx sheets are larger, in the range of 25–40 nm ( Fig. 1 , route b). The nanoscrolls tend to arrange in bundles with up to 20 scrolls in each bundle ( Supplementary Figs S1–S3 ). Statistical analysis of >100 nanoscrolls reveals an average diameter of ~38±1 nm (s.d., σ =10.5 nm; see histogram in Supplementary Fig. S3a ). The X-ray diffraction patterns of N-rGOx nanoscrolls and N-rGOx sheets ( Fig. 2a ) show diffraction patterns characteristic for maghemite and hematite crystallized in cubic and rhombohedral structures, respectively 22 .
From Scherrer’s formula 25 , we estimate the diameter of the nanoparticles to be 3.5 nm for the maghemite particles and 25 nm for the hematite particles, in good agreement with the TEM micrographs. By simply calculating the number of spirals (turns) of a single nanoscroll, considering the average diameter of 38 nm and assuming an interlayer distance of 5 nm (accounting for the particle size, the C–Fe bond length, and 0.3 Å of van der Waals interaction with the next graphene layer), the resulting nanoscrolls may consist of a single graphene sheet rolled ~3–5 turns. This estimate seems to be consistent with TEM micrographs in which the maghemite-decorated N-rGOx nanoscrolls are seen as transparent material under electron irradiation. Further support for the fact that the structures are indeed formed as nanoscrolls is obtained using scanning transmission electron microscopy and high-resolution TEM (HRTEM) using diffraction, thickness and carbon contrast for the HRTEM mode ( Supplementary Fig. S4 ) and the corresponding simulations of the HRTEM data ( Supplementary Fig. S5 ). In Supplementary Fig. S4d , the stacked carbon atoms in the graphene planes parallel to the electron beam are consistent with the Archimedean structures simulated in Supplementary Fig. S5 (see Supplementary Note 1 ). Figure 2: X-ray and Raman characterization. ( a ) X-ray diffraction pattern of maghemite-decorated rolled N-rGOx nanoscrolls (red line) and hematite-decorated N-rGOx sheets (black curve). ( b ) Observed Raman spectrum of decorated N-rGOx. Peak-fitted and assigned spectrum of ( c ) maghemite-decorated rolled N-rGOx nanoscrolls and ( d ) hematite-decorated N-rGOx (peaks originating from W-O vibrational modes are marked with ‘*’). Full size image The formation of maghemite and hematite in the two morphologically different samples is further supported by the Raman spectra of maghemite-decorated N-rGOx nanoscrolls and hematite-decorated N-rGOx sheets shown in Fig. 2b–d , respectively.
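The two estimates in this passage, crystallite size from Scherrer's formula and the number of scroll turns from the geometry, are easy to reproduce. The sketch below uses hypothetical peak values: the paper reports only the resulting diameters, so the 2θ position and FWHM shown are illustrative assumptions (a typical maghemite reflection with a width chosen to give a few-nanometre crystallite), not the measured data.

```python
import math

def scherrer_size(fwhm_deg, two_theta_deg, wavelength_nm=0.15406, K=0.9):
    """Crystallite size D = K * lambda / (beta * cos(theta)), with the peak
    width beta in radians. Cu K-alpha wavelength and shape factor K ~ 0.9
    are common assumptions for powder XRD."""
    beta = math.radians(fwhm_deg)
    theta = math.radians(two_theta_deg / 2.0)
    return K * wavelength_nm / (beta * math.cos(theta))

def scroll_turns(outer_diameter_nm, layer_spacing_nm, core_diameter_nm=0.0):
    """Turns of an Archimedean scroll ~ radial wall thickness / layer spacing."""
    return (outer_diameter_nm - core_diameter_nm) / 2.0 / layer_spacing_nm

# A ~2.4 deg FWHM near a maghemite reflection at ~35.6 deg 2-theta
# (hypothetical values) corresponds to a crystallite of roughly 3.5 nm:
print(scherrer_size(2.4, 35.6))
# A 38 nm scroll with ~5 nm spacing corresponds to roughly 4 turns,
# consistent with the ~3-5 turns quoted in the text:
print(scroll_turns(38.0, 5.0))
```

Broad, low-angle peaks thus translate directly into the few-nanometre maghemite sizes seen in TEM, while the sharper hematite peaks imply ~25 nm crystallites.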
The high-wavenumber region of the Raman spectra of both samples (Fig. 2b) comprises the D-band and G-band region and can be fitted by a 4-peak model with peaks at 1,185, 1,339, 1,470 and 1,584 cm −1, in agreement with previous reports on nitrogen-doped carbon nanostructures 26, 27. More importantly, the low-frequency region of the maghemite-decorated N-rGOx nanoscrolls in Fig. 2c displays features that match well with the Raman-active modes of maghemite (A 1g + E g + 3T 2g), with the most pronounced mode, A 1g, at ~696 cm −1. Two weaker peaks at ~506 and ~361 cm −1 are assigned to the T 2g and E g modes 28, respectively, but are slightly down-shifted owing to the interaction of the nanoparticles with graphene. The low-frequency region of the hematite-decorated N-rGOx sheets in Fig. 2d displays characteristic features of hematite 29, 30, 31, with the largest peaks at 224 and 289 cm −1. Fig. 2c, d display the symmetry assignment of each peak; for a detailed analysis of the Raman spectra and a corresponding table with wavenumbers and symmetries we refer to Supplementary Table S1. Weak peaks in the range between 400 and 1,000 cm −1, present in the Raman spectra of both types of samples, are assigned to W-O vibrations 32. According to the thermogravimetric analysis (TGA), the loading of maghemite and hematite is around 40 mass percent for both the maghemite-decorated N-rGOx nanoscrolls and the hematite-decorated N-rGOx sheets (Supplementary Fig. S7). The heat-induced oxidation of the rGOx around 450 °C is much more distinct, and correlated with a sharper heat-flow curve, for the hematite-decorated N-rGOx sheets compared with the corresponding oxidation of the maghemite-decorated N-rGOx nanoscrolls. This is rationalized by the sheets being much more reactive than the nanoscrolls because of their larger surface area exposed to oxygen.
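A 4-peak fit of the D/G region like the one described above is typically built from a sum of Lorentzian line shapes. The sketch below is a minimal illustration, not the authors' fitting code: the peak positions are taken from the text, while the widths and amplitudes are placeholder values that a real least-squares fit would optimize.

```python
def lorentzian(x, x0, gamma, amp):
    """Single Lorentzian peak centred at x0 with half-width gamma."""
    return amp * gamma**2 / ((x - x0)**2 + gamma**2)

def four_peak_model(x, peaks):
    """Sum of Lorentzians; `peaks` is a list of (centre, width, amplitude)."""
    return sum(lorentzian(x, x0, g, a) for (x0, g, a) in peaks)

# Peak centres (cm^-1) from the text; widths and amplitudes are placeholders.
peaks = [(1185, 60, 0.3), (1339, 50, 1.0), (1470, 60, 0.4), (1584, 40, 0.9)]
intensity_at_D = four_peak_model(1339, peaks)  # model intensity at the D band
```

In practice the placeholder `peaks` parameters would be refined against the measured spectrum with a nonlinear least-squares routine.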
Mechanism of nanoscroll formation
The strong magnetic moment of maghemite compared with hematite is a plausible reason for the formation of the nanoscrolls. However, we also note that the adsorbed hematite particles are significantly larger than the maghemite particles. The separation distance, and hence the size of the adsorbed nanoparticles, is a key factor in the van der Waals exchange force between colloidal particles 33. The larger size of the hematite particles might be related to a swelling mechanism of the physisorbed surfactant, correlated with the introduction of water in the synthesis of the hematite particles, as described by Chen et al. 34, 35. The swelling leads to a partial removal of the surfactant, resulting in the aggregation of partially unprotected hematite particles. Supplementary Fig. S8 shows the aggregation of small hematite particles into larger clusters. By changing the pH value of the synthesis, we have successfully decreased the size of the hematite particles to almost the same size as the maghemite particles (4–8 nm for hematite versus 3–5 nm for maghemite), without finding any rolled hematite-decorated N-rGOx sheets. Further support that size does not play a decisive role in the rolling of the N-rGOx is obtained by homogeneously decorating N-rGOx sheets with palladium oxide nanoparticles having the same diameter as the maghemite particles (3–5 nm). Also in this case, no rolling is found (Supplementary Fig. S9). This strengthens our hypothesis that the magnetic character of maghemite has a significant role in the rolling mechanism. The magnetic character of the maghemite-decorated N-rGOx nanoscrolls (particle size 3–5 nm) and the hematite-decorated N-rGOx sheets (particle size 4–8 nm) was probed by superconducting quantum interference device magnetization experiments. Figure 3 shows M versus T under zero-field-cooled (ZFC) and field-cooled (FC) conditions.
From these data we observe, besides the significantly stronger magnetization of the maghemite nanoparticles, that both samples show characteristic superparamagnetic behaviour, with a maximum in the ZFC curves corresponding to the blocking temperature, T B. The superparamagnetic blocking temperature is defined as the temperature at which the superparamagnetic relaxation time equals the timescale of the experimental technique 23. The higher T B of the maghemite particles is rationalized by stronger interactions between the nanoparticles as well as by their smaller average size compared with the hematite particles 36. The stronger interaction between the maghemite particles is further confirmed by the FC curve, which flattens out at temperatures lower than T B but continues to increase for the hematite nanoparticles. The plateau-shaped character for the maghemite nanoparticles is related to a transition to an ordered collective spin-glass-like state caused by the stronger dipole interactions of the maghemite nanoparticles. The inset in Fig. 3 shows that the hematite nanoparticles undergo a weakly pronounced Morin transition, as expected for small hematite nanoparticles (see also Supplementary Fig. S10 and Supplementary Note 2). Figure 3: Magnetic superconducting quantum interference device (SQUID) characterization. Magnetization versus temperature at H=1,000 Oe of (a) maghemite-decorated rolled N-rGOx nanoscrolls at ZFC conditions (black open circles) and at FC conditions (red up-triangles), and (b) hematite-decorated N-rGOx sheets at ZFC conditions (blue open squares) and at FC conditions (purple down-triangles). Inset shows an enlargement of the high-temperature region for hematite-decorated N-rGOx sheets. The rolling is significantly improved by adding tungsten hexacarbonyl together with the FeCl 3 salt, as displayed in Fig. 1. A typical TEM micrograph of maghemite-decorated N-rGOx nanoscrolls without adding tungsten hexacarbonyl is shown in Supplementary Fig. S11.
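The blocking-temperature definition above (relaxation time equal to the experimental timescale) can be made concrete with the Néel–Arrhenius relaxation law, τ = τ0·exp(K_eff·V/(kB·T)), which gives T_B = K_eff·V/(kB·ln(τm/τ0)) for non-interacting particles. The sketch below is an illustration under that simplification only: it neglects the interparticle interactions to which the text largely attributes the maghemite T_B, and the anisotropy constant and timescales are assumed values, not taken from the paper.

```python
import math

kB = 1.380649e-23  # Boltzmann constant, J/K

def blocking_temperature(K_eff, diameter_m, tau_m=100.0, tau0=1e-9):
    """Neel-Arrhenius blocking temperature T_B = K_eff*V / (kB * ln(tau_m/tau0))
    for a non-interacting spherical particle of the given diameter."""
    volume = math.pi / 6.0 * diameter_m**3
    return K_eff * volume / (kB * math.log(tau_m / tau0))

# Hypothetical effective anisotropy (J/m^3) for a 4 nm particle; a ~100 s
# measurement timescale and ~1 ns attempt time are typical assumed values.
T_B = blocking_temperature(K_eff=2.5e5, diameter_m=4e-9)  # tens of kelvin
```

Within this non-interacting model T_B scales with particle volume; dipolar interactions between densely packed particles raise it further, consistent with the interpretation in the text.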
Although a major fraction of the N-rGOx is still rolled, the obtained nanoscrolls are less homogeneous and have larger diameters. The complete role of tungsten is not clear, but it is known that tungsten can shift the isoelectric point of iron oxide particles to higher pH, indicating that, at the experimental conditions (pH=7.5–9), tungsten will make the iron oxide particles slightly positively charged and hence increase their interaction with the negatively charged nitrogen-doped graphene sheets. However, we note that adding only tungsten hexacarbonyl to the synthesis (without using FeCl 3) resulted in only tungsten-decorated N-rGOx without any signs of rolling (Supplementary Fig. S12). As in the case of tungsten, the addition of PVP is not crucial for the rolling process, and by removing the PVP from route a (see Fig. 1) less homogeneous and larger nanoscrolls are obtained. A statistical analysis of 85 nanoscrolls in the absence of PVP reveals an average diameter of ~260±8 nm (σ=76 nm, see histogram in Supplementary Fig. S6). The promoting properties of PVP probably originate from its ability to attach onto the N-rGOx, thereby improving the dispersibility of N-rGOx in polar environments, such as methanol or methanol–water systems. Moreover, PVP might manipulate the nucleation, growth and agglomeration of the metal oxide nanoparticles by interacting with the metal oxide precursor via its hydrophilic head groups 37, thereby preventing agglomeration of formed particles. The early steps of the nanoscroll formation, as probed by a time-series analysis, are displayed in Fig. 4, which shows TEM micrographs for syntheses that were (a) quenched in an ice bath only 1 min after the addition of hydrazine, (b) refluxed for 5 h and (c) refluxed for 24 h. We conclude that the rolling process starts by the decoration of the upper layer, which starts to roll after reaching a sufficient loading, thereby exposing the layer underneath for decoration and subsequent rolling.
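The quoted diameter uncertainties are consistent with the standard error of the mean, σ/√n: 10.5/√100 ≈ 1 nm for the >100 scrolls with PVP, and 76/√85 ≈ 8 nm for the 85 scrolls without PVP. A minimal check follows; note that interpreting the ± values as standard errors is our reading, not stated explicitly in the text.

```python
import math

def standard_error(sigma, n):
    """Standard error of the mean for sample s.d. sigma and n observations."""
    return sigma / math.sqrt(n)

sem_with_pvp = standard_error(10.5, 100)    # ~1 nm, matching "38 +/- 1 nm"
sem_without_pvp = standard_error(76.0, 85)  # ~8 nm, matching "260 +/- 8 nm"
```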
Figure 4: Time sequence analysis of scroll formation. TEM micrographs showing the rolling process at different time steps for N-rGOx nanoscrolls, where edges, topological defects and doping sites act as nucleation points for iron oxide nanoparticles: (a) quenched in an ice bath only 1 min after the addition of hydrazine, (b) refluxed for 5 h after the addition of hydrazine and (c) refluxed for 24 h. (d–f) Schematic illustrations of the various steps. When an optimum nanoparticle loading is reached, the net attractive forces start to roll and fold the graphene layers into cylindrical structures. Scale bars, 100, 500 and 100 nm in a, b and c, respectively. From the results above, it seems that a high degree of maghemite decoration is important to initiate the formation of nanoscrolls. Nitrogen defects in carbon nanostructures may provide additional binding sites for metal particles 38. In our case, the surface iron atoms in iron oxide act as Lewis acids and can readily coordinate with molecules/atoms that donate lone-pair electrons 39. Most amines (organic compounds or functional groups that contain a basic nitrogen) are Lewis bases, such as pyrrole (C 4 H 4 NH), aniline (C 6 H 5 NH 2) or pyridine (C 5 H 5 N), and because of the high nitrogen incorporation (13 at%) in our samples we expect that the N-rGOx surfaces contain a large number of sites with Lewis-base behaviour, which might promote the nucleation of the iron oxide nanoparticles. The role of specific nitrogen defects was elucidated by ab initio calculations of the adsorption of iron oxide clusters on nitrogen-doped graphene sheets (N-doped GNS). Seven different nitrogen defects were introduced in the centre of the 2D graphene sheet, followed by the allocation of an iron oxide (Fe 20 O 32) cluster near the defect (Supplementary Figs S13 and S14). After geometrical relaxation the adsorption energy (E ads) is calculated. The results are shown in Fig.
5a, where a negative E ads signifies an energetically stable configuration. Figure 5: Density functional theory (DFT) studies of scroll formation and adsorption energies. (a) Adsorption energy of an iron oxide cluster (52 atoms, Fe 20 O 32 with maghemite structure) on pristine (N 0 Vc 0) graphene and seven different nitrogen functionalities of a nitrogen-doped reduced graphene oxide (N-rGOx) sheet. The different functionalities refer to substitutional nitrogen N 1 Vc 0, pyrrolic N 1 H 1 Vc 1, pyridinic N 1 Vc 1, triple pyridinic N 3 Vc 1, double pyrrolic N 2 Vc 2, pyrrolic-pyridinic N 2 Vc 1 and tetra pyridinic N 4 Vc 2, where N (H) refers to the number of nitrogen (hydrogen) atoms and Vc to the number of vacancies generated in the structure. (b) Final geometrical configuration of the N 2 Vc 1 –Fe 20 O 32 super cell. As seen, the C atoms surrounding the defect are also responsible for the favourable cluster adsorption. (c) Rolling energy of graphene nanoribbons into nanoscrolls for pure carbon rGOx (rGOx-NS, black line), maghemite (Fe 20 O 32)-decorated rGOx nanoscrolls (M-rGOx, red line), maghemite N-rGOx nanoscrolls (M-N-rGOx, blue line), maghemite N-rGOx nanoscrolls decorated with two Fe 20 O 32 nanoparticles, one at each end of the graphene nanoribbon (2-M-N-rGOx, magenta line) and maghemite N-rGOx nanoscrolls decorated with two Fe 20 O 32 nanoparticles taking into account the spin–spin interactions of the two nanoparticles (2-M-N-rGOx-SP, purple line). (d) Final geometrical structure at each rolling step of maghemite N-rGOx nanoscrolls decorated with two nanoparticles (2-M-N-rGOx). As displayed in c, the energy needed for the initial rolling (step 2) decreased from 0.63 eV for the original carbon rGOx nanoscroll to 0.08 eV after nitrogen doping and decoration by two Fe 20 O 32 clusters.
In the subsequent step 3, the rolling energy of 2-M-N-rGOx is 1.49 eV lower than that of rGOx, and the rolling energy of the full nanoscroll structure is lowered by >3.3 eV compared with that of the rGOx nanoscroll. Among the studied defects, four different nitrogen-doping configurations, N 2 Vc 1, N 3 Vc 1, N 2 Vc 2 and N 4 Vc 2, interact strongly with the iron oxide cluster. All these pyridinic defects exhibit the largest adsorption energies for the Fe 20 O 32 maghemite nanoparticle, and are also the most abundant in our maghemite-decorated N-rGOx nanoscroll sample according to the X-ray photoemission spectroscopy data (Supplementary Fig. S16). These results are in good agreement with our experimental observations, where a high N-concentration is needed for an optimal loading of nanoparticles. Fig. 5b depicts the final geometry of the N 2 Vc 1 –Fe 20 O 32 complex, where the C atoms surrounding the nitrogen defect are also involved in chemical bonds between the metal oxide and the graphene surface (Supplementary Fig. S14). This originates from the fact that the nitrogen atoms modify the local electronic properties of the nearest carbon atoms 40, 41. Other studies point towards an asymmetric spin density caused by the introduction of nitrogen atoms 42. Both models, however, lead to an increased chemical reactivity. As observed from Fig. 5a, the pure carbon (N 0 Vc 0) graphene surface exhibits the lowest E ads, suggesting a poor interaction (a physisorption state). This is related to the homogeneous electron density along the graphene surface, contrary to the N-defects studied here, which exhibit highly localized states around the doping site. When N 1 Vc 0, N 1 H 1 Vc 1 or N 1 Vc 1 is present, steric repulsions produce only a weak interaction between the systems, suggesting that the space left by the vacancy is also needed to improve the interaction between nanoparticles and graphene.
Our theoretical findings are tested experimentally by replacing the highly nitrogen-doped rGOx (13%) with non-doped rGOx (GOx treated and reduced at 750 °C in an argon environment). Identical treatment conditions and decoration procedure for this sample, as described in Fig. 1 (route a), led to a significantly lower maghemite loading, with only weak signs of rolling (Supplementary Fig. S17). This is well in line with our theoretical calculations, showing that the nitrogen defects promote the formation of metal nanoparticles, which in turn initiate the formation of the nanoscrolls. Besides serving as initial nucleation sites for the iron oxide particles, the nitrogen defects at the N-rGOx sheets are involved in two additional important mechanisms. In mechanism (i), the maghemite particle can take part in a multiple binding process: after the rolling process has been initiated, the unbound upper part of the maghemite particle can form additional bonds to the N-rGOx sheet. The process is visualized in Fig. 5c, which displays the rolling energy of four different hydrogen-passivated nine-armchair graphene nanoribbons (9-aGNR) initially rolled as an Archimedean spiral. The first studied structure, rGOx-NS, is an undoped (pure carbon) 9-aGNR. The second one is M-rGOx, an undoped 9-aGNR decorated with iron oxide (Fe 20 O 32). The third one is M-N-rGOx, a nitrogen-doped 9-aGNR with pyridinic defects (N 3 Vc 1, 14 at% of N) and one Fe 20 O 32 cluster located at the edge. Finally, the fourth structure is 2M-N-rGOx, the same N-doped 9-aGNR as in structure (3) but with two Fe 20 O 32 clusters, one at each edge of the nanoribbon. The optimized geometries are depicted in Supplementary Fig. S15. Fig. 5c shows that the progression of the rolling energy for the pure carbon graphene sheet (rGOx-NS, black line) is not linear: it initially increases, but as soon as the graphene layers start to interact, the rolling energy decreases because of the mutual interlayer interaction.
This trend is in agreement with a study by Braga et al. 14 More interestingly, when nitrogen defects are incorporated in the graphene lattice and a maghemite Fe 20 O 32 cluster is adsorbed on the upper layer, both the energy barrier for the initial rolling step and the formation energy for the complete roll decrease (M-N-rGOx, blue curve). The adsorption of an additional Fe 20 O 32 cluster at the other end of the nanoribbon (2-M-N-rGOx, Fig. 5d) lowers the formation energy further, supporting our experimental observations that a high loading seems essential for efficient rolling. In addition, by also introducing magnetic spin–spin interactions for two neighbouring maghemite nanoparticles with opposite spins (consistent with the observed superparamagnetism of our system), we see a further decrease in the rolling energy, especially in the later rolling steps, when the spin–spin interactions become larger because of the decreased distance between the maghemite nanoparticles. In mechanism (ii), we observed that the N-rGOx sheets not only promote the formation of the maghemite nanoparticles but actually are essential for their stabilization. Hematite is the most stable phase of iron oxide, whereas maghemite is metastable and readily transforms to hematite. In nature, both hematite and maghemite can be found in minerals as well as in nanoparticle form, but it is generally accepted that a nanoparticle size-dependent transition from maghemite to hematite occurs around 10 nm 43. Furthermore, the stabilization of maghemite nanoparticles has been observed on silica-based supports 44. We believe that the N-rGOx sheets in our study have a similar role in stabilizing the maghemite nanoparticles by providing sufficient and suitable adsorption sites, thus inhibiting their transformation to hematite particles. This is clearly demonstrated by removing N-rGOx from the synthesis process displayed in Fig. 1.
Keeping the conditions otherwise identical leads to the formation of hematite for the methanol/water route b, whereas the methanol route a gives no formation of maghemite particles. Under this condition, the absence of the N-rGOx has the consequence that the iron chloride salt simply remains dissolved in the solution after adding hydrazine, and nanoparticle formation is fully inhibited.
Discussion
The formation of the nanoparticles and their stabilization on the N-rGOx sheets is a rather complex process. A full description will be given elsewhere, but we point out some important aspects. During the synthesis, the pH for both routes displayed in Fig. 1 starts around 8.5 (we realize, however, that the pH in pure methanol is poorly defined). With time, the pH of route a gradually increases to about 9, whereas the pH of route b decreases to about 7.5. Previous studies have shown that hydroxylation of ferric ions in solution at pH>3 leads to the formation of the thermodynamically unstable ferrihydrite 45. Depending on the acidity of the medium, ferrihydrite transforms to different forms of iron oxide. Ferrihydrite transforms to hematite at 5≤pH≤8 46, with the hematite particle size increasing with the acidity of the medium 47. pH values >8.5 are expected to promote the formation of maghemite. The role of the pH is demonstrated by adding 4 μl of HCl to route a. This slight modification of the pH has the consequence that maghemite formation is replaced by hematite formation, again with no signs of nanoscroll formation. We note, so far, that the strong dipole moment of the maghemite nanoparticles and their strong interaction with the nitrogen functionalities on the N-rGOx seem to be the two most important causes initiating the formation of the nanoscrolls. Clearly, the maghemite formation requires the support of nitrogen defects on the graphene sheets, and the solution pH has a significant role in directing the iron oxide crystallization towards maghemite instead of hematite.
However, it is likely that water, besides shifting the pH towards more neutral conditions, could also play other roles in the ordering of the hematite nanoparticles by, for example, altering the molecular arrangement of methanol 48. This could affect the polarity and size of methanol/water clusters, which are important for the solvation forces between a colloidal particle and the solvent 49, and therefore also vital in nanoparticle formation/aggregation. We conclude our study with a surprising observation. Earlier reports on graphene nanoscrolls indicated that the scrolls, once formed, are very stable and do not unroll or unfold, even under strong ultrasonication. As our study indicates that the adsorption of the maghemite particles is essential for the formation of the nanoscrolls, we investigated this by washing the maghemite-decorated N-rGOx nanoscrolls with acid (1:3 volume ratio of HCl:HNO 3 mixture for 24 h at 45 °C). Remarkably, and in accordance with our model, the nanoscrolls revert completely to the initial open N-rGOx structure upon removal of the maghemite nanoparticles. Supplementary Fig. S18a shows the TEM micrograph of the acid-washed sample, where the quadratic particles are attributed to tungsten sheets not removed by the acid treatment (Supplementary Figs S19 and S20). Acid treatment of maghemite-decorated N-rGOx nanoscrolls that do not contain tungsten yields a clean sample comprising single- or few-layer N-rGOx (Supplementary Fig. S18b). If the washing procedure is shortened, half-open structures can also be found, as seen in Supplementary Fig. S18c. Interestingly, we also observe that the washed samples, now comprising open N-rGOx, can again be decorated with maghemite particles, leading to a similar formation of maghemite-decorated N-rGOx nanoscrolls as before, however with larger diameters, probably originating from the defects introduced by the acid treatment (Supplementary Fig. S21).
In summary, N-rGOx nanoscrolls can form by adsorption of maghemite nanoparticles in a simple reflux method. We explain the efficient rolling process by a combination of the magnetic interaction between the maghemite nanoparticles and a strong interaction of the maghemite nanoparticles with nitrogen functionalities at the graphene sheets. The rolling process is fully reversible: upon removal of the adsorbed maghemite nanoparticles by acid treatment, the nanoscrolls return to their original flat, open structure. Our study gives important insight into the manipulation and synthesis of graphene nanoscrolls and the stabilization of iron oxide nanoparticles, which is relevant also for other geochemical, ecological and environmental processes in nature 50.
Methods
Synthesis
N-rGOx was produced by a method similar to previous reports 51, by first producing graphene oxide (GOx) with a modified Hummers' method. GOx was dispersed in milli-Q water (0.5 g l −1) by ultrasonication. Then, melamine (C 3 H 6 N 6, 99%, Sigma-Aldrich) was added to the GOx suspension (2.5 g l −1) and stirred until significant agglomeration was observed. The resulting dispersion was dried at 80 °C in an oven and finally the solid material was ground into a fine powder using a mortar. The GOx–melamine complex was pyrolysed at 750 °C for 45 min under an Ar atmosphere to obtain N-rGOx. For the non-doped reference sample, GOx was treated and reduced at high temperature (750 °C) under an argon environment for 45 min.
Decoration of N-rGOx by route a
The N-rGOx powder was mixed with high-purity methanol (1 g l −1) and dispersed by ultrasonication for 30 min; this dispersion was labelled as (1). Meanwhile, 77.7 μM iron(III) chloride hexahydrate (FeCl 3, Sigma-Aldrich), 60 μM tungsten hexacarbonyl (W(CO) 6, 97%, Sigma-Aldrich) and PVP (average Mw 40,000, Sigma-Aldrich) were added to a separate flask containing methanol and stirred until completely dissolved; this solution was labelled as (2).
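For anyone reproducing route a, the micromolar concentrations above translate into very small weighed masses. The sketch below converts concentration to mass for a chosen batch volume; the 100 ml volume is our assumption for illustration (the paper does not state the batch volume), and the molar mass used is that of FeCl3·6H2O.

```python
def reagent_mass_g(conc_mol_per_L, volume_L, molar_mass_g_per_mol):
    """Mass of reagent needed to reach a target molar concentration."""
    return conc_mol_per_L * volume_L * molar_mass_g_per_mol

# 77.7 uM iron(III) chloride hexahydrate (FeCl3.6H2O, M = 270.30 g/mol)
# in a hypothetical 100 ml methanol batch:
m_fecl3 = reagent_mass_g(77.7e-6, 0.100, 270.30)  # ~2.1 mg
```

Masses of a few milligrams like this are usually handled via a more concentrated stock solution that is then diluted.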
Finally, (1) and (2) were mixed under magnetic stirring, followed by gradual heating up to 75 °C under reflux conditions. When the desired temperature was reached, 0.1 ml of hydrazine (N 2 H 4, Merck) was added to the mixture dropwise to promote the hydrolysis of FeCl 3. The reaction was continued for a predefined time. The resulting material was washed and centrifuged in water and ethanol several times and finally dried at 85 °C and stored for further characterization.
Decoration of N-rGOx by route b
The same procedure as route a was followed, but the solvent was replaced by a methanol/milli-Q water mixture (1:1).
PdO decoration
N-rGOx was mixed with methanol (0.2 g l −1) and dispersed by ultrasonication for 10 min. Then, Pd acetate (1 g l −1) was added and the mixture sonicated again. The sonicated suspension was stirred overnight at 400 r.p.m. Tungsten-decorated N-rGOx was synthesized following route a, but with FeCl 3 completely absent from the process. Non-PVP-assisted maghemite and hematite decoration of N-rGOx followed routes a and b, respectively, but in the absence of PVP.
Computational methods
Electronic structure calculations were performed using density functional theory 52, 53, with the generalized gradient approximation in the Perdew–Burke–Ernzerhof parameterization as the exchange–correlation functional 54, as implemented in the SIESTA code 55. For all systems, the valence electrons were represented by a linear combination of pseudo-atomic numerical orbitals 56, using a double-ζ (DZ) basis for H, C and N atoms and a DZ-polarized basis for O and Fe atoms. In the case of iron, we used eight valence electrons (Fe: 3d 7 4s 1); this electronic configuration has been shown to improve the description of iron oxide systems 57. The real-space grid used for charge and potential integration is equivalent to a plane-wave cut-off energy of 200 Ry, and for spin-polarized systems 350 Ry was used.
Periodic boundary conditions were used, and a minimum of 20 Å of vacuum was employed to avoid lateral interactions. Sampling of the 2D Brillouin zone was carried out with (1 × 3 × 3) Monkhorst–Pack grids. All structures were relaxed by conjugate gradient minimization until the maximum force was <0.05 eV Å −1. We performed two different types of simulations that helped us to understand the experimental observations; these are described below.
Cluster adsorption
Two-dimensional (2D) periodic structures of N-doped GNS were constructed using a square super cell of 240 atoms, where seven different nitrogen-doping defects were introduced in the centre of the graphene surface: (1) substitutional nitrogen atom, N 1 Vc 0, (2) pyrrolic, N 1 H 1 Vc 1, (3) pyridinic, N 1 Vc 1, (4) pyrrolic-pyridinic, N 2 Vc 1, (5) triple pyridinic, N 3 Vc 1, (6) double pyrrolic, N 2 Vc 2 and (7) tetra pyridinic, N 4 Vc 2, where N (H) refers to the number of nitrogen (hydrogen) atoms and Vc to the number of vacancies generated in the structure; the (8) undoped case is also studied for comparison. The graphene super cell and N-doped defects are shown in Supplementary Fig. S13. First, an iron oxide cluster containing 52 atoms (Fe 20 O 32) was obtained directly from iron oxide with the maghemite structure and then geometrically optimized (the maghemite cubic cell has 56 atoms in total; we eliminated four Fe atoms to avoid highly uncoordinated atoms, resulting in a 52-atom (Fe 20 O 32) iron oxide cluster). The cluster adsorption was performed by setting the Fe 20 O 32 cluster above each of the seven defective sites, followed by an energy minimization. Finally, we calculated the adsorption energy E ads, defined as E ads = E cmplx − (E surf + E np), where E cmplx is the total energy of the N-doped GNS together with the iron oxide cluster, and E surf (E np) is the total energy of the isolated N-doped GNS (iron oxide cluster). The resulting structures are depicted in Supplementary Fig. S14.
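The adsorption-energy bookkeeping above is a simple total-energy difference. A minimal sketch follows; the energies below are placeholder numbers chosen only to show the sign convention, not DFT outputs from the paper.

```python
def adsorption_energy(E_cmplx, E_surf, E_np):
    """E_ads = E_cmplx - (E_surf + E_np); a negative value signals an
    energetically stable (bound) configuration."""
    return E_cmplx - (E_surf + E_np)

# Placeholder total energies (eV) for a doped sheet, an isolated cluster and
# their combined system.
E_ads = adsorption_energy(E_cmplx=-1503.2, E_surf=-1100.0, E_np=-400.0)
# E_ads = -3.2 eV here, i.e. a bound configuration under this convention.
```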
Graphene nanoscrolls
Hydrogen-passivated 9-aGNRs containing 272 atoms (12 unit cells) were used as starting material. The 9-aGNRs were initially rolled as an Archimedean spiral, which in polar coordinates (r,θ) is described by the equation r=a+bθ, where a is the inner core radius, r is the radial distance, θ is the polar angle and b is the interlayer distance, equal to 3.34 Å. The rolling procedure was divided into five steps until a full graphene scroll was reached, and the structures were then geometrically relaxed. To study the effect of nitrogen doping and iron oxide decoration on the rolling mechanism of graphene, we selected the following systems. (1) Structure rGOx-NS, an undoped (pure carbon) 9-aGNR. (2) Structure M-rGOx, an undoped 9-aGNR decorated with iron oxide, where one Fe 20 O 32 cluster, the same as used for the adsorption test, is located at one edge of the graphene nanoribbon. (3) Structure M-N-rGOx, a nitrogen-doped 9-aGNR created by introducing pyridinic defects (N 3 Vc 1) along the structure until 14 at% of N is reached, similar to our experimental N-doped GNS; one Fe 20 O 32 cluster is then located at one edge of the 9-aGNR. Finally, (4) structure 2M-N-rGOx, the same N-doped 9-aGNR as structure (3), now decorated with two Fe 20 O 32 clusters, one at each edge of the nanoribbon. We observed that all structures (undoped, N-doped and iron oxide decorated) change after geometric optimization, resulting in quasi-Archimedean spirals. The final structures are depicted in Supplementary Fig. S15. We have also defined the rolling energy E roll as E roll = E gnr − E scroll, where E gnr is always the total energy of the flat (step 1) graphene and E scroll is the energy of the rolled structure at one specific rolling step (that is, 2–5), which allows us to identify and compare the energy needed to perform the rolling of each structure. The resulting geometries are shown in Supplementary Fig. S15.
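The starting geometry and energy bookkeeping above can be sketched numerically: sampling r = a + bθ with the turn-to-turn spacing set to the interlayer distance, and E_roll as a total-energy difference. One caveat: in r = a + bθ the radial gain per full turn is 2πb, so the sketch derives b from the desired spacing; the 3.34 Å spacing is from the text, while the core radius, turn count and energies are illustrative assumptions.

```python
import math

def archimedean_spiral(a, spacing, n_turns, pts_per_turn=60):
    """Sample (x, y) points on r = a + b*theta, with b = spacing / (2*pi)
    so that successive turns are separated radially by `spacing`."""
    b = spacing / (2.0 * math.pi)
    n = int(n_turns * pts_per_turn)
    pts = []
    for i in range(n + 1):
        theta = 2.0 * math.pi * n_turns * i / n
        r = a + b * theta
        pts.append((r * math.cos(theta), r * math.sin(theta)))
    return pts

def rolling_energy(E_gnr, E_scroll):
    """E_roll = E_gnr - E_scroll, relative to the flat (step 1) ribbon."""
    return E_gnr - E_scroll

# 5 turns with a 3.34 A interlayer spacing and a hypothetical 2 A core radius.
pts = archimedean_spiral(a=2.0, spacing=3.34, n_turns=5)
outer_radius = math.hypot(*pts[-1])  # a + 5 * spacing = 18.7 A
```

Such sampled coordinates are the kind of starting geometry that would then be handed to the DFT relaxation, which distorts them into the quasi-Archimedean spirals mentioned in the text.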
Characterization
TEM was carried out on a JEOL JEM-1230 transmission electron microscope at 80 keV. HRTEM was carried out on a JEM-2100F at 200 keV with a Gatan Imaging Filter. Scanning transmission electron microscope (STEM) measurements were performed using a Zeiss Merlin FEGSEM (30 kV and a beam current of 210 pA) or a JEM-2100F (200 kV) operated in STEM mode with an angular dark-field detector. Raman spectroscopy was conducted on a Renishaw InVia Raman spectrometer with a charge-coupled device detector (excitation wavelength, 633 nm). X-ray photoemission spectra were recorded on a Kratos Axis Ultra delay-line detector electron spectrometer using a monochromatic Al Kα source operated at 150 W. TGA was performed on a Mettler Toledo TGA/DSC 1 LF/948 at a heating rate of 5 °C min −1 up to 1,000 °C in pure oxygen. X-ray diffraction was carried out on a Siemens D5000 diffractometer with a wavelength (Cu Kα) of 1.5418 Å and an accelerating voltage of 40 kV. A superconducting quantum interference device magnetometer (Quantum Design MPMS 5T) was used for the magnetization measurements. The temperature dependence of the magnetization was recorded in a field of 1,000 Oe using ZFC and FC protocols (Fig. 3). The ZFC magnetization was recorded on increasing temperature after cooling the sample in zero field to the lowest temperature of the experiment. The FC magnetization was recorded on re-cooling the sample in the same magnetic field as in the ZFC measurement. The magnetization versus field curves (Supplementary Fig. S10) were recorded in the field range 0–1,000 Oe in steps of 100 Oe.
Additional information
How to cite this article: Sharifi, T. et al. Formation of nitrogen-doped graphene nanoscrolls by adsorption of magnetic γ-Fe 2 O 3 nanoparticles. Nat. Commun. 4:2319 doi: 10.1038/ncomms3319 (2013).
Researchers at Umeå University, together with researchers at Uppsala University and Stockholm University, show in a new study how nitrogen-doped graphene can be rolled into perfect Archimedean nanoscrolls by adhering magnetic iron oxide nanoparticles to the surface of the graphene sheets. The new material may have very good properties for application as electrodes in, for example, Li-ion batteries. Graphene is one of the most interesting materials for future applications in everything from high-performance electronics and optical components to flexible and strong materials. Ordinary graphene consists of carbon sheets that are single or a few atomic layers thick. In the study, the researchers modified the graphene by replacing some of the carbon atoms with nitrogen atoms. By this method they obtain anchoring sites for the iron oxide nanoparticles that are decorated onto the graphene sheets in a solution process. In the decoration process one can control the type of iron oxide nanoparticles that are formed on the graphene surface, so that they form either so-called hematite (the reddish form of iron oxide that is often found in nature) or maghemite, a less stable and more magnetic form of iron oxide. "Interestingly, we observed that when the graphene is decorated by maghemite, the graphene sheets spontaneously start to roll into perfect Archimedean nanoscrolls, while when decorated by the less magnetic hematite nanoparticles the graphene remains as open sheets," says Thomas Wågberg, Senior lecturer at the Department of Physics at Umeå University. Snapshot of a partially re-opened nanoscroll: the atomic-layer-thick graphene resembles a thin foil with a few wrinkles. The nanoscrolls can be visualized as traditional "Swiss rolls," where the sponge cake represents the graphene and the creamy filling is the iron oxide nanoparticles. The graphene nanoscrolls are, however, around one million times thinner.
The results, now published in Nature Communications, are conceptually interesting for several reasons. They show that the magnetic interaction between the iron oxide nanoparticles is one of the main effects behind the scroll formation. They also show that the nitrogen defects in the graphene lattice are necessary both for stabilizing a sufficiently high number of maghemite nanoparticles and for "buckling" the graphene sheets, thereby lowering the formation energy of the nanoscrolls. The process is extraordinarily efficient: almost 100 percent of the graphene sheets are scrolled. After the decoration with maghemite particles the research team could not find any open graphene sheets. Moreover, they showed that removing the iron oxide nanoparticles by acid treatment makes the nanoscrolls open up again and revert to single graphene sheets. "Besides adding valuable fundamental understanding of the physics and chemistry of graphene, nitrogen doping and nanoparticles, we have reason to believe that the iron oxide decorated, nitrogen-doped graphene nanoscrolls have very good properties for application as electrodes in, for example, Li-ion batteries, one of the most important batteries in daily-life electronics," says Thomas Wågberg. The study was conducted within "The Artificial Leaf" project, funded by the Knut and Alice Wallenberg Foundation and bringing together physicists, chemists, and plant science researchers at Umeå University.
10.1038/ncomms3319
Physics
Laser allows solid-state refrigeration of a semiconductor material
Anupum Pant et al, Solid-state laser refrigeration of a composite semiconductor Yb:YLiF4 optomechanical resonator, Nature Communications (2020). DOI: 10.1038/s41467-020-16472-6 Journal information: Nature Communications
http://dx.doi.org/10.1038/s41467-020-16472-6
https://phys.org/news/2020-06-laser-solid-state-refrigeration-semiconductor-material.html
Abstract Photothermal heating represents a major constraint that limits the performance of many nanoscale optoelectronic and optomechanical devices including nanolasers, quantum optomechanical resonators, and integrated photonic circuits. Here, we demonstrate the direct laser refrigeration of a semiconductor optomechanical resonator >20 K below room temperature based on the emission of upconverted, anti-Stokes photoluminescence of trivalent ytterbium ions doped within a yttrium-lithium-fluoride (YLF) host crystal. Optically-refrigerating the lattice of a dielectric resonator has the potential to impact several fields including scanning probe microscopy, the sensing of weak forces, the measurement of atomic masses, and the development of radiation-balanced solid-state lasers. In addition, optically refrigerated resonators may be used in the future as a promising starting point to perform motional cooling for exploration of quantum effects at mesoscopic length scales, temperature control within integrated photonic devices, and solid-state laser refrigeration of quantum materials. Introduction Photothermal heating is a perennial challenge in the development of advanced optical devices at nanometer length scales given that a material’s optical index of refraction, bandgap, and Young’s modulus all vary with temperature. For instance, reducing the mechanical motion of an optomechanical resonator to its quantum ground state requires that the temperature ( T ) must be much less than h ν / k B , where ν is the mode frequency, h and k B are Planck and Boltzmann constants, respectively 1 . Critically, incident laser irradiances must be kept low enough to avoid photothermal heating of the resonator above cryogenic temperatures 1 , 2 , 3 , 4 , 5 . Here, we demonstrate an approach for the photothermal cooling of nanoscale optoelectronic devices through the emission of anti-Stokes photoluminescence. 
In particular, we used a micrometer-scale grain of 10% Yb 3+ -doped YLiF 4 (Yb:YLF) located at the end of a semiconductor optomechanical resonator (CdS) to cool the resonator >20 K below room temperature following excitation with a continuous wave laser source with wavelength λ 0 = 1020 nm. The idea of refrigerating metallic sodium vapors using anti-Stokes luminescence was first proposed by Pringsheim in 1929 6 . Following the development of the laser, Doppler cooling of metallic vapors led to the first observation of Bose–Einstein condensates in 1995 7 . The first experimental report of solid-state laser cooling came in 1995 using Yb 3+ -doped ZBLANP glass (Yb:ZBLANP) 8 . Since then, two decades of research in the area of solid-state laser refrigeration has culminated in the development of a solid-state optical cryo-cooler with bulk Yb:YLF single crystals grown using the Czochralski method 9 , which has cooled crystals to 91 K from room temperature. The primary advantage of using crystalline materials for solid-state laser cooling is the existence of well-defined crystal-field levels, which minimizes inhomogeneous broadening of rare-earth (RE) absorption spectra. Recently, this has enabled the first experimental demonstrations of cold Brownian motion 10 since Einstein's seminal paper 11 on Brownian motion in 1905. The increased optical entropy of the blue-shifted photons makes this cooling cycle consistent with the second law of thermodynamics 12 . Recently, laser refrigeration of a macroscopic Czochralski-grown Yb:YLF crystal was used to cool a semiconductor FTIR detector (HgCdTe) to 135 K 13 . In contrast, in this work the temperature of a nanoscale semiconductor optomechanical resonator (CdSNR) is reduced using laser refrigeration of a hydrothermally synthesized Yb:YLF microcrystal attached to it. Cooling a load with a microscopic cooler in this way enables highly localized refrigeration.
In addition, it offers a route towards rapidly achieving a steady-state temperature (on μs to ms timescales) within nanoscale devices. The device is suspended in vacuum from a silicon wafer to reduce the potential for photothermal heating of the adjacent silicon substrate. Van der Waals bonding is used to attach a low-cost, hydrothermal ceramic Yb:YLF microcrystal 10 to the end of the resonator cavity. RE (Yb 3+ ) point-defects within the YLF emit anti-Stokes photoluminescence, which cools both the YLF microcrystal and the underlying semiconductor optomechanical resonator. The YLF serves both as a local thermometer (discussed in more detail below) and as a heat sink, which extracts thermal energy from the cantilever, increasing its Young's modulus and thereby blue-shifting the cantilever's optomechanical eigenfrequency. The transmitted laser causes minimal heating of the cantilever supporting the YLF crystal, owing to the cantilever's small thickness (150 nm) and the extremely low absorption coefficient of CdS at 1020 nm 14 . The temperature of the source and the cantilever system are measured using two independent non-contact temperature measurement methods—differential luminescence thermometry 15 and optomechanical thermometry 16 , respectively—which agree well with each other. The results below suggest several potential applications for using solid-state laser refrigeration to rapidly cool a wide range of materials used in scanning probe microscopy 17 , 18 , 19 , cavity optomechanics 2 , 5 , 20 , 21 , integrated photonics 22 , 23 , 24 , the sensing of small masses and weak forces 25 , 26 , 27 , 28 , quantum information science 29 , and radiation-balanced lasers 30 . Results Optomechanical thermometry A CdSNR was placed at the end of a clean silicon substrate, and a hydrothermally grown 10% Yb:YLF crystal was placed at the free end of the CdSNR cantilever.
CdS was chosen because of its wide bandgap and low cost, though in principle any material with low near-infrared (NIR) absorption can be used. A bright-field optical image of a representative sample is shown in Fig. 1b . The silicon substrate was loaded inside a cryostat chamber such that the free end of the cantilever was suspended over the axial hole in the cryostat, and the system was pumped to ~10 −4 Torr. As shown in Fig. 1a , a 1020-nm laser was focused onto the Yb:YLF crystal at the end of the cantilever. The time-dependent intensity of the forward-scattered 1020 nm laser was measured by focusing it onto an avalanche photodiode (APD). To measure the cantilever's eigenfrequencies, the voltage vs. time signal was Fourier-transformed to obtain its thermomechanical noise spectrum 31 , 32 . A representative power spectrum measured on the sample at 300 K using a laser irradiance of 0.04 MW cm −2 is shown in Fig. 1c . A sharp peak, fitted using a standard Lorentzian with a peak position at 3648.9 Hz, was attributed to the first natural resonant frequency mode ("diving board mode") of the fluorescence-cooled nanoribbon (FCNR) system (Supplementary Fig. 1 ). As shown in Fig. 1a , the backscattered photoluminescence was collected back through the objective, transmitted through a beamsplitter, filtered using a 1000-nm short-pass filter, and focused onto the spectrometer slit. Photoluminescence (PL) spectra were recorded at different grating positions, with appropriate collection times to avoid saturating the detector, and stitched together. Ten spectra were collected using 0.04 MW cm −2 of laser irradiance for 0.1 s and averaged. The intense Yb 3+ transitions 33 , 34 in the range of 800–1000 nm, with major peaks at 960 ( E 6 – E 1 ), 972 ( E 5 – E 1 ), and 993 ( E 5 – E 3 ) nm, were observed (Fig. 1d ).
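The closely spaced Yb 3+ crystal-field (Stark) levels behind these peaks are what make Boltzmann-based luminescence thermometry (used later in the paper) possible. A rough, illustrative sketch, not the authors' full DLT analysis: estimate the E 6 –E 5 splitting from the 960 nm and 972 nm emission lines quoted above, then compute the relative thermal population of the upper level at two temperatures.

```python
import math

HC_EV_NM = 1239.84        # h*c in eV*nm
KB_EV_PER_K = 8.617e-5    # Boltzmann constant in eV/K

def level_splitting_ev(lambda_upper_nm, lambda_lower_nm):
    """Energy gap between two emitting levels that share a terminal level,
    from their emission wavelengths (shorter wavelength = higher level)."""
    return HC_EV_NM / lambda_upper_nm - HC_EV_NM / lambda_lower_nm

def boltzmann_ratio(delta_e_ev, temp_k):
    """Relative thermal population of the upper (E6) vs. lower (E5) level."""
    return math.exp(-delta_e_ev / (KB_EV_PER_K * temp_k))

# E6-E1 emission at 960 nm and E5-E1 emission at 972 nm (peaks from the text)
d_e = level_splitting_ev(960.0, 972.0)
print(f"E6-E5 splitting ~ {1000.0 * d_e:.1f} meV")            # ~15.9 meV
print(f"E6/E5 population at 300 K ~ {boltzmann_ratio(d_e, 300.0):.2f}")  # ~0.54
print(f"E6/E5 population at 250 K ~ {boltzmann_ratio(d_e, 250.0):.2f}")  # ~0.48
```

Because the population ratio falls as the crystal cools, emission originating from the higher Stark level weakens relative to the lower one; roughly speaking, this spectral redistribution is what luminescence thermometry tracks (the real analysis fits the full spectrum rather than two isolated peaks).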
A longer acquisition time (50×) was used to collect the weaker luminescence signal from the other rare-earth (RE) impurities that were not explicitly added during synthesis. The up-converted green and red emission peaks at 520, 550, and 650 nm are attributed to the transitions from 2 H 11/2 , 4 S 3/2 , and 4 F 9/2 , respectively, of trivalent erbium ions (Er 3+ ) 35 , 36 . Other minor transitions are labeled. Fig. 1: Experimental setup. a The schematic of the eigenfrequency and up-converted fluorescence measurement setup. FL, M, SPF, DL, BS, VND, and APD stand for focusing lens, mirror, 1000 nm short-pass filter, diode laser, beamsplitter, variable neutral density filter, and avalanche photodiode, respectively. b A bright-field optical image of the CdSNR cantilever supported using a silicon substrate with a Yb 3+ :YLF crystal placed at the free end. c A peak in the thermomechanical noise spectrum originating from the fundamental eigenfrequency of the CdSNR with Yb 3+ :YLF sample obtained at 0.04 MW cm −2 at 300 K. d A stitched, up-converted fluorescence spectrum obtained at room temperature using a 1020-nm excitation source (0.04 MW cm −2 ) focused on the suspended Yb 3+ :YLF crystal. A 1000-nm short-pass filter was used to cut off the laser line. Full size image Power spectra obtained from the sample at different laser irradiances, each normalized to its maximum value, are plotted in Fig. 2a . When fit to a standard Lorentzian, the peak values show a blue-shift in the eigenfrequency of the FCNR system as the laser power is increased. The fitted peak values of these power spectra are shown in Fig. 2b . As the laser irradiance was increased, the Yb:YLF source reached lower temperatures, thereby extracting more heat from the CdSNR cantilever and causing a blue-shift in the frequency owing to an increased Young's modulus of the CdS at lower temperatures.
Using a 980 nm laser with an irradiance of 0.5 MW cm −2 resulted in irreversible photothermal melting of the cantilever device, shown in Supplementary Fig. 2 . When the Yb:YLF crystal was removed from the CdSNR cantilever in Fig. 2b , the fundamental frequency measured at 0.04 MW cm −2 increased to 17384.3 Hz owing to the removal of mass from the system (~1.3 × 10 −9 g, Supplementary Fig. 3 ). As a control experiment, the eigenfrequency of the CdSNR cantilever itself was measured after the removal of the Yb:YLF crystal. The eigenfrequency of the cantilever without the crystal was then measured as a function of the laser power and is shown in Fig. 2b . The eigenfrequency red-shifted as the laser irradiance was increased, suggesting greater heating of the cantilever at higher irradiances owing to the decreasing Young's modulus at higher temperatures 16 . The temperature of the FCNR device was calibrated by increasing the temperature of the cryostat from 160 to 300 K, which showed a linear red-shift in the eigenfrequency of the cantilever. The slope of −0.389 Hz K −1 obtained using this calibration was used to measure the temperature change of the cantilever system during laser refrigeration experiments. The maximum blue-shift in the eigenfrequency was +20.6 Hz, measured at an irradiance of 0.97 MW cm −2 and compared with the lowest irradiance of 0.04 MW cm −2 . Based on the isothermal temperature calibration, ignoring temperature gradients and other optomechanical effects on the cantilever owing to increased irradiance, this blue-shift of +20.6 Hz corresponds to a temperature change of 53 K below room temperature (assuming a negligible change in temperature at a laser irradiance of 0.04 MW cm −2 ). Fig. 2: Eigenfrequency thermometry. a Normalized power spectra for a representative laser refrigeration measurement at each laser irradiance with an ambient reference temperature of 295 K ( f 0 = 3632.2 Hz).
b The frequency shift ( f − f 0 ) with laser power at 295 K for both a plain CdSNR (red) and CdSNR with Yb:YLF (blue). Each data point was obtained by taking the mean of peak position obtained from Lorentzian fits of six thermomechanical noise spectra and error bars represent one standard deviation. Note that for small standard deviations, the error bars overlap with the data point. f 0 is 3632.2 Hz and 17384.4 Hz, respectively. c Temperature calibration of the CdSNR with Yb:YLF obtained by measuring the frequency shift f − f 0 ( f 0 = 3653.6 Hz) as a function of the cryostat temperature. The data points take into account the uncertainties in measurement by averaging the frequency value obtained from six thermomechanical noise spectra recorded at the given temperature. The error bars represent one standard deviation. d Eigenfrequency measurements using co-focused 980 nm and 1020 nm lasers. The data points take into account the uncertainties in measurement by averaging the frequency value obtained from six thermomechanical noise spectra recorded at the given laser power. The error bars represent one standard deviation. Full size image To evaluate the effect on the eigenfrequency at other wavelengths, we co-focused a 980 nm laser with the 1020 nm beam. The thermomechanical noise spectra of a representative device were measured using both wavelengths (Supplementary Fig. 4 and Note 1 ). Figure 2d shows that with increasing 1020 nm laser irradiance the eigenfrequency blue-shifts, indicating that the device cools. In contrast, using a 980 nm laser leads to a heating-induced red-shift of the cantilever’s eigenfrequency with increasing irradiance. To limit damage (Supplementary Fig. 2 ) to the device we maintained the 980 nm irradiance below 0.2 MW cm −2 . 
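The isothermal calibration slope reported above (−0.389 Hz K −1 ) converts eigenfrequency shifts directly into temperature changes; a quick numerical check of the quoted maximum blue-shift, reproducing the paper's arithmetic:

```python
# Isothermal calibration slope of the FCNR device (from the text):
# the eigenfrequency red-shifts by 0.389 Hz per kelvin of heating.
SLOPE_HZ_PER_K = -0.389

def temperature_change_k(freq_shift_hz):
    """Temperature change inferred from an eigenfrequency shift, assuming
    the isothermal calibration also holds during laser refrigeration."""
    return freq_shift_hz / SLOPE_HZ_PER_K

# Maximum blue-shift of +20.6 Hz (0.97 MW cm^-2 vs. 0.04 MW cm^-2)
print(f"{temperature_change_k(+20.6):.0f} K")  # -53 K, i.e. ~53 K of cooling
```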
To obtain the absolute change in temperature it is important to consider the system using modified Euler–Bernoulli beam theory and include the effects of the laser-trapping forces 37 on the Yb:YLF crystal, which acts as a spring at the end of the cantilever (Supplementary Fig. 5 ). The eigenfrequency of the cantilever increases with increasing laser irradiance, owing to the increased optical spring constant. Analytically, the eigenfrequency ( f i ), in hertz, of a uniform rectangular beam is given by ref. 38 : $${f}_{i}=\frac{1}{2\pi }\frac{{\Omega }_{i}^{2}}{{L}^{2}}\sqrt{\frac{EI}{\rho }}.$$ (1) Here L is the length, ρ is the linear density, E is the Young's modulus, and I is the area moment of inertia of the cross-section of the beam. The i th eigenvalue of the non-dimensional frequency coefficient Ω i satisfies the following equation for a uniform rectangular cantilever with a mass M 0 and spring of spring constant K attached at the free end of the cantilever of mass m 0 . $$-\left(\frac{K}{{\Omega }_{i}^{3}}-\frac{{\Omega }_{i}{M}_{0}}{{m}_{0}}\right)[\cos ({\Omega }_{i})\sinh ({\Omega }_{i})-\sin ({\Omega }_{i})\cosh ({\Omega }_{i})]\\ +\cos ({\Omega }_{i})\cosh ({\Omega }_{i})+1=0.$$ (2) To experimentally probe the effects of the laser-trapping forces, the power-dependent eigenfrequency measurements were performed at a constant cryostat temperature of 77 K. At temperatures as low as 77 K, the cooling efficiency of the Yb:YLF crystal decreases owing to diminishing resonant absorption and red-shifting of the mean fluorescence wavelength 15 . Owing to negligible cooling with increased irradiance, and with the equilibrium temperature being maintained by the cryostat, it is assumed that any blue-shift in the eigenfrequency of the system was solely due to the greater laser-trapping force at higher irradiance (Supplementary Fig. 6 ).
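Equations (1) and (2) above can be checked numerically. In the limit K = 0 and M 0 = 0, Eq. (2) reduces to the classic free-cantilever condition cos(Ω)cosh(Ω) + 1 = 0; the sketch below finds its first root by bisection and feeds it into Eq. (1). The beam properties are illustrative placeholders (the Young's modulus in particular is an assumed value, not one reported in the paper):

```python
import math

def char_eq(om):
    """Eq. (2) with K = 0 and M0 = 0: cos(om)*cosh(om) + 1 = 0."""
    return math.cos(om) * math.cosh(om) + 1.0

def bisect(f, a, b, tol=1e-12):
    """Minimal bisection root finder; f(a) and f(b) must bracket a root."""
    fa = f(a)
    while b - a > tol:
        mid = 0.5 * (a + b)
        if fa * f(mid) <= 0.0:
            b = mid
        else:
            a, fa = mid, f(mid)
    return 0.5 * (a + b)

# First non-dimensional frequency coefficient of a free uniform cantilever
omega1 = bisect(char_eq, 1.5, 2.5)
print(f"Omega_1 = {omega1:.4f}")  # 1.8751

def eigenfrequency_hz(om, length, E, I, rho_lin):
    """Eq. (1): f = (1/2pi) * (Omega^2 / L^2) * sqrt(E*I/rho),
    with rho_lin the linear density in kg/m."""
    return om**2 / (2.0 * math.pi * length**2) * math.sqrt(E * I / rho_lin)

# Illustrative CdS nanoribbon: 53 um x 2.5 um x 150 nm, density 4820 kg/m^3,
# assumed Young's modulus 20 GPa (placeholder; thin-film values vary widely)
L, W, H = 53e-6, 2.5e-6, 150e-9
I_beam = W * H**3 / 12.0    # area moment of inertia of the cross-section
rho_lin = 4820.0 * W * H    # linear density
f1 = eigenfrequency_hz(omega1, L, 20e9, I_beam, rho_lin)
print(f"f_1 ~ {f1 / 1000:.1f} kHz")
```

With these placeholder values the fundamental mode lands in the tens-of-kHz range, the same order as the measured bare-cantilever eigenfrequency; attaching the YLF mass (M 0 > 0 in Eq. (2)) lowers it substantially.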
Therefore, the excess blue-shift at room temperature (6 ± 2.2 Hz) can be attributed to the change in Young's modulus due to cooling of the CdSNR cantilever (Supplementary Fig. 7 ). According to this calibration, the cantilever's temperature is reduced 15.4 ± 5.6 K below room temperature (Supplementary Fig. 8 ). As cantilever eigenfrequencies are calibrated under isothermal conditions, the temperatures measured via cantilever eigenfrequencies during laser refrigeration do not directly measure the coldest point within the cantilever, but rather a lower bound of the absolute minimum achievable temperature decrease 16 . This is a consequence of temperature gradients within the cantilever that lead to gradients of the cantilever's Young's modulus. Based on finite element eigenfrequency modeling of the cantilever with a spatially varying Young's modulus, the coldest point in the cantilever can be calculated (Supplementary Fig. 9 ). Below we present a steady-state, heat-transfer model of the laser-cooled cantilever system to quantify how thermal gradients within CdSNR cantilevers affect eigenfrequency measurements during laser cooling experiments. Heat-transfer analysis A cantilever of length ' L ', width ' W ', and thickness ' H ' is modeled with a YLF crystal placed at the free end (see Fig. 3a ). The YLF crystal is approximated as a cuboid with sides of H YLF = 6, L YLF = 7.5 and W YLF = 6 μm, such that the volume and aspect ratio were similar to the tetragonal bi-pyramidal YLF crystal used experimentally. Fig. 3: Heat-transfer analysis. a The geometry of the FCNR system used for analytical and finite element heat-transfer modeling. b The steady-state temperature along the length of the CdSNR calculated using the analytical one-dimensional solution obtained assuming all of the cooling power produced by the YLF crystal flows through the CdSNR cross-section at L c .
Full size image At steady-state the temperature distribution in the nanoribbon satisfies the energy equation given by: $$\frac{{\partial }^{2}T}{\partial {x}^{2}}+\frac{{\partial }^{2}T}{\partial {y}^{2}}+\frac{{\partial }^{2}T}{\partial {z}^{2}}=0.$$ (3) Heat transfer to the surroundings by conduction and convection is absent owing to the vacuum surrounding the cantilever. Radiant (blackbody) energy transfer to or from the surroundings is negligible owing to the relatively low temperatures of the cantilever and its small surface area. Therefore, the heat flow within the cantilever is one-dimensional and Eq. ( 3 ) reduces to: $$\frac{{d}^{2}T}{d{x}^{2}}=0,$$ (4) which has the general solution: $$T(x)={C}_{1}x+{C}_{2}.$$ (5) Assuming negligible interfacial resistance between the cantilever and the underlying silicon substrate, the temperature at the silicon/CdS interface ( x = 0) is the cryostat temperature T 0 . Consequently, the boundary condition at the base of the nanoribbon is T (0) = T 0 . If all of the heat generated in the YLF crystal is transferred to or from the CdSNR across the interface at x = L c , the heat flux at the interface is given by: $$\kappa \frac{dT}{dx}({L}_{{\rm{c}}})=\frac{{\dot{Q}}_{{\rm{c}}}}{HW},$$ (6) in which \({\dot{Q}}_{c}\) is the rate of heat removal from the YLF crystal, and κ is the thermal conductivity of CdS. 
Applying the boundary conditions, the temperature distribution in the CdSNR becomes: $$T(x)=\frac{{\dot{Q}}_{{\rm{c}}}}{\kappa HW}x+{T}_{0}.$$ (7) It is assumed that the relatively large thermal conductivity of the YLF crystal (~6 W m −1 K −1 ) will lead to a nearly uniform temperature in the crystal given by: $$T({L}_{{\rm{c}}})=\frac{{\dot{Q}}_{{\rm{c}}}}{\kappa HW}{L}_{{\rm{c}}}+{T}_{0}.$$ (8) The rate of laser energy absorbed per unit volume \(Q^{\prime\prime\prime}={Q}_{{\rm{abs}}}/V\) is given by: $$Q^{\prime\prime\prime}=\frac{4\pi n^{\prime} n^{\prime\prime} }{{\lambda }_{0}{Z}_{0}}({\bf{E}}\cdot {{\bf{E}}}^{* }).$$ (9) Here \(n=n^{\prime} -in^{\prime\prime}\) is the complex refractive index of the medium, λ 0 is the wavelength in vacuum, Z 0 is the free space impedance ( Z 0 = 376.73 Ω), and E * is the complex conjugate of the electric field vector within the YLF crystal. Up-converted, anti-Stokes luminescence follows laser absorption, cooling the crystal. We neglect the absorption of the incident laser by the underlying CdS cantilever due to its small thickness (154 nm) and low absorption coefficient at 1020 nm (6.7 × 10 −13 cm −1 ) relative to what has been reported 15 for YLF (~1 cm −1 ). Discussion Given that eigenfrequency measurements can only provide a lower bound of the cantilever’s temperature, a more direct approach must be used to measure the temperature at the end of the cantilever. Differential luminescence thermometry (DLT) 9 , 34 was used to measure the temperature of the YLF at the end of the cantilever based on using a Boltzmann distribution to analyze emission from different crystal-field (Stark) levels. We obtained a temperature drop of 23.6 K below room temperature ( \(\Delta {T}_{\max }\) ) at an irradiance ( I 0 ) of 0.97 MW cm −2 corresponding to an incident power P 0 = 40.1 mW and spot radius w 0 = 1.15 μm (Supplementary Fig. 10 ). 
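Equation (8) can be inverted to obtain the heat-removal rate from this measured temperature drop; a minimal sketch using the cantilever dimensions and CdS thermal conductivity given in the text:

```python
def cooling_power_w(delta_T, kappa, H, W, L_c):
    """Invert Eq. (8): Q_c = kappa*H*W*(T(L_c) - T0)/L_c, i.e. all heat
    extracted at the YLF end is conducted along the CdSNR cross-section."""
    return kappa * H * W * delta_T / L_c

# From the text: delta_T = 23.6 K (DLT), H = 150 nm, W = 2.5 um,
# L_c = 53 um, kappa(CdS) = 20 W m^-1 K^-1
Q_c = cooling_power_w(23.6, 20.0, 150e-9, 2.5e-6, 53e-6)
print(f"Q_c = {Q_c:.2e} W")  # 3.34e-06 W, matching the reported value
```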
Using the measured value of T ( L c ) − T 0 = 23.6 K, H = 150 nm, W = 2.5 μm, L c = 53 μm, and κ = 20 W m −1 K −1 39 , we calculate a cooling power of \({\dot{Q}}_{{\rm{c}}}\) = 3.34 × 10 −6 W (Supplementary Fig. 11 ). The resultant temperature gradient along the length of the device is shown in Fig. 3b . Based on the temperature gradient, by modeling a spatially varying Young’s modulus, the coldest point in the cantilever from eigenfrequency measurements was calculated to be between 26 and 58 K below room temperature (Supplementary Fig. 9 ). This agrees well with the coldest temperature measured using DLT. An absorption coefficient and cooling efficiency of 0.61 cm −1 and 1.5%, respectively, have been reported previously for a bulk YLF crystal doped with 10% Yb-ions 15 . Based on this absorption coefficient, considering full illumination of the Yb:YLF crystal, a maximum cooling power of 2.2 μW would be generated when irradiated by a 40.1 mW pump laser. This cooling power is smaller than the experimental cooling power reported above. The discrepancy can be explained by two factors related to the symmetric morphology of the YLF microcrystals. First, the size of the YLF microcrystals is within the Mie-regime for light scattering and internal optical fields may be enhanced considerably owing to morphology dependent cavity resonances. Supplementary Fig. 12 presents two-dimensional finite-difference time-domain calculations showing that internal optical power within a YLF microcrystal can be twice as large as the incident power owing to internal cavity resonances. Consequently, first-order linear absorption calculations may underestimate the cooling power owing to an underestimation of the internal optical power of the pumping laser. Second, a combination of light-scattering and multiple internal reflections of the pump beam within the microcrystal can excite a larger volume of the crystal compared with the incident spot size. Supplementary Fig. 
13 demonstrates that fluorescence is emitted throughout the YLF microcrystal, including far from where the excitation laser is focused. In conclusion, we demonstrate an approach to decrease the temperature of a nanoscale semiconductor optomechanical resonator by >20 K below room temperature using solid-state laser refrigeration of a Yb:YLF crystal. Thermometry and calibration of the fabricated device are performed using two independent methods—optomechanical eigenfrequencies and differential luminescence thermometry, respectively—which compare well with each other. A modified Euler–Bernoulli model is used to account for the laser-trapping forces, and the measured temperatures are validated using heat-transfer theory. A maximum drop in temperature of 23.6 K below room temperature was measured near the tip of the cantilever. Among other applications in scanning probe microscopy and the exploration of quantum effects at mesoscopic length scales 28 , optical refrigeration of a mechanical resonator could have significant implications for weak-force and precision mass sensing applications 26 , 27 , for the development of composite materials for radiation-balanced lasers 30 , and for local temperature control in integrated photonic devices 23 , 24 . In the future, solid-state laser refrigeration may also assist in the cooling of optomechanical devices by enabling the use of higher laser irradiances in the absence of detrimental laser heating. Methods Cadmium sulfide nanoribbon synthesis The CdSNRs were synthesized using a chemical vapor transport method discussed in a previous publication 16 . A precursor cadmium sulfide (CdS) powder was loaded in an alumina boat, which was placed at the center of a quartz tube. The silicon (100) substrates were prepared by dropcasting gold nanocrystals in chloroform. The precursor was heated to 840 °C.
A carrier gas consisting of argon and 5% hydrogen was used to transport the evaporated species over to a growth substrate placed at the cooler upstream region near the edge of the furnace. Ytterbium-doped lithium yttrium fluoride synthesis The hydrothermal method used to synthesize single crystals of 10% ytterbium-doped lithium yttrium fluoride (Yb:YLF) nanocrystals was performed following modifications to Roder et al. 10 . Yttrium chloride (YCl 3 ) hexahydrate and ytterbium chloride hexahydrate (YbCl 3 ) were of 99.999% and 99.998% purity, respectively. Lithium fluoride (LiF), lithium hydroxide monohydrate (LiOH ⋅ H 2 O), ammonium bifluoride (NH 4 HF 2 ), and ethylenediaminetetraacetic acid (EDTA) were analytical grade and used directly in the synthesis without any purification. All chemicals were purchased from Sigma-Aldrich. For the synthesis of Yb:YLF, 0.585 g (2 mmol) of EDTA and 0.168 g (4 mmol) LiOH ⋅ H 2 O were dissolved in 10 mL Millipore DI water and heated to ~80 °C while stirring. After the EDTA was dissolved, 1.8 mL of 1.0 M YCl 3 and 0.2 mL of 1.0 M YbCl 3 were added and continually stirred for 1 hour. This mixture is denoted as solution A. Subsequently, 0.105 g (4 mmol) of LiF and 0.34 g (8 mmol) of NH 4 HF 2 were separately dissolved in 5 mL Millipore DI water and heated to ~70 °C while stirring for 1 h. This solution is denoted as solution B. After stirring, solution B was then added dropwise into solution A while stirring to form a homogeneous white suspension. After 30 minutes, the combined mixture was then transferred to a 23 mL Teflon-lined autoclave (Parr 4747 Nickel Autoclave Teflon liner assembly) and heated to 180 °C for 72 h in an oven (Thermo Scientific Heratherm General Protocol Oven, 65 L). After the autoclave cooled naturally to room temperature, the Yb:YLF particles were sonicated and centrifuged at 4000 rpm with ethanol and Millipore DI water three times. 
The final white powder was then dried at 60 °C for 12 hours, followed by calcination at 300 °C for 2 hours in a Lindberg Blue furnace inside a quartz tube. Device fabrication Using a tungsten dissecting probe (World Precision Instruments) with a sufficiently small tip radius-of-curvature (<1 μm), mounted onto a nano-manipulator (Märzhäuser-Wetzlär), the CdSNRs were picked up and placed at the edge of a clean silicon substrate. A Yb:YLF crystal was then placed at the free end of the cantilever using the same process. Eigenfrequency measurement The optomechanical thermometry setup consists of a 1020 nm diode laser (QPhotonics) focused, using a 50× long-working-distance objective, onto the sample placed inside a cryostat (Janis ST500) modified by drilling an axial hole through the sample stage. The forward-scattered light was collected through the axial hole and focused onto an APD (Thorlabs APD430A). The time-domain voltage signal from the APD was then Fourier-transformed to obtain the thermomechanical noise spectrum with characteristic peaks from the fundamental eigenfrequency modes of the cantilever. For temperature calibration, the thermomechanical noise spectrum was recorded by varying the cryostat temperature and fitting the resulting peaks using a standard Lorentzian. Each data point represents the average of six measurements and the error bars represent the standard deviation. Data availability The data that support the findings of this study are available from the corresponding authors on reasonable request. Code availability The code and algorithms used to generate the results in this study are available from the corresponding authors on reasonable request.
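The eigenfrequency measurement described above boils down to Fourier-transforming the APD voltage trace and locating the resonance peak in the resulting noise spectrum. A self-contained sketch of that step in pure Python; the sampling rate, record length, and noiseless test tone are illustrative placeholders, and the Lorentzian peak fit is omitted:

```python
import math

def dft_magnitude(signal, k):
    """Magnitude of DFT bin k, computed by direct summation (O(N) per bin)."""
    n = len(signal)
    re = sum(s * math.cos(2.0 * math.pi * k * i / n) for i, s in enumerate(signal))
    im = sum(s * math.sin(2.0 * math.pi * k * i / n) for i, s in enumerate(signal))
    return math.hypot(re, im)

def peak_frequency_hz(signal, fs, f_lo, f_hi):
    """Strongest spectral peak between f_lo and f_hi (Hz), to bin resolution."""
    n = len(signal)
    k_lo, k_hi = int(f_lo * n / fs), int(f_hi * n / fs)
    k_best = max(range(k_lo, k_hi + 1), key=lambda k: dft_magnitude(signal, k))
    return k_best * fs / n

# Synthetic APD trace: a 3649 Hz "diving board" oscillation, 0.1 s at 50 kS/s
fs, n = 50_000, 5_000
trace = [math.sin(2.0 * math.pi * 3649.0 * i / fs) for i in range(n)]
# Bin resolution is fs/n = 10 Hz, so the peak is recovered to within one bin
print(f"peak near {peak_frequency_hz(trace, fs, 3000.0, 4200.0):.0f} Hz")
```

A real pipeline would use an FFT and fit a Lorentzian to the peak, as the authors do, to resolve sub-bin shifts such as the ~0.4 Hz K −1 calibration slope.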
To the general public, lasers heat objects. And generally, that would be correct. But lasers also show promise to do quite the opposite—to cool materials. Lasers that can cool materials could revolutionize fields ranging from bio-imaging to quantum communication. In 2015, University of Washington researchers announced that they could use a laser to cool water and other liquids below room temperature. Now that same team has used a similar approach to refrigerate something quite different: a solid semiconductor. As the team shows in a paper published June 23 in Nature Communications, they could use an infrared laser to cool the solid semiconductor by at least 20 degrees C, or 36 F, below room temperature. The device is a cantilever—similar to a diving board. Like a diving board after a swimmer jumps off into the water, the cantilever can vibrate at a specific frequency. But this cantilever doesn't need a diver to vibrate. It can oscillate in response to thermal energy, or heat energy, at room temperature. Devices like these could make ideal optomechanical sensors, where their vibrations can be detected by a laser. But that laser also heats the cantilever, which dampens its performance. "Historically, the laser heating of nanoscale devices was a major problem that was swept under the rug," said senior author Peter Pauzauskie, a UW professor of materials science and engineering and a senior scientist at the Pacific Northwest National Laboratory. "We are using infrared light to cool the resonator, which reduces interference or 'noise' in the system. This method of solid-state refrigeration could significantly improve the sensitivity of optomechanical resonators, broaden their applications in consumer electronics, lasers and scientific instruments, and pave the way for new applications, such as photonic circuits."
The team is the first to demonstrate "solid-state laser refrigeration of nanoscale sensors," added Pauzauskie, who is also a faculty member at the UW Molecular Engineering & Sciences Institute and the UW Institute for Nano-engineered Systems. The results have wide potential applications due to both the improved performance of the resonator and the method used to cool it. The vibrations of semiconductor resonators have made them useful as mechanical sensors to detect acceleration, mass, temperature and other properties in a variety of electronics—such as accelerometers to detect the direction a smartphone is facing. Reduced interference could improve performance of these sensors. In addition, using a laser to cool the resonator is a much more targeted approach to improve sensor performance compared to trying to cool an entire sensor. In their experimental setup, a tiny ribbon, or nanoribbon, of cadmium sulfide extended from a block of silicon—and would naturally undergo thermal oscillation at room temperature. An image of the team's experimental setup, taken using a bright-field microscope. The silicon platform, labeled "Si," is shown in white at the bottom of the image. The nanoribbon of cadmium sulfide is labeled "CdSNR." At its tip is the ceramic crystal, labeled "Yb:YLF." Scale bar is 20 micrometers. Credit: Pant et al. 2020, Nature Communications At the end of this diving board, the team placed a tiny ceramic crystal containing a specific type of impurity, ytterbium ions. When the team focused an infrared laser beam at the crystal, the impurities absorbed a small amount of energy from the crystal, causing it to glow in light that is shorter in wavelength than the laser color that excited it. This "blueshift glow" effect cooled the ceramic crystal and the semiconductor nanoribbon it was attached to. 
"These crystals were carefully synthesized with a specific concentration of ytterbium to maximize the cooling efficiency," said co-author Xiaojing Xia, a UW doctoral student in molecular engineering. The researchers used two methods to measure how much the laser cooled the semiconductor. First, they observed changes to the oscillation frequency of the nanoribbon. "The nanoribbon becomes more stiff and brittle after cooling—more resistant to bending and compression. As a result, it oscillates at a higher frequency, which verified that the laser had cooled the resonator," said Pauzauskie. The team also observed that the light emitted by the crystal shifted on average to longer wavelengths as they increased laser power, which also indicated cooling. Using these two methods, the researchers calculated that the resonator's temperature had dropped by as much as 20 degrees C below room temperature. The refrigeration effect took less than 1 millisecond and lasted as long as the excitation laser was on. "In the coming years, I will eagerly look to see our laser cooling technology adapted by scientists from various fields to enhance the performance of quantum sensors," said lead author Anupum Pant, a UW doctoral student in materials science and engineering. Researchers say the method has other potential applications. It could form the heart of highly precise scientific instruments, using changes in oscillations of the resonator to accurately measure an object's mass, such as a single virus particle. Lasers that cool solid components could also be used to develop cooling systems that keep key components in electronic systems from overheating.
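The first thermometry method above (the cooled ribbon stiffens, so its resonance frequency rises) can be turned into a temperature estimate with a simple model. This is a sketch under stated assumptions: f is proportional to sqrt(E) for a cantilever, and Young's modulus softens linearly with temperature, E(T) = E0·[1 − αE·(T − T0)]; the coefficient αE and the frequencies are illustrative values, not numbers from the paper.

```python
def temperature_change_from_frequency(f0, f_cooled, alpha_E=1e-4):
    """Infer T - T0 (kelvin) from a resonance-frequency shift.

    Assumes f proportional to sqrt(E) and E(T) = E0 * (1 - alpha_E*(T - T0)),
    with alpha_E an assumed softening coefficient per kelvin.
    A frequency increase (f_cooled > f0) gives a negative result: cooling.
    """
    stiffness_ratio = (f_cooled / f0) ** 2  # E / E0, since f ~ sqrt(E)
    return (1.0 - stiffness_ratio) / alpha_E

# Illustrative: a 0.1% upward frequency shift with alpha_E = 1e-4 per K
# corresponds to roughly 20 K of cooling.
dT = temperature_change_from_frequency(1.00e6, 1.001e6)
```

The design point is that the frequency shift is quadratic in f_cooled/f0, so even a small fractional shift resolves a sizeable temperature change when αE is small.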
10.1038/s41467-020-16472-6
Chemistry
Researchers find a way to produce free-standing films of perovskite oxides
Dianxiang Ji et al. Freestanding crystalline oxide perovskites down to the monolayer limit, Nature (2019). DOI: 10.1038/s41586-019-1255-7 Journal information: Nature
http://dx.doi.org/10.1038/s41586-019-1255-7
https://phys.org/news/2019-06-free-standing-perovskite-oxides.html
Abstract Two-dimensional (2D) materials such as graphene and transition-metal dichalcogenides reveal the electronic phases that emerge when a bulk crystal is reduced to a monolayer 1 , 2 , 3 , 4 . Transition-metal oxide perovskites host a variety of correlated electronic phases 5 , 6 , 7 , 8 , 9 , 10 , 11 , 12 , so similar behaviour in monolayer materials based on transition-metal oxide perovskites would open the door to a rich spectrum of exotic 2D correlated phases that have not yet been explored. Here we report the fabrication of freestanding perovskite films with high crystalline quality almost down to a single unit cell. Using a recently developed method based on water-soluble Sr 3 Al 2 O 6 as the sacrificial buffer layer 13 , 14 we synthesize freestanding SrTiO 3 and BiFeO 3 ultrathin films by reactive molecular beam epitaxy and transfer them to diverse substrates, in particular crystalline silicon wafers and holey carbon films. We find that freestanding BiFeO 3 films exhibit unexpected and giant tetragonality and polarization when approaching the 2D limit. Our results demonstrate the absence of a critical thickness for stabilizing the crystalline order in the freestanding ultrathin oxide films. The ability to synthesize and transfer crystalline freestanding perovskite films without any thickness limitation onto any desired substrate creates opportunities for research into 2D correlated phases and interfacial phenomena that have not previously been technically possible. Main Two-dimensional (2D) materials have recently generated substantial interest owing to their remarkable electronic properties and potential for electronic applications 1 , 2 , 3 , 4 . In conventional 2D materials, such as graphene and transition-metal dichalcogenides, these properties are largely controlled by the weakly interacting electrons of the s and p orbitals 1 , 4 . 
By contrast, the strongly interacting electrons of the d orbital in transition-metal oxide perovskites give rise to a rich spectrum of exotic phases, including high-temperature superconductivity 5 , 6 , colossal magnetoresistance 7 , 8 , Mott metal–insulator transitions 9 , 10 and multiferroicity 11 , 12 . Like conventional 2D materials, 2D transition-metal oxide perovskites are expected to exhibit new fundamental properties and to enable the development of multifunctional electronic devices. This prospect is, however, hindered by the technical challenges of exfoliating three-dimensional oxide crystals or lifting strongly bonded ultrathin-oxide films from the substrate. Many techniques have been used to synthesize freestanding films, involving selective etching of the buffer layer using acid 15 , 16 , dissolving the NaCl substrate using water 17 and melting the film–substrate interface using laser 18 and ion implantations 19 . These techniques are difficult to generalize to a wide range of perovskite oxides. Recently, Lu et al. developed a method of synthesizing high-quality freestanding perovskite oxides using water-soluble Sr 3 Al 2 O 6 (SAO) as a sacrificial buffer layer 13 , 14 , providing a step towards the search for similarly exotic 2D correlated phases in perovskite oxides. In this work we show that high-quality freestanding perovskite oxides such as SrTiO 3 (STO) and BiFeO 3 (BFO) films as thin as a single unit cell can be synthesized and transferred onto any desired substrates, for example, silicon wafer and holey carbon. Surprisingly, freestanding BFO films exhibit a rhombohedral-like (R-like) to tetragonal-like (T-like) phase transition and show both a large c / a ratio and giant polarization when approaching the ultimate 2D limit. Epitaxial SAO and the subsequent perovskite oxide films were grown by reactive molecular beam epitaxy instead of the pulsed laser deposition used in earlier works 13 , 14 . 
The growth condition of the SAO sacrificial buffer layers was carefully optimized to show clear fourfold reconstructed electron diffraction patterns that exhibit four intensity oscillation periods during the growth of the one-unit-cell SAO, as described in Methods and Extended Data Fig. 1 . A series of freestanding STO and BFO films of various thicknesses were grown and transferred. All samples exhibit atomic flat surfaces, possessing clear atomic steps and terraces (see Extended Data Fig. 1 ). Freestanding films several millimetres square were released by dissolving the SAO buffer layer in deionized water with mechanical support from a polydimethylsiloxane (PDMS) tape or silicone-coated polyethylene terephthalate (PET), and then transferred onto the desired substrate for scanning transmission electron microscopy (STEM) imaging: silicon wafer for cross-sectional imaging and holey carbon TEM grids for plan-view imaging (Fig. 1 ). Fig. 1: Growth and transfer of ultrathin freestanding SrTiO 3 films. a , Schematic of a film with an SAO buffer layer. b , The sacrificial SAO layer is dissolved in water to release the top oxide films with the mechanical support of PDMS. c , New heterostructures and interfaces are formed when the freestanding film is transferred onto the desired substrate. d , e , Atomically resolved cross-sectional ( d ) and low-magnification plan-view ( e ) HAADF images of a two-unit-cell freestanding STO film transferred to a silicon wafer and a holey carbon TEM grid, respectively. f , g , Atomically resolved cross-sectional ( f ) and low-magnification plan-view ( g ) HAADF images of a representative four-unit-cell freestanding STO film, showing the excellent flexibility of ultrathin freestanding films. Full size image We first demonstrate that ultrathin single-crystalline freestanding STO films as thin as one unit cell can be fabricated and transferred. As shown in Fig. 
1d, e and 2 , freestanding STO films of various thicknesses were synthesized by oxide molecular beam epitaxy and characterized by TEM using selected-area electron diffraction (SAED) and plan-view and cross-sectional high-angle annular dark-field (HAADF) imaging with atomic resolution. The SAED diffraction spots observed are sharp, narrow and round, indicating that all films, including the one-unit-cell-thick film, are of single-crystalline form. Atomically resolved cross-sectional and plan-view HAADF images show high crystalline quality in four-, three- and two-unit-cell-thick freestanding STO films, well below the previously reported five-unit-cell critical thickness below which the crystalline lattice will collapse 14 . We note that with decrease in thickness the freestanding STO films become extremely sensitive to electron beams, experiencing knock-on damage and radiolysis under high-energy beams; damage was observed even at low-dose SAED measurements (about 0.6 e Å −2 ; where e is the electron charge; Extended Data Fig. 3 ). As a result, no high-quality HAADF images could be acquired for the one-unit-cell sample owing to the much higher irradiation (around 2 × 10 4 e Å −2 ) required for that measurement. Nonetheless, the high-quality SAED data obtained for all thicknesses and the atomically resolved HAADF images for the four-, three-, and two-unit-cell-thick films indicate that there is no critical thickness limitation for freestanding STO films. In fact, ultrathin freestanding oxide films exhibit ‘ripples’, or roughened features (Extended Data Fig. 8 ) that are similar to those seen in graphene 20 and that may help to stabilize the freestanding oxide films at the 2D limit. Interestingly, freestanding oxide films, although ceramic-like (brittle) in bulk form, become flexible at a thickness of a few unit cells and can be bent and even folded in on themselves (Fig. 
1f, g ), suggesting excellent potential for ultrathin freestanding oxides in flexible multifunctional electronics applications. Fig. 2: Synthesis of ultrathin freestanding STO films of high crystalline quality. a – c , Cross-sectional HAADF images ( a ), SAED patterns ( b ), and plan-view HAADF images ( c ) of ultrathin freestanding STO films of various unit-cell thicknesses, showing no critical thickness limitation for the synthesis of freestanding crystalline oxide perovskite films. As the one-unit-cell freestanding film is extremely sensitive to the electron beam, it can survive only at low-dose SAED measurements. Full size image In addition to non-polar oxides like STO in which each atomic layer is charge neutral, we also fabricated ultrathin freestanding polar BFO, namely the extensively investigated single-phase multiferroic compound. At room temperature, bulk BFO is rhombohedral with a large ferroelectric polarization pointing along the <111> directions of its pseudocubic unit cell ( a = 3.96 Å, α = 89.4°). In the thin film form, epitaxial strain imposed by the substrate can stabilize BFO into a wide range of crystal structures, including the T-like, R-like, orthorhombic, monoclinic and triclinic phases 21 , 22 , 23 , 24 , 25 , 26 . In particular, under large compressive strain the T-like phase exhibits a large c / a ratio of about 1.25 (refs. 22 , 27 , 28 ) and a giant ferroelectric polarization as large as 150 μC cm −2 (ref. 29 ), which is of great interest for applications such as piezoelectrics with enhanced electromechanical response. Structural characterizations indicate the high crystalline quality of freestanding BFO films down to monolayer thickness (Extended Data Fig. 2 ). Like freestanding STO films, the crystalline lattice of two-unit-cell-thick and one-unit-cell-thick freestanding BFO films survives only at low-dose SAED measurements.
Remarkably, ultrathin freestanding BFO films show a structural transition from an R-like phase in the as-grown films to a T-like phase in their freestanding form (Fig. 3a, b ). Further study reveals that this structural transition takes place only in four-unit-cell and thinner freestanding films (Figs. 3c , 4d ). As the thickness of freestanding BFO films decreases, the in-plane lattice shrinks and the out-of-plane lattice expands, resulting in an abnormally large c / a ratio (up to 1.22) and polarization (140 μC cm −2 ) along the out-of-plane direction (Fig. 3c, d ), in contrast to a c / a value of about 1.0 and a polarization of 90–100 μC cm −2 in the unstrained bulk rhombohedral phase 30 . This resembles the strain-driven T-like phase 22 , 27 , 28 , 29 , but no strain is needed in these ultrathin freestanding BFO films. Fig. 3: Giant polarization and lattice distortion in ultrathin freestanding BFO films. a , b , Cross-sectional HAADF images of a three-unit-cell BFO film before ( a ) and after ( b ) releasing the film, showing an R-like phase with polarization along the <111> directions and a T-like phase with polarization along the <001> direction, respectively. c , d , The c / a ratio ( c ) and the offset ( δc and δa ) of Fe ions from the centres of four neighbouring Bi ions ( d ) as a function of the thickness of freestanding BFO films, showing the evolution from an R-like to a T-like phase as the thickness of the film decreases. The error bars in c and d represent the fitting error of the lattice constants. e , f , PFM amplitude–voltage butterfly loop ( e ) and phase–voltage hysteresis loop ( f ) of a four-unit-cell freestanding BFO film on a conductive silicon substrate, showing that the polarization is switchable. d 33 , out-of-plane piezoelectric coefficient. Full size image Fig. 4: Calculated giant polarization and lattice distortion in ultrathin freestanding BFO films. a , Structure of a three-unit-cell-thick BFO film.
The off-centre displacement ( δ cz ) is defined as the distance along the out-of-plane direction between the centres of the neighbouring Bi ions (dotted black line) and Fe ions (dotted blue line). b , c , Evolution of the average c / a ratio ( b ) and δ cz ( c ) as a function of thickness shows the increase of c / a and the polarization in freestanding BFO films when approaching the 2D limit. Full size image Moreover, out-of-plane piezoresponse force microscopy (PFM) measurements on all freestanding BFO films show clear hysteresis loops (Fig. 3e, f and Extended Data Fig. 4 ), indicating that the polarization is switchable even in a film as thin as two unit cells. A strong in-plane domain contrast appears in thicker freestanding BFO films and disappears at thinner (two and four unit cells thick) films (Extended Data Figs. 4 , 5 ). This indicates the structural transition from an R-like phase (polarization along the <111> directions) in thick freestanding films to a T-like phase (polarization along the <001> direction) in ultrathin freestanding films, consistent with the phase transition observed in the HAADF measurements. The unexpected giant tetragonality and polarization in ultrathin freestanding BFO films highlights the possibility of discovering unexpected phases in ultrathin freestanding perovskite oxides. To understand the origin of the lattice distortion and giant polarization in ultrathin freestanding BFO films, we performed first-principles calculations on BFO slabs of various thicknesses (Fig. 4 ). We relaxed ultrathin slabs of bulk rhombohedral BFO and found that the off-centre displacement δ cz —defined as the distance between the Fe ions and the centre of four neighbouring Bi ions along the out-of-plane direction—and the c / a ratio exhibit large values, consistent with our experimental observations. Moreover, thickness-dependent calculations show an R-like to T-like phase transition as the freestanding BFO film approaches the 2D limit (Fig. 4b, c ). 
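The two distortion metrics used above, the tetragonality c/a and the out-of-plane off-centring δcz, reduce to simple arithmetic on atomic coordinates. A minimal sketch follows; the lattice constants and z-coordinates are illustrative assumptions, not the measured data.

```python
def c_over_a(a, c):
    """Tetragonality of the pseudocubic cell (~1 for R-like, ~1.2 for T-like)."""
    return c / a

def delta_cz(bi_z_coords, fe_z):
    """Out-of-plane offset of an Fe ion from the centre of its neighbouring
    Bi ions (the delta_cz of Fig. 4a), in the same units as the inputs."""
    centre = sum(bi_z_coords) / len(bi_z_coords)
    return fe_z - centre

# Illustrative values (assumed): a T-like cell with a = 3.70 A, c = 4.51 A,
# and an Fe column sitting 0.35 A above the centre of its Bi neighbours.
ratio = c_over_a(3.70, 4.51)                       # ~1.22
offset = delta_cz([0.0, 0.0, 4.51, 4.51], 2.605)   # Fe above the Bi centre
```

In practice these quantities are fitted from atom-column positions in the HAADF images (or relaxed DFT coordinates); the functions above only make the definitions concrete.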
In ultrathin BFO films, the c / a ratio and the off-centre displacement are both large, and decrease with increasing thickness to the values corresponding to an R-like phase ( c / a ≈ 1). In bulk rhombohedral BFO, the stereochemical activity of a lone pair of Bi electrons drives the off-centring; in a T-like phase, Fe also displaces from the centrosymmetric position, the combined effects of which result in a large c / a ratio. In ultrathin films, the surface electric field further displaces the polar surfaces and enhances the c / a ratio (see Extended Data Fig. 6 ). This, in part, is responsible for the recently reported T-like distortion and slightly greater c / a value of 1.04 in the one-unit-cell-thick BFO film grown on the SRO-buffered STO substrate 31 . By releasing the clamping imposed by the substrate in our freestanding films, the in-plane lattice constants shrink and the out-of-plane lattice constants expand further, resulting in a giant c / a value of 1.22 (Fig. 3b ). We conclude by emphasizing that freestanding perovskite oxide films down to a thickness of one unit cell can be synthesized and transferred with high crystalline quality through a bottom-up layer-by-layer grow-and-release technique. These advantages mean that this bottom-up technique can also produce metastable phases and thin layers of uncleavable three-dimensional crystal, greatly expanding the range of available 2D material systems. With this ability, we anticipate that 2D perovskite oxides could become as useful as graphene in the discovery of unconventional 2D correlated quantum phases. As a promising example, we have shown that ultrathin freestanding BFO films without epitaxial strain exhibit unexpected large tetragonality and giant polarization. Also, the extreme flexibility of freestanding perovskite films provides an unprecedented very large strain gradient, the flexoelectric/flexomagnetic effects of which are yet to be explored. 
Moreover, the ability to transfer high-crystalline-quality ultrathin perovskite films onto any desired substrate provides opportunities for discovering novel interfacial physics and applications in new types of heterostructure. For example, similar to the recent discovery of unconventional superconductivity in magic-angle-twisted bilayer graphene heterostructures 32 , 33 , more exotic correlated interfacial phases are yet to be explored in two misaligned sheets of perovskites. The ability to transfer any crystalline freestanding perovskite films onto silicon or other semiconducting wafers is likely to enable the direct incorporation of strongly correlated properties in conventional semiconductors, paving the way for a new generation of multifunctional electronic devices. Methods Epitaxial film growth and transfer The water-soluble SAO layer was grown first on a (001) STO single-crystalline substrate, followed by the growth of a thin film (STO or BFO) by oxide molecular beam epitaxy. The SAO films were grown with an oxidant (10% O 3 and 90% O 2 ) background partial pressure of oxygen \(p_{{\rm{O}}_2}\) of 1 × 10 −6 Torr and at a substrate temperature T substrate of 750 °C. The STO films were grown with \(p_{{\rm{O}}_2}\) = 1 × 10 −6 Torr and at T substrate = 650 °C. The SAO and STO films were grown layer by layer, for each of which the thickness was monitored by reflection high-energy electron diffraction (RHEED) oscillations. The BFO films were grown with an oxidant (distilled O 3 ) background pressure of 1 × 10 −5 Torr and at T substrate = 380 °C. Owing to the volatility of bismuth, BFO films were grown in adsorption-controlled mode with a fixed Bi:Fe flux ratio of 7:1, and the thickness was controlled by the deposition time of iron. The RHEED electron beam was blanked during the growth of BFO films to improve the film quality.
For plan-view STEM and TEM characterizations, the surface of the as-grown film was attached to the carbon film side of a holey carbon TEM grid, and these two parts were then immersed together in deionized water at room temperature until the sacrificial SAO layer was completely dissolved, with the freestanding film left on the carbon TEM grid. To transfer the freestanding oxide film to other substrates (such as silicon, Nb-doped STO and so on), the sample was stuck onto PDMS or silicone-coated PET and released in the same manner. After dissolving in water, the film/PDMS or film/silicone-coated PET was attached to the new substrate. Finally, the freestanding film remained on the new substrate after peeling off the PDMS or silicone-coated PET. The size of the crystalline freestanding films varies with their film thickness, from millimetres in thicker (tens of unit cells) films to micrometres in ultrathin films (that is, about 10 μm × 10 μm for 2-unit-cell-thick SrTiO 3 films, as shown in Fig. 1e ). TEM cross-sectional sample preparation High-quality Pt/Au (conductive protection layer)/BFO(STO)/Si (conductive silicon substrate) cross-sectional samples were fabricated using the focused ion beam technique with the FEI Helios 600i dual-beam system, as schematically shown in Extended Data Fig. 7 . The application of a conductive substrate and capping layer helps to reduce beam damage. The cross-sectional lamellas were thinned by a gallium ion beam at an accelerating voltage of 30 kV with a beam current of 0.79 nA, followed by gentle milling using an ion beam at an accelerating voltage of 2 kV with a beam current of several tens of picoamperes to reduce the superficial amorphous layers induced by ion implantation damage. We note that in the abovementioned manual transfer process, the freestanding films were transferred onto the Si wafer with random in-plane orientation.
There was no guaranteed alignment in any in-plane orientation between the freestanding BFO films and the Si wafers beneath, and Si has a low atomic number. As a result, the Si atomic columns were not resolved at the same time as the freestanding films were in the STEM-HAADF images. SAED, TEM and STEM experiments SAED patterns were acquired on a FEI Tecnai F20 TEM at 200 kV from a flat area of the samples suspended on holey carbon films or micro carbon grids. We used a low electron beam current (0.045 nA) and a short exposure time (2.0 s) to reduce electron beam damage. Atomic-resolution STEM-HAADF images were obtained on a double spherical aberration-corrected STEM/TEM FEI Titan G2 60-300 at 300 kV with a field emission gun or on a JEOL Grand ARM with double spherical aberration correctors. The probe convergence angle on the Titan electron microscope was 25 mrad, and the angular range of the HAADF detector was from 79.5 mrad to 200 mrad. Quantitative analysis of polarization Polarization values were extracted from the STEM-HAADF images by measuring the relative displacements of B -site Fe atoms with respect to the centre of four surrounding A -site Bi atoms following equation (1) 34 . This method has been widely adopted in the literature 35 , 36 , 37 . $$\mathbf{P}_i=\frac{e}{\Omega_i}\sum_j Z_{i,j}^{\ast}\,\delta \mathbf{u}_{i,j}$$ (1) where Ω i is the volume of the i th unit cell, \(Z_{i,j}^{\ast}\) is the Born effective charge tensor and \(\delta \mathbf{u}_{i,j}\) is the relative displacement of the j th atom. Ultrathin freestanding oxide films exhibit ripples and roughening characteristics, which result in slanted features as observed in the HAADF-STEM images. Therefore, in our measurement, only those regions where the atomic columns are sharp and round were analysed, to minimize the potential slant effect. In addition, the simulation (Extended Data Fig.
9 ) shows that the deviation of the quantitative analysis of c / a is negligible compared to the giant c / a value, owing to the reduced dimensionality. PFM measurements The local ferroelectric properties were measured on freestanding films on a conducting Si wafer using an Asylum Research Cypher scanning probe microscope. Olympus AC240TM Pt/Ti-coated Si cantilevers were used in the PFM measurements. Hysteresis loops were collected in the dual a.c. resonance tracking (DART) mode with a triangular pulse of 6.0 V in amplitude applied at the tip. PFM images were taken in the DART mode with a driving voltage (0.5 V a.c.) applied at the tip. During the domain writing, the voltage was also applied at the tip. Computational methods The atomic and electronic structure of the system was obtained using density functional theory as implemented in the Vienna Ab initio Simulation Package (VASP) 38 , 39 . The projector augmented-wave method is used to approximate the electron–ion potential 40 . The exchange and correlation potential is calculated with the generalized gradient approximation. In the calculation, we use a kinetic energy cutoff of 340 eV for the plane-wave expansion and an 8 × 8 × 1 grid of k points 41 for the Brillouin zone integration. For each slab, in-plane lattice constants and internal coordinates are relaxed until the Hellmann–Feynman force on each atom is less than 0.01 eV Å −1 . Exchange and correlation beyond the generalized gradient approximation were taken into account by introducing an onsite Coulomb repulsion with Hubbard U = 5.0 eV (ref. 42 ) for Fe 3 d orbitals in the rotationally invariant formalism 43 , as implemented in VASP. The relaxation behaviour, however, is found to be qualitatively similar even without the U parameter. The rhombohedral slabs were constructed as \(\left(11\bar{2}0\right)\) cuts of the ground-state R3c phase.
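Equation (1) above maps atomic displacements to a polarization. The sketch below makes the unit conversion explicit for a scalar (out-of-plane) case; the Born effective charge, displacement, and cell volume are illustrative assumptions, not values fitted from the images.

```python
E_CHARGE = 1.602176634e-19  # elementary charge, C

def polarization_uC_per_cm2(volume_A3, born_charges, displacements_A):
    """Equation (1): P = (e / Omega) * sum_j Z*_j * delta_u_j.

    Scalar (out-of-plane) version: volume in cubic angstroms, displacements
    in angstroms; returns polarization in microcoulombs per square centimetre.
    """
    volume_m3 = volume_A3 * 1e-30
    dipole = sum(z * (u * 1e-10) for z, u in zip(born_charges, displacements_A))
    return E_CHARGE * dipole / volume_m3 * 100.0  # 1 C/m^2 = 100 uC/cm^2

# Illustrative (assumed): one displaced cation with Z* = 4 moving 0.5 A in a
# 64 A^3 cell gives ~50 uC/cm^2, a typical ferroelectric magnitude.
p = polarization_uC_per_cm2(64.0, [4.0], [0.5])
```

The full tensor form sums the Born-charge tensor times each atom's displacement vector; the scalar version suffices to see how angstrom-scale off-centring yields polarizations of order 100 μC cm−2.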
All calculations are spin-polarized and G-type anti-ferromagnetic configuration of Fe ions is assumed. Spin orbit coupling is not included in the calculations. Data availability The data that support the plots within this paper and other findings of this study are available from the corresponding authors upon reasonable request.
A team of researchers from Nanjing University in China, the University of Nebraska and the University of California in the U.S. has found a way to produce free-standing films of perovskite oxide. In their paper published in the journal Nature, the group describes the process they developed and how well it worked when tested. Yorick Birkhölzer and Gertjan Koster from the University of Twente have published a News and Views piece on the work done by the team in the same journal issue. Birkhölzer and Koster point out that many new materials are made by going to extremes—making them really big or really small. Making them small has led to many recent discoveries, they note, including a technique to make graphene. One area of research has focused on ways to produce transition-metal oxides in a thinner format. It has been slow going, however, due to their crystalline nature. Unlike some materials, transition-metal oxides do not naturally form into layers with a top layer that can be peeled off. Instead, they form in strongly bonded 3-D structures. Because of this, some in the field have worried that it might never be possible to produce them in desired forms. But now, the researchers with this new effort have found a way to produce two transition-metal oxides (perovskite oxides strontium titanate and bismuth ferrite) in a thin-film format. The process developed by the researchers involved using molecular beam epitaxy to apply a buffer layer onto a substrate followed by a layer of perovskite. Once the sandwich of materials was made, the researchers used water to dissolve the buffer layer, allowing the perovskite to be removed and placed onto other substrates. The researchers report that their process worked so well they were able to extract films of perovskite near the theoretical limit—one square unit cell (with approximately 0.4-nanometer sides). 
“Through our successful fabrication of ultrathin perovskite oxides down to the monolayer limit, we’ve created a new class of two-dimensional materials,” says Xiaoqing Pan, professor of materials science & engineering and Henry Samueli Endowed Chair in Engineering at UCI. “Since these crystals have strongly correlated effects, we anticipate they will exhibit qualities similar to graphene that will be foundational to next-generation energy and information technologies.” Birkhölzer and Koster point out that the work done by the combined Chinese and American team demonstrated that it is possible to produce at least some transition-metal oxides in a thin film format. Their research also allayed fears that such a film would collapse, making it unusable.
10.1038/s41586-019-1255-7
Other
Dino-bird dandruff research head and shoulders above rest
Maria E. McNamara et al. Fossilized skin reveals coevolution with feathers and metabolism in feathered dinosaurs and early birds, Nature Communications (2018). DOI: 10.1038/s41467-018-04443-x Journal information: Nature Communications
http://dx.doi.org/10.1038/s41467-018-04443-x
https://phys.org/news/2018-05-dino-bird-dandruff-shoulders-rest.html
Abstract Feathers are remarkable evolutionary innovations that are associated with complex adaptations of the skin in modern birds. Fossilised feathers in non-avian dinosaurs and basal birds provide insights into feather evolution, but how associated integumentary adaptations evolved is unclear. Here we report the discovery of fossil skin, preserved with remarkable nanoscale fidelity, in three non-avian maniraptoran dinosaurs and a basal bird from the Cretaceous Jehol biota (China). The skin comprises patches of desquamating epidermal corneocytes that preserve a cytoskeletal array of helically coiled α-keratin tonofibrils. This structure confirms that basal birds and non-avian dinosaurs shed small epidermal flakes as in modern mammals and birds, but structural differences imply that these Cretaceous taxa had lower body heat production than modern birds. Feathered epidermis acquired many, but not all, anatomically modern attributes close to the base of the Maniraptora by the Middle Jurassic. Introduction The integument of vertebrates is a complex multilayered organ with essential functions in homoeostasis, resisting mechanical stress and preventing pathogenic attack 1 . Its evolution is characterised by recurrent anatomical innovation of novel tissue structures (e.g., scales, feathers and hair) that, in amniotes, are linked to major evolutionary radiations 2 . Feathers are associated with structural, biochemical and functional modifications of the skin 2 , including a lipid-rich corneous layer characterised by continuous shedding 3 . Evo-devo studies 4 and fossilised feathers 5 , 6 , 7 have illuminated aspects of early feather evolution, but how the skin of basal birds and feathered non-avian dinosaurs evolved in tandem with feathers has received little attention. 
As in mammals, the skin of birds is thinner than in most reptiles and is shed in millimetre-scale flakes (comprising shed corneocytes, i.e., terminally differentiated keratinocytes), not as large patches or a whole skin moult 2 . Desquamation of small patches of corneocytes, however, also occurs in crocodilians and chelonians and is considered primitive to synchronised cyclical skin shedding in squamates 8 . Crocodilians and birds, the groups that phylogenetically bracket non-avian dinosaurs, both possess the basal condition; parsimony suggests that this skin shedding mechanism was shared with non-avian dinosaurs. During dinosaur evolution, the increase in metabolic rate towards a true endothermic physiology (as in modern birds) was associated with profound changes in integument structure 9 relating to a subcutaneous hydraulic skeletal system, an intricate dermo-subcutaneous muscle system, and a lipid-rich corneous layer characterised by continuous shedding 3 . The pattern and timing of acquisition of these ultrastructural skin characters, however, are poorly resolved and there is no a priori reason to assume that the ultrastructure of the skin of feathered non-avian dinosaurs and early birds would have resembled that of their modern counterparts. Dinosaur skin is usually preserved as an external mould 10 and rarely as organic remains 11 , 12 or in authigenic minerals 13 , 14 , 15 . Although mineralised fossil skin can retain (sub-)cellular anatomical features 16 , 17 , dinosaur skin is rarely investigated at the ultrastructural level 14 . Critically, despite reports of preserved epidermis in a non-feathered dinosaur 10 , there is no known evidence of the epidermis 18 in basal birds or of preserved skin in feathered non-avian dinosaurs. The coevolutionary history of skin and feathers is therefore largely unknown.
Here we report the discovery of fossilised skin in the feathered non-avian maniraptoran dinosaurs Beipiaosaurus, Sinornithosaurus and Microraptor , and the bird Confuciusornis from the Early Cretaceous Jehol biota (NE China; Supplementary Fig. 1 ). The ultrastructure of the preserved tissues reveals that feathered skin had evolved many, but not all, modern attributes by the origin of the Maniraptora in the Middle Jurassic. Results and discussion Fossil soft tissue structure Small patches of tissue (0.01–0.4 mm 2 ; Fig. 1a–d and Supplementary Figs. 2 – 6 ) are closely associated with fossil feathers (i.e., usually within 500 µm of carbonaceous feather residues, Supplementary Fig. 2e, g, j, k, o, s, t ). The patches are definitively of fossil tissue, and do not reflect surface contamination with modern material during sample preparation, as they are preserved in calcium phosphate (see 'Taphonomy', below); further, several samples show margins that are overlapped, in part, by the surrounding matrix. The tissues have not, therefore, simply adhered to the sample surface as a result of contamination from airborne particles in the laboratory. Fig. 1 Phosphatised soft tissues in non-avian maniraptoran dinosaurs and a basal bird. a – h Backscatter electron images of tissue in Confuciusornis (IVPP V 13171; a , e , f ), Beipiaosaurus (IVPP V STM31-1; b , g ), Sinornithosaurus (IVPP V 12811; c , h ) and Microraptor (IVPP V 17972A; d ). a – d Small irregularly shaped patches of tissue. e Detail of tissue surface showing polygonal texture. f Focused ion beam-milled vertical section through the soft tissue showing internal fibrous layer separating two structureless layers. g , h Fractured oblique section through the soft tissues, showing the layers visible in f Full size image The tissue patches are typically 3–6 µm thick and planar (Fig. 1a–e ). 
Transverse sections and fractured surfaces show an inner fibrous layer (1.0–1.2 µm thick) between two thinner structureless layers (0.2–0.5 µm thick) (Fig. 1f–h ). The external surface of the structureless layer is smooth and can show a subtle polygonal texture defined by polygons 10–15 µm wide (Fig. 1e, h ). The fibrous layer also shows polygons (Figs. 1f, h and 2a–e , and Supplementary Fig. 6 ) that contain arrays of densely packed fibres 0.1–0.5 µm wide (Fig. 2f–i and Supplementary Fig. 5f ). Well-preserved fibres show helicoidal twisting (Fig. 2h, i ). Fibres in marginal parts of each polygon are 0.1–0.3 µm wide and oriented parallel to the tissue surface; those in the interior of each polygon are 0.3–0.5 µm wide and are usually perpendicular to the tissue surface (Fig. 2b, h and Supplementary Fig. S6d ). In the marginal 1–2 µm of each polygon, the fibres are usually orthogonal to the lateral polygon margin and terminate at, or bridge the junction between, adjacent polygons (Fig. 2f, g and Supplementary Fig. 6e ). The polygons are usually equidimensional but are locally elongated and mutually aligned, where the thick fibres in each polygon are sub-parallel to the tissue surface and the thin fibres, parallel to the polygon margin (Fig. 2j, k and Supplementary Fig. 6g–l ). Some polygons show a central depression (Fig. 2c–e and Supplementary Fig. 6a–c ) in which the thick fibres can envelop a globular structure 1–2 µm wide (Fig. 2e ). Fig. 2 Ultrastructure of the soft tissues in Confuciusornis (IVPP V 13171). a , b Backscatter electron micrographs; all other images are secondary electron micrographs. a , b Closely packed polygons. c Detail of polygons showing fibrous contents, with d interpretative drawing. e – g Polygon ( e ) with detail of regions indicated showing tonofibrils bridging ( f ) and abutting at ( g ) junction between polygons. h , i Helical coiling in tonofibrils. 
h Oblique view of polygon with central tonofibrils orientated perpendicular to the polygon surface. j , k Polygons showing stretching-like deformation Full size image Fossil corneocytes The texture of these fossil tissues differs from that of conchostracan shells and fish scales from the host sediment, the shell of modern Mytilus , modern and fossil feather rachis and modern reptile epidermis (Supplementary Fig. 7a–n ). The elongate geometry of some polygons (Fig. 2j, k and Supplementary Fig. 6g, l ) implies elastic deformation of a non-biomineralized tissue due to mechanical stress. On the basis of their size, geometry and internal structure, the polygonal structures are interpreted as corneocytes (epidermal keratinocytes). In modern amniotes, these are polyhedral-flattened cells (1–3 µm × ca. 15 µm) filled with keratin tonofibrils, lipids and matrix proteins 18 , 19 , 20 (Fig. 3a, b and Supplementary Figs 2u –x, 8 , 9 ). The outer structureless layer of the fossil material corresponds to the cell margin; it is thicker than the original biological template, i.e., the corneous cell envelope and/or cell membrane, but this is not unexpected, reflecting diagenetic overgrowth by calcium phosphate (see 'Taphonomy'). The fibres in the fossil corneocytes are identified as mineralised tonofibrils: straight, unbranching bundles of supercoiled α-keratin fibrils 0.25–1 µm wide 18 , 21 that are the main component of the corneocyte cytoskeleton 22 and are enveloped by amorphous cytoskeletal proteins 22 . In the fossils, the thin tonofibrils often abut those of the adjacent cell (Fig. 2g and Supplementary Fig. 6e ), but locally can bridge the boundary between adjacent cells (Fig. 2f ). The latter recalls desmosomes, regions of strong intercellular attachment between modern corneocytes 23 . The central globular structures within the fossil corneocytes resemble dead cell nuclei 24 , as in corneocytes of extant birds (but not extant reptiles and mammals) 24 (Supplementary Fig. 8 ). 
The position of these pycnotic nuclei is often indicated by depressions in the corneocyte surface in extant birds 24 (Fig. 3b ); some fossil cells show similar depressions (Fig. 2c and Supplementary Fig. 6a–c ). Fig. 3 Corneocytes in extant birds. a – d Scanning electron micrographs of shed skin in extant zebra finch ( Taeniopygia guttata ( n = 1); a – d ). a Corneocytes defining polygonal texture. b Central depression (arrow) marks position of pycnotic nucleus. c , d Shed skin flakes entrained within feathers Full size image Taphonomy Keratin is a relatively recalcitrant biomolecule due to its heavily cross-linked paracrystalline structure and hydrophobic nonpolar character 23 . Replication of the fossil corneocytes in calcium phosphate is thus somewhat unexpected as this process usually requires steep geochemical gradients characteristic of early decay 25 and usually applies to decay-prone tissues, such as muscle 26 and digestive tissues 27 . Recalcitrant tissues such as dermal collagen can, however, be replicated in calcium phosphate where they contain an inherent source of calcium and, in particular, phosphate ions that are liberated during decay 28 . Corneocytes contain sources of both of these ions. During terminal differentiation, intracellular concentrations of calcium increase 29 and α-keratin chains are extensively phosphorylated 23 . Further, corneocyte lipid granules 30 are rich in phosphorus and phosphate 31 . These chemical moieties would be released during degradation of the granules and would precipitate on the remaining organic substrate, i.e., the tonofibrils. In extant mammals, densely packed arrays of tonofibrils require abundant interkeratin matrix proteins for stability 32 . These proteins, however, are not evident in the fossils. 
This is not unexpected, as the proteins are rare in extant avian corneocytes 33 and, critically, occur as dispersed monomers 34 and would have a lower preservation potential than the highly cross-linked and polymerised keratin bundles of the tonofibrils. The outer structureless layer of the fossil corneocytes is thicker than the likely biological template(s), i.e., the corneous cell envelope (a layer of lipids, keratin and other proteins up to 100 nm thick that replaces the cell membrane during terminal differentiation 34 ) and/or cell membrane. This may reflect a local microenvironment conducive to precipitation of calcium phosphate: during terminal differentiation, granules of keratohyalin, an extensively phosphorylated protein 35 with a high affinity for calcium ions 36 , accumulate at the periphery of the developing corneocytes 37 . The thickness of the outer solid layer of calcium phosphate in the fossils, plus the gradual transition from this to the inner fibrous layer, suggests that precipitation of phosphate proceeded from the margins towards the interior of the corneocytes. In this scenario, phosphate availability in the marginal zones of the cells would have exceeded that required to replicate the tonofibrils. The additional phosphate would have precipitated as calcium phosphate in the interstitial spaces between the tonofibrils, progressing inwards from the inner face of the cell margin. Skin shedding in feathered dinosaurs and early birds In extant amniotes, the epidermal cornified layer is typically 5–20 cells thick (but thickness varies among species and location on the body 38 ). The patches of fossil corneocytes, however, are one cell thick (Fig. 1f and Supplementary Figs. 5c , 10 ). This, plus the consistent small size (<400 μm) of the patches and the remarkably high fidelity of preservation, is inconsistent with selective preservation of a continuous sheet of in situ tissue. 
In a minority ( n = 8) of examples, the skin occurs at the edge of the sample of fossil soft tissues and thus could potentially represent a smaller fragment of an originally larger piece of fossil skin (with the remainder of the piece on the fossil slab). In most examples, however, the entire outline of the skin fragment is contained within the margin of a sample. Examination of the margins of various samples at high magnification reveals that the sample and surrounding sediment are often in exactly the same plane (e.g., Supplementary Fig. 10 ). Even where the margin of the sample of skin is covered by sediment, the sample is unlikely to have been much bigger than the apparent size as the fossil skin, being almost perfectly planar, forms a natural plane of splitting. There is no evidence that the preserved thickness of the skin is an artefact of preparation or erosion. During splitting of a rock slab, the plane of splitting frequently passes through the soft tissues in an uneven manner, exposing structures at different depths. In the fossils studied here, the plane of splitting usually passes through the corneocytes (exposing their internal structure), and rarely along the outer face of the corneocyte layer. There is no evidence for removal of more than one layer of corneocytes: FIB sections show preservation of only one layer and several SEM images show complete vertical sections through the preserved skin (where the relationship with the over- and underlying sediment is visible), with evidence for only a single layer of corneocytes. The fibrous internal fill of the fossil corneocytes is exposed where the plane of splitting of the fossil slab passes through the patches of tissue. 
The topography of the fossil corneocytes, however, varies with the position of the plane of splitting, which can vary locally through the soft tissues on a millimetre scale: the corneocytes can present with raised margins and a central depression, or with depressed margins and a central elevated zone (Fig. S9 ). The size, irregular geometry and thickness of the patches of corneocytes resemble shed flakes of the cornified layer (dandruff-like particles 39 ; Fig. 3 ). In extant birds, corneocytes are shed individually or in patches up to 0.5 mm 2 that can be entrained within feathers (Fig. 3c, d and Supplementary Fig. 2u, v ). The fossils described herein provide the first evidence for the skin shedding process in basal birds and non-avian maniraptoran dinosaurs and confirm that at least some non-avian dinosaurs shed their skin in small patches 40 . This shedding style is identical to that of modern birds 18 (Fig. 3c, d ) and mammals 20 and implies continuous somatic growth. This contrasts with many extant reptiles, e.g., lepidosaurs, which shed their skin whole or in large sections 21 , but shedding style can be influenced by factors such as diet and environment 41 . Evolutionary implications of fossil corneocyte structure The fossil corneocytes exhibit key adaptations found in their counterparts in extant birds and mammals, especially their flattened polygonal geometry and fibrous cell contents consistent with α-keratin tonofibrils 16 . Further, the fossil tonofibrils (as in extant examples 22 ) show robust intercellular connections and form a continuous scaffold across the corneocyte sheet (Fig. 2b, c, j and Supplementary Fig. 6 ). In contrast, corneocytes in extant reptiles contain a homogenous mass of β-keratin (with additional proteins present in the cell envelope) and fuse during development, forming mature β-layers without distinct cell boundaries 42 . 
The retention of pycnotic nuclei in the fossil corneocytes is a distinctly avian feature not seen in modern reptiles (but see ref. 20 ). Epidermal morphogenesis and differentiation are considered to have diverged in therapsids and sauropsids 31 . Our data support other evidence that shared epidermal features in birds and mammals indicate convergent evolution 43 and suggest that lipid-rich corneocyte contents may be evolutionarily derived characters in birds and feathered non-avian maniraptorans. Evo-devo studies have suggested that the avian epidermis could have arisen from the expansion of hinge regions in ‘protofeather’-bearing scaly skin 20 . While fossil evidence for this transition is lacking, our data show that the epidermis of basal birds and non-avian maniraptoran dinosaurs had already evolved a decidedly modern character, even in taxa not capable of powered flight. This does not exclude the possibility that at least some of the epidermal features described here originated in more basal theropods, especially where preserved skin lacks evidence of scales (as in Sciurumimus 44 ). Refined genomic mechanisms for modulating the complex expression of keratin in the epidermis 45 , terminal differentiation of keratinocytes and the partitioning of α- and β-keratin synthesis in the skin of feathered animals 32 were probably modified in tandem with feather evolution close to the base of the Maniraptora by the late Middle Jurassic (Fig. 4 ). Existing fossil data suggest that this occurred after evolution of the beak in Maniraptoriformes and before evolution of the forelimb patagia and pterylae (Fig. 4 ); the first fossil occurrences of all of these features span ca. 10–15 Ma, suggesting a burst of innovation in the evolution of feathered integument close to and across the Lower-Middle Jurassic boundary. The earliest evidence for dermal musculature associated with feathers is ca. 30 Ma younger, in a 125 Ma ornithothoracean bird 17 . 
Given the essential role played by this dermal network in feather support and control of feather orientation 18 , its absence in feathered non-avian maniraptorans may reflect a taphonomic bias. Fig. 4 Schematic phylogeny, scaled to geological time, of selected coelurosaurs showing the pattern of acquisition of key modifications of the skin. The phylogeny is the most likely of the maximum likelihood models, based on minimum-branch lengths (mbl) and transitions occurring as all-rates-different (ARD). Claws and footpads are considered primitive in coelurosaurs. Available data indicate that modified keratinocytes, and continuous shedding, originated close to the base of the Maniraptora; this is predicted to shift based on future fossil discoveries towards the base of the Coelurosauria to include other feathered taxa Full size image In certain aspects, the fossil corneocytes are distinctly non-avian and indicate that feathered dinosaurs and early birds had a unique integumentary anatomy and physiology transitional between that of modern birds and non-feathered dinosaurs. In extant birds, corneocyte tonofibrils are dispersed loosely among intracellular lipids 19 ; this facilitates evaporative cooling in response to heat production during flight and insulation by plumage 46 . In contrast, the fossil tonofibrils are densely packed and fill the cell interior. There is no evidence for post-mortem shrinkage of the fossil corneocytes: the size range is consistent with those in modern birds, and there is no evidence for diagenetic wrinkling, contortion or separation of individual cells. This strongly suggests that the preserved density of tonofilaments in the fossil corneocytes reflects originally higher densities than in extant birds. This is not a function of body size: extant birds of disparate size (e.g., zebra finch and ostrich) exhibit loosely dispersed tonofibrils 47 . 
The fossil birds are thus likely to have had a lower physiological requirement for evaporative cooling and, in turn, lower body heat production related to flight activity 46 than in modern birds. This is consistent with other evidence for low basal metabolic rates in non-avian maniraptoran dinosaurs 47 , 48 and basal birds 47 and with hypotheses that the feathers of Microraptor 49 and, potentially, Confuciusornis 48 (but see ref. 50 ) were not adapted for powered flight, at least for extended periods 50 . Methods Fossil material This study used the following specimens in the collections of the Institute of Vertebrate Paleontology and Paleoanthropology, Beijing, China: Confuciusornis (IVPP V 13171), Beipiaosaurus (IVPP V STM31-1), Sinornithosaurus (IVPP V 12811) and Microraptor (IVPP V 17972A). Small (2–10 mm 2 ) chips of soft tissue were removed from densely feathered regions of the body during initial preparation of the specimens and stored for later analysis. Precise sampling locations are not known. Modern bird tissues Male specimens of the zebra finch Taeniopygia guttata ( n = 1) and the Java sparrow Lonchura oryzivora ( n = 2) were euthanased via cervical dislocation. Individual feathers dissected from T. guttata and moulted down feathers from a male specimen of the American Pekin duck ( Anas platyrhynchos domestica ) were not treated further. Small (ca. 10–15 mm 2 ) pieces of skin and underlying muscle tissue were dissected from the pterylae of the breast of reproductively active male specimens of L. oryzivora raised predominantly on a diet of seeds in October 2016. Tissue samples were fixed for 6 h at 4 °C in 4% paraformaldehyde. After snap freezing in isopentane, tissue was coronal sectioned (10 µm thickness) with a Leica CM1900 cryostat. All sections were allowed to air dry at room temperature for 3 h and stored at −80 °C prior to immunohistology. Ethics The authors have complied with all relevant ethical regulations. Euthanasia of T. guttata and L. 
oryzivora was approved by the Health Products Regulatory Authority of Ireland via authorisation AE19130-IO87 (for T. guttata ) and CRN 7023925 (for L. oryzivora ). Electron microscopy Samples of soft tissues were removed from fossil specimens with sterile tools, placed on carbon tape on aluminium stubs, sputter coated with C or Au and examined using a Hitachi S3500-N and a FEI Quanta 650 FEG SEM at accelerating voltages of 5–20 kV. Untreated feathers and fixed and dehydrated samples of skin from modern birds were placed on carbon tape on aluminium stubs, sputter coated with C or Au and examined using a Hitachi S3500-N and a FEI Quanta 650 FEG SEM at accelerating voltages of 5–20 kV. Selected histological sections of L. oryzivora were deparaffinized in xylene vapour for 3 × 5 min, sputter coated with Au, and examined using a FEI Quanta 650 FEG SEM at an accelerating voltage of 15 kV. The brightness and contrast of some digital images were adjusted using Deneba Canvas software. Focussed ion beam-scanning electron microscopy Selected samples of fossil tissue were analysed using an FEI Quanta 200 3D FIB-SEM. Regions of interest were coated with Pt using an in situ gas injection system and then milled using Ga ions at an accelerating voltage of 30 kV and a beam current of 20 nA–500 pA. Immunohistology Histological sections were incubated in permeabilization solution (0.2% Triton X-100 in 10 mM phosphate-buffered saline (PBS)) for 30 min at room temperature, washed once in 10 mM PBS and blocked in 5% normal goat serum in 10 mM PBS for 1 h at room temperature. Sections were incubated in primary antibody to cytokeratin (1:300; ThermoFisher) in 2% normal goat serum in 10 mM PBS overnight at 4 °C. Following a 3 × 5 min wash in 10 mM PBS, sections were incubated with a green fluorophore-labelled secondary antibody (1:500; Invitrogen) for 2 h at room temperature. 
After a 3 × 10 min wash in 10 mM PBS, sections were incubated in BisBenzimide nuclear counterstain (1:3000; Sigma-Aldrich) for 4 min at room temperature. Sections were washed briefly, mounted and coverslipped with PVA-DABCO. Confocal microscopy Digital images were obtained using an Olympus AX70 Provis upright fluorescence microscope and a ×100 objective and stacked using Helicon Focus software. Data availability The data that support the findings of this study can be downloaded from the CORA repository.
Palaeontologists from University College Cork (UCC) in Ireland have discovered 125-million-year-old dandruff preserved amongst the plumage of feathered dinosaurs and early birds, revealing the first evidence of how dinosaurs shed their skin. UCC's Dr Maria McNamara and her team studied the fossil cells, and dandruff from modern birds, with powerful electron microscopes for the study, published today in the journal Nature Communications. "The fossil cells are preserved with incredible detail – right down to the level of nanoscale keratin fibrils. What's remarkable is that the fossil dandruff is almost identical to that in modern birds – even the spiral twisting of individual fibres is still visible," said Dr Maria McNamara. Just like human dandruff, the fossil dandruff is made of tough cells called corneocytes, which in life are dry and full of the protein keratin. The study suggests that this modern skin feature evolved sometime in the late Middle Jurassic, around the same time as a host of other skin features evolved. "There was a burst of evolution of feathered dinosaurs and birds at this time, and it's exciting to see evidence that the skin of early birds and dinosaurs was evolving rapidly in response to bearing feathers," Dr McNamara added. Dr McNamara led the study, in collaboration with her postdoctoral researcher Dr Chris Rogers; Dr Andre Toulouse and Tara Foley, also from UCC; Dr Paddy Orr from UCD, Ireland; and an international team of palaeontologists from the UK and China. The dandruff is the first evidence of how dinosaurs shed their skin. The feathered dinosaurs studied (Microraptor, Beipiaosaurus and Sinornithosaurus) clearly shed their skin in flakes, like the early bird Confuciusornis studied by the team and also modern birds and mammals, and not as a single piece or several large pieces, as in many modern reptiles. 
Co-author Professor Mike Benton, from the University of Bristol's School of Earth Sciences, said: "It's unusual to be able to study the skin of a dinosaur, and the fact this is dandruff proves the dinosaur was not shedding its whole skin like a modern lizard or snake but losing skin fragments from between its feathers." Modern birds have very fatty corneocytes with loosely packed keratin, which allows them to cool down quickly when they are flying for extended periods. The corneocytes in the fossil dinosaurs and birds, however, were packed with keratin, suggesting that the fossil animals didn't get as warm as modern birds, presumably because they couldn't fly at all, or could only fly for short periods.
10.1038/s41467-018-04443-x
Biology
Oldest DNA reveals life in Greenland two million years ago
Kurt H. Kjær et al, A 2-million-year-old ecosystem in Greenland uncovered by environmental DNA, Nature (2022). DOI: 10.1038/s41586-022-05453-y Journal information: Nature
https://dx.doi.org/10.1038/s41586-022-05453-y
https://phys.org/news/2022-12-oldest-dna-reveals-life-greenland.html
Abstract Late Pliocene and Early Pleistocene epochs 3.6 to 0.8 million years ago 1 had climates resembling those forecasted under future warming 2 . Palaeoclimatic records show strong polar amplification with mean annual temperatures of 11–19 °C above contemporary values 3 , 4 . The biological communities inhabiting the Arctic during this time remain poorly known because fossils are rare 5 . Here we report an ancient environmental DNA 6 (eDNA) record describing the rich plant and animal assemblages of the Kap København Formation in North Greenland, dated to around two million years ago. The record shows an open boreal forest ecosystem with mixed vegetation of poplar, birch and thuja trees, as well as a variety of Arctic and boreal shrubs and herbs, many of which had not previously been detected at the site from macrofossil and pollen records. The DNA record confirms the presence of hare and mitochondrial DNA from animals including mastodons, reindeer, rodents and geese, all ancestral to their present-day and late Pleistocene relatives. The presence of marine species including horseshoe crab and green algae support a warmer climate than today. The reconstructed ecosystem has no modern analogue. The survival of such ancient eDNA probably relates to its binding to mineral surfaces. Our findings open new areas of genetic research, demonstrating that it is possible to track the ecology and evolution of biological communities from two million years ago using ancient eDNA. Main The Kap København Formation is located in Peary Land, North Greenland (82° 24′ N 22° 12′ W) in what is now a polar desert. The upper depositional sequence contains well-preserved terrestrial animal and plant remains washed into an estuary during a warmer Early Pleistocene interglacial cycle 7 (Fig. 1 ). 
Nearly 40 years of palaeoenvironmental and climate research at the site provide a unique perspective into a period when the site was situated at the boreal Arctic ecotone with reconstructed summer and winter average minimum temperatures of 10 °C and −17 °C respectively—more than 10 °C warmer than the present 7 , 8 , 9 , 10 , 11 . These conditions must have driven substantial ablation of the Greenland Ice Sheet, possibly producing one of the last ice-free intervals 7 in the last 2.4 million years (Myr). Although the Kap København Formation is known to yield well-preserved macrofossils from a coniferous boreal forest and a rich insect fauna, few traces of vertebrates have been found. To date, these comprise remains from lagomorph genera, their coprolites and Aphodius beetles, which live in and on mammalian dung 10 , 11 . However, the approximately 3.4 Myr old Fyles Leaf bed and Beaver Pond on Ellesmere Island in Arctic Canada preserve fossils of mammals that potentially could have colonized Greenland, such as the extinct bear ( Protarctos abstrusus ), extinct beavers ( Dipoides sp.), the small canine Eucyon and Arctic giant camelines 4 , 12 , 13 (similar to Paracamelus ). Whether the Nares Strait was a sufficient barrier to isolate northern Greenland from colonization by this fauna remains an open question. Fig. 1: Geographical location and depositional sequence. a . Location of Kap København Formation in North Greenland at the entrance to the Independence Fjord (82° 24′ N 22° 12′ W) and locations of other Arctic Plio-Pleistocene fossil-bearing sites (red dots). b , Spatial distribution of the erosional remnants of the 100-m thick succession of shallow marine near-shore sediments between Mudderbugt and the low mountains towards the north (a + b refers to location 74a and 74b). c , Glacial–interglacial division of the depositional succession of clay Member A and units B1, B2 and B3 constituting sandy Member B. 
Sampling intervals for all sites are projected onto the sedimentary succession of locality 50. Sedimentological log modified after ref. 7 . Circled numbers on the map mark sample sites for environmental DNA analyses, absolute burial dating and palaeomagnetism. Numbered sites refer to previous publications 7 , 10 , 11 , 14 , 61 . Full size image The Kap København Formation is formally subdivided into two members 7 (Fig. 1 ). The lower Member A consists of up to 50 m of laminated mud with an Arctic ostracod, foraminifera and mollusc fauna deposited in an offshore glaciomarine environment 14 . The overlying Member B consists of 40–50 m of sandy (units B1 and B3) and silty (unit B2) deposits (Extended Data Fig. 1 ), including thin organic-rich beds with an interglacial macrofossil fauna that were deposited closer to the shore in a shallow marine or estuarine environment represented by upper and lower shoreface sedimentary facies 7 . The specific depositional environments are also reflected in the mineralogy of the units, where the proximal B3 locality has the lowest clay and highest quartz contents (Sample compositions in Supplementary Tables 4.2.1 and 4.2.2 and unit averages in Supplementary Tables 4.2.3 and 4.2.4 ). The architecture of the basin infill suggests that Member B units thicken towards the present coast—that is, distal to the sediment source in the low mountains in the north (Fig. 1 ). Abundant organic detritus horizons are recorded in units B1 and B3, which also contain beds rich in Arctic and boreal plant and invertebrate macrofossils, as well as terrestrial mosses 10 , 15 . Therefore, the taphonomy of the DNA most probably reflects the biological communities eroded from a range of habitats, fluvially transported to the foreshore and concentrated as organic detritus mixed into sandy near-shore sediments within units B1 and B3. Conversely, the deeper water facies from Member A and unit B2 have a stronger marine signal. 
This scenario is supported by the similarities in the mineralogic composition between Kap København Formation sediments and Kim Fjelde sediments (Supplementary Tables 4.2.1 and 4.2.5 ). Geological age A series of complementary studies has successively narrowed the depositional age bracket of the Kap København Formation from 4.0–0.7 Myr to a 20,000-year-long age bracket around 2.4 Myr (see Supplementary Information, sections 1 – 3 ). This was achieved by a combination of palaeomagnetism, biostratigraphy and allostratigraphy 7 , 14 , 16 , 17 , 18 . Notably, the last appearance data of the mammals, foraminifera and molluscs in the stratigraphic record show an age close to 2.4 Myr (see Supplementary Information, section 2 ). Within this overall framework, we add new palaeomagnetic data showing that Member A has reversed magnetic polarity and the main part of the overlying unit B2 has normal magnetic polarity. In the context of previous work, this is consistent with three magnetostratigraphic intervals in the Early Pleistocene where there is a reversal: 1.93 Myr (scenario 1), 2.14 Myr (scenario 2) or 2.58 Myr (scenario 3) (Supplementary Information, section 1 ). Furthermore, we constrain the age using cosmogenic 26 Al: 10 Be burial dating of Member B at four sites in this study (Supplementary Information, section 3 ). The recommended maximum burial age for the Kap København Formation is 2.70 ± 0.46 Myr (Fig. 2 ; Methods). However, we discard the older scenario 3 as it contradicts the evidence for a continuous sedimentation across Members A and B during a single glacial–interglacial depositional cycle 7 , 14 , 16 , 18 , 19 . This leaves two possible scenarios (scenarios 1 and 2), in which scenario 1 supports an age of 1.9 Myr and scenario 2 supports an age of 2.1 Myr. Fig. 2: Age proxies for the Kap København Formation. 
a , Revised palaeomagnetic analysis shows unit B2 to have normal polarity and unlocks three possible age scenarios (S1–S3) including Members A (blue) and B (brown). Normal polarity is coloured black and reverse polarity is shown in white. Ja, Jaramillo; Co, Cobb Mountain; Ol, Olduvai; Fe, Feni; Ka, Kaena; Ma, Mammoth. b , Presence and last appearance datum (LAD) for marine foraminifera Cibicides grossus , rabbit-genus Hypolagus and the mollusc Arctica islandica in the High Arctic, Northern Hemisphere and North Greenland, respectively. The blue band on the far right indicates the age range for Member A estimated from amino acid ratios on shells 7 . c , Convolved probability distribution functions for cosmogenic burial ages calculated for two different production ratios (7.42 (black) and 6.75 (blue)). The dashed line and the solid line show the distributions for steady erosion and zero erosion, respectively. These distributions are all maximum ages. d , Molecular dating of Betula sp. yielding a median age of the DNA in the sediment of 1.323 Myr, with whiskers confining the 95% highest posterior density (HPD) of 0.68 to 2.02 Myr (blue density plot), running Markov chain Monte Carlo estimation for 100 million iterations. The red dot is the median molecular age estimate found using the Mastodon mitochondrial genome, restricted to radiocarbon-dated specimens, whereas the green area includes molecular clock estimated specimens in BEAST, running Markov chain Monte Carlo estimation for 400 million iterations. Whiskers confine the 95% HPD. Full size image DNA preservation DNA degrades with time owing to microbial enzymatic activity, mechanical shearing and spontaneous chemical reactions such as hydrolysis and oxidation 20 . The oldest known DNA obtained to date has been recovered from a permafrost-preserved mammoth molar dated to 1.2–1.1 Myr using geological methods and 1.7 Myr (95% highest posterior density, 2.1–1.3 Myr) using molecular clock dating 21 . 
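The cosmogenic 26Al:10Be burial dating described above exploits the fact that, once sediment is shielded from cosmic rays, 26Al decays faster than 10Be, so the measured nuclide ratio falls predictably below the surface production ratio (Fig. 2c quotes production ratios of 7.42 and 6.75). A minimal sketch of the zero-erosion age calculation, assuming commonly used half-lives of 0.705 Myr (26Al) and 1.387 Myr (10Be); the function and half-life values here are illustrative, not the authors' actual computation:

```python
import math

# Assumed standard half-lives in Myr (not stated in the paper text itself)
T_HALF_AL26 = 0.705   # 26Al
T_HALF_BE10 = 1.387   # 10Be

LAMBDA_AL26 = math.log(2) / T_HALF_AL26
LAMBDA_BE10 = math.log(2) / T_HALF_BE10

def burial_age(measured_ratio, production_ratio=6.75):
    """Zero-erosion burial age in Myr from a measured 26Al/10Be ratio.

    After burial the ratio decays as R(t) = R0 * exp(-(l26 - l10) * t),
    hence t = ln(R0 / R) / (l26 - l10).
    """
    return math.log(production_ratio / measured_ratio) / (LAMBDA_AL26 - LAMBDA_BE10)

# A never-buried sample (measured ratio equals the production ratio) dates to zero:
assert abs(burial_age(6.75, 6.75)) < 1e-12

# A sample whose ratio has halved has been buried for one effective
# half-life, ln(2) / (l26 - l10), about 1.43 Myr with these half-lives:
print(round(burial_age(6.75 / 2, 6.75), 2))  # -> 1.43
```

With these half-lives the ratio halves roughly every 1.43 Myr of burial, which is why the method resolves ages on the scale of the 2.70 ± 0.46 Myr maximum age reported above.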
To explore the likelihood of recovering DNA from sediments of the Kap København Formation, we calculated the thermal age of the DNA and its expected degree of depurination. Using the mean annual temperature 22 (MAT) of −17 °C, we found a thermal age of 2.7 thousand years at a constant 10 °C, roughly 741 times less than the chronological age of 2.0 Myr (Supplementary Information, section 4 and Supplementary Table 4.4.1 ). Using the rate of depurination estimated from moa bird fossils 23 , we found it plausible that DNA with an average length of 50 base pairs (bp) could survive at the Kap København Formation, assuming that the site remained frozen (Supplementary Information, section 4 and Supplementary Table 4.4.2 ). The mechanisms that preserve DNA in sediments are likely to differ from those in bone. Adsorption onto mineral surfaces modifies the DNA conformation, probably impeding molecular recognition by enzymes, which effectively hinders enzymatic degradation 24 , 25 , 26 , 27 . To investigate whether the minerals found in the Kap København Formation could have adsorbed DNA during deposition and preserved it, we determined the mineralogic composition of the sediments using X-ray diffraction and measured their adsorption capacities. Our findings highlight that the marine depositional environment favours adsorption of extracellular DNA onto mineral surfaces (Supplementary Information, section 4 and Supplementary Table 4.3.1.1 ). Specifically, the clay minerals (5.5–9.6 wt%), and particularly smectite (1.2–3.7 wt%), have a higher adsorption capacity than the non-clay minerals (59–75 wt%). At a DNA concentration representative of natural environments 28 (4.9 ng ml −1 DNA), the DNA adsorption capacity of smectite is 200 times greater than that of quartz.
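The fragment-length argument above can be sketched numerically. This is a minimal illustration rather than the calculation used in the study: it assumes simple first-order depurination, an illustrative reference per-site rate of the order reported for bone, and Arrhenius rescaling to the site MAT using the activation energy of 127 kJ mol−1 cited in the thermal-age methods.

```python
import math

def arrhenius_scale(rate_ref, ea_j_mol, t_ref_k, t_site_k):
    """Rescale a reference depurination rate to the site temperature
    using the Arrhenius equation."""
    R = 8.314  # gas constant, J mol^-1 K^-1
    return rate_ref * math.exp(-ea_j_mol / R * (1.0 / t_site_k - 1.0 / t_ref_k))

def mean_fragment_length(rate_per_site_per_year, age_years):
    """Expected mean surviving fragment length (bp) when each site breaks
    independently: roughly one break per k*t sites, i.e. ~1/(k*t) bp."""
    return 1.0 / (rate_per_site_per_year * age_years)

# Illustrative reference rate (order of magnitude only, not a measured value),
# quoted at ~13 C and rescaled to a -17 C permafrost site.
k_ref = 5.5e-6  # strand breaks per site per year at 286.15 K (assumption)
k_cold = arrhenius_scale(k_ref, 127_000, 286.15, 256.15)
print(f"mean fragment length after 2 Myr: {mean_fragment_length(k_cold, 2.0e6):.0f} bp")
```

With these assumed inputs the sketch lands in the tens of base pairs, the same order as the ~50 bp survival estimate quoted above.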
We applied a sedimentary eDNA extraction protocol 29 on our mineral-adsorbed DNA samples, and retrieved only 5% of the adsorbed DNA from smectite and around 10% from the other clay minerals (Methods and Supplementary Information, section 4 ). By contrast, we retrieved around 40% of the DNA adsorbed to quartz. The difference in adsorption capacity and extraction yield between the different minerals demonstrates that mineral composition may have an important role in ancient eDNA preservation and retrieval. Kap København metagenomes We extracted DNA 29 from 41 organic-rich sediment samples at five different sites within the Kap København Formation (Supplementary Information, section 6 and Source Data 1), which were converted into 65 dual-indexed Illumina sequencing libraries 30 . First, we tested 34 of the 65 libraries for plant plastid DNA by screening for the conserved photosystem II D2 ( psbD ) gene using droplet digital PCR (ddPCR) with a gene-targeting primer and probe spanning a 39-bp region and a P7 index primer. Further, we screened for the psbA gene using a similar assay targeting the Poaceae (Methods and Supplementary Fig. 6.12.1 ). A clear signal in 31 out of 34 samples tested confirmed the presence of plant plastid DNA in these libraries (Source Data 1, sheets 5 and 6). Additionally, we subjected 34 of the 65 libraries to mammalian mtDNA capture enrichment using the Arctic PaleoChip 1.0 31 and shotgun sequenced all libraries (initial and captured) using the Illumina HiSeq 4000 and NovaSeq 6000. A total of 16,882,114,068 reads were sequenced, which, after adaptor trimming, filtering (length ≥30 bp, minimum Phred quality 30) and duplicate removal, resulted in 2,873,998,429 reads.
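The read-level filtering can be sketched as follows. This is a toy sketch, not the pipeline used in the study: interpreting the minimum Phred quality of 30 as a mean per-read quality is an assumption, and `qc_filter` is a hypothetical helper name.

```python
def qc_filter(reads, min_len=30, min_qual=30):
    """Keep reads meeting the length and quality thresholds described in
    the text (>=30 bp; Phred >=30, here taken as a mean per-read quality,
    which is an assumption), then collapse exact sequence duplicates."""
    passed = [(seq, qual) for seq, qual in reads
              if len(seq) >= min_len and sum(qual) / len(qual) >= min_qual]
    seen, unique = set(), []
    for seq, qual in passed:
        if seq not in seen:          # drop exact duplicates
            seen.add(seq)
            unique.append((seq, qual))
    return unique

# Toy example: one read survives; a duplicate, a too-short read and a
# low-quality read are all removed.
reads = [
    ("ACGT" * 10, [35] * 40),
    ("ACGT" * 10, [35] * 40),   # exact duplicate
    ("ACGTACGT",  [35] * 8),    # below the 30 bp length threshold
    ("TTGA" * 10, [20] * 40),   # below the quality threshold
]
kept = qc_filter(reads)
```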
These were analysed with k-mer comparisons using simka 32 (Supplementary Information, section 6 ) and then parsed for taxonomic classification using competitive mapping with HOLI ( ), which includes a recently published dataset of more than 1,500 genome skims of Arctic and boreal plant taxa 33 , 34 (Methods and Supplementary Information, section 6 ). Considering the age of the samples and thus the potential genetic distance to recent reference genomes, we allowed each read a similarity of 95–100% for it to be taxonomically classified using ngsLCA 35 . The metaDMG (v.0.14.0) program 36 was subsequently used to quantify and filter each taxonomic node for postmortem DNA damage for all the metagenomic samples (Methods). This method estimates the average amount of damage at the terminal read positions (D-max) and a likelihood ratio (λ-LR) that quantifies how much better the damage model (that is, more damage at the beginning of the read) fits the data compared with a null model (that is, a constant amount of damage; see Supplementary Information, section 6 ). We found the DNA damage to be high, especially for eukaryotes (mean D-max = 40.7%, see Supplementary Information, section 6). From this we set D-max ≥25% as a filtering threshold for a taxonomic node to be parsed for further downstream analysis, together with a λ-LR greater than or equal to 1.5. We furthermore required that the number of reads per taxon exceeded half the median of reads assigned across all taxa, to filter out taxa in low abundance. Similarly, for a sample to be considered, its total number of reads had to exceed half the median number of reads per sample, to filter out the samples with the fewest reads. Lastly, we filtered out taxa detected in fewer than three replicates, and reads were subsequently normalized by conversion to proportions (Figs. 3 and 4a ). Fig. 3: Early Pleistocene plants of northern Greenland.
Taxonomic profiles of the plant assemblage found in the metagenomes. Taxa in bold are genera only found as DNA and not as macrofossils or pollen. Asterisks indicate those that are found at other Pliocene Arctic sites. Extinct species as identified by either macrofossils or phylogenetic placements are marked with a dagger. Reads classified as Pyrus and Malus are marked with a pound symbol; these are probably over-classified DNA sequences belonging to other species within Rosaceae that are not present as reference genomes. Full size image Fig. 4: Early Pleistocene animals of northern Greenland. a , Taxonomic profiles of the animal assemblage from units B1, B2 and B3. Taxa in bold are genera only found as DNA. b , Phylogenetic placement and pathPhynder 62 results of mitochondrial reads uniquely classified to Elephantidae or lower (Source Data 1). Extinct species as identified by either macrofossils or phylogenetic placements are marked with a dagger. Full size image DNA, pollen and macrofossils comparison Greenland’s coasts extend from around 60° to 83° N and include bioclimatic zones from the subarctic to the northern polar desert 37 , 38 . There are 175 vascular plant genera native to Greenland, excluding historically introduced species 39 , 40 , 41 . Of these, 70 (40%) were detected by the metagenomic analysis (Fig. 3 ); the majority of these genera are today confined to bioclimatic zones well to the south of Kap København’s polar desert (see ref. 42 and references therein), for example, all aquatic macrophytes. Reads assigned to Salix , Dryas , Vaccinium , Betula , Carex and Equisetum dominate the assemblage, and of these genera, Equisetum , Dryas , Salix arctica and two species of Carex ( Carex nardina and Carex stans ) grow there currently, whereas only a few records of Vaccinium uliginosum are found above 80° N, and Betula nana is found above 74° N (ref. 43 ).
Out of the 102 genera detected in the Kap København ancient eDNA assemblage, 39% no longer grow in Greenland but do occur in the North American boreal (for example, Picea and Populus ) and northern deciduous and maritime forests (for example, Crataegus , Taxus , Thuja and Filipendula ). Many of the plant genera in this diverse assemblage do not occur on permafrost substrates and require higher temperatures than those at any latitude on Greenland today. In addition to the DNA, we counted pollen in six samples from locality 119, unit B3 (Methods and Supplementary Fig. 5.1.1 ). Percentages were calculated for 4 of the samples, with pollen sums ranging from 71 to 225 terrestrial grains (mean = 170.25). Upland herbs, including taxa in the Cyperaceae, Ericales and Rosaceae, comprised around 40% of sample 4. Samples 5 and 6 were dominated by arboreal taxa, particularly Betula . The Polypodiopsida (for example, Equisetum , Asplenium and Athyrium filix-mas ) and Lycopodiopsida ( Lycopodium annotinum and Selaginella rupestris ) were also well represented and comprised over 30% of the assemblage in samples 1, 4 and 6. A total of 39 plant genera out of the 102 identified by DNA also occurred as macrofossils or pollen at the genus level. A further 39 taxa were potentially identified as macrofossils or pollen, but not to the same taxonomic level 10 , 15 (Source Data 1, sheets 1 and 2). For example, 12 genera of Poaceae were identified by DNA ( Alopecurus , Anthoxanthum , Arctagrostis , Arctophila , Calamagrostis , Cinna , Dupontia , Hordelymus , Leymus , Milium , Phippsia and Poa ); of these, only Hordelymus is not found in the Arctic today ( ). In the pollen analysis, however, grasses were only distinguished to family level, and only one Poaceae macrofossil was found. There were 24 taxa that were recorded only as DNA. These included the boreal tree Populus and a few shrubs and dwarf shrubs, but mainly herbaceous plants.
Of the 73 plant genera recovered as macrofossils 10 , 15 , only 24 were not detected in the DNA analysis. Because macrofossils and DNA have similar taphonomies—as both are deposited locally—more overlap is expected between them than between DNA and pollen, which is typically dispersed regionally 44 . Nine of the taxa absent in DNA were bryophytes, probably owing to poor representation of this group within the genomic reference databases. Furthermore, the extinct taxon Araceae is not present in the reference databases. The remaining undetected genera were vascular plants, and all except two ( Oxyria and Cornus ) were rare in the macrofossil record. Because the detection of rare taxa is challenging in both macrofossil and DNA records 45 , we argue that this overlap between the DNA and macrofossil records is as high as can be expected on the basis of the limitations of both methods. An additional 19 taxa were recorded in the pollen record presented here and in that of Bennike 46 including four trees or shrubs, five ferns, three club mosses, and one each of algae, fungi and liverwort. We also find pollen from anemophilous trees, particularly gymnosperms, which can be distributed far north of the region where the plants actually grow 10 . Bennike 46 also notes a high proportion of club mosses and ferns and suggests they may be overrepresented owing to their spore wall being resistant to degradation. Furthermore, if these taxa were preferentially distributed along streams flowing into the estuary, their spores could be relatively more concentrated in the alluvium than the pollen of more generally distributed taxa. Thus, both decay resistance and alluvial deposition could contribute to the relative frequencies we observe. 
This same alluvial dynamic might also have contributed to the very large read counts for Salix , Betula , Populus , Carex and Equisetum in the metagenomic record, implying that neither the proportion of these taxa in the pollen records nor read counts necessarily correlate with their actual abundance in the regional vegetation in terms of biomass or coverage. Finally, we sought to date the age of the plant DNA by phylogenetic placement of the chloroplast DNA. We examined data for the genera Betula , Populus and Salix , because these had both sufficiently high chloroplast genome coverage (with mean depth 24.16×, 57.06× and 27.04×, respectively) and sufficient present-day whole chloroplast reference sequences (Methods). Owing to their age and hence potential genetic distance from the modern reference genomes, we lowered the similarity threshold of uniquely classified reads to 90% and merged these by unit to increase coverage. Both Betula and Salix placed basally to most of the represented species in the respective genera, and the Populus placement results showed support for a mixture of different species related to P. trichocarpa and P. balsamifera (Extended Data Figs. 7 – 9 ). We used the Betula chloroplast reads for a molecular dating analysis, because they were placed confidently on a single edge of the phylogenetic tree (that is, not a mixture as in Populus ), had a large number of reference sequences, and had high coverage in the ancient sample. We used BEAST 47 v1.10.4 to obtain a molecular clock date estimate for our ancient Betula chloroplast sample (see Methods, ‘Molecular dating methods’ for details). We included 31 modern Betula and one Alnus chloroplast reference sequences, used only sites that had a depth of at least 20 in the ancient sample, and included a previously estimated Betula–Alnus chloroplast divergence time 48 of 61.1 Myr for calibration of the root node. 
Our BEAST analysis was robust to both different priors on the age of the ancient sample and to different nucleotide substitution models (Extended Data Fig. 10 ). This yielded a median age estimate of 1.323 Myr, with a 95% HPD of (0.6786, 2.0172) Myr (Fig. 2 ). Animal DNA results The metazoan mitochondrial and nuclear DNA record was much less diverse than that of the plants but contained one extinct family, one family that is absent from Greenland today, and four vertebrate genera native to Greenland, as well as representatives of four invertebrate families (Fig. 4a ). Assignments were based on incomplete and variable representation of reference genomes, so we identified reads to family level and, only where sufficient mitochondrial reads were present, refined the assignment to genus level by placing these reads into mitochondrial phylogenies based on more complete present-day mitochondrial sequences (Supplementary Information, section 6 ). As for the plant reads, uniquely classified animal reads with more than 90% similarity were parsed and merged by unit to increase coverage for phylogenetic placement. Most notably, we found reads in units B2 and B3 assigned to the family Elephantidae, which includes elephants and mammoths but, taxonomically, not mastodons ( Mammut sp.). Mastodons are, however, placed within Elephantidae in the NCBI taxonomy, and reads classified to Elephantidae or below in our analysis therefore include Mammut sp. A consensus genome of our Elephantidae mitochondrial reads falls on the Mammut sp. branch (Fig. 4b ) and is placed basal to all clades of mastodons. However, we note that this placement within the mastodons depends on only two transition single-nucleotide polymorphisms (SNPs), the first supported by a read depth of three and the second by only one (Extended Data Fig. 4 , Methods and Supplementary Information, section 6 ). Furthermore, we attempted to date the recovered mastodon mitochondrial genome using BEAST 49 .
We implemented two dating approaches: one was based on radiocarbon-dated specimens alone, whereas the other used both radiocarbon- and molecular-dated mastodons. The first analysis yielded a median age estimate for our mastodon mitogenome of 1.2 Myr (95% HPD: 191,000 yr–3.27 Myr), and the second approach resulted in a median age estimate of 5.2 Myr (95% HPD: 1.64–10.1 Myr) (Supplementary Fig. 6.8.5 and Supplementary Information, section 6 ). Similarly, reads assigned to the Cervidae support a basal placement on the Rangifer (reindeer and caribou) branch (Extended Data Fig. 3 ). Mitochondrial reads mapping to Leporidae (hares and rabbits) place near the base of the Eurasian hare clade (Extended Data Fig. 2 ), and Leporidae is the only mammal family found in the fossil record 7 . Lepus , specifically Lepus arcticus , is also the only genus in the Leporidae living in Greenland today. Mitochondrial reads assigned to Cricetidae cover only one informative transversion SNP, which places them as deriving from the subfamily Arvicolinae (voles, lemmings and muskrats) (Extended Data Fig. 6 ). For the only avian taxon represented in our dataset—Anatidae, the family of geese and swans—we found a robust basal placement to the genus Branta of black geese, supported by three transversion SNPs with read depths ranging between two and four (Extended Data Fig. 5 ). The refined vertebrate assignments based on mitochondrial references are more biogeographically conserved than those for plants. Dicrostonyx —specifically Dicrostonyx groenlandicus (the Nearctic collared lemming)—is the only genus of the Cricetidae native to Greenland today, just as Rangifer —specifically Rangifer tarandus groenlandicus (the barren-ground caribou)—is the only member of the Cervidae. The mastodon is the exception, as no member of the Elephantidae lives in present-day Greenland.
Ancient DNA from marine organisms The other metazoan taxa identified in the DNA record were a single reef-building coral (Merulinidae) and several arthropods, with matches to two insects—Formicidae (ants) and Pulicidae (fleas)—and one marine family—Limulidae (horseshoe crabs). This is somewhat unexpected, given the rich insect macrofossil record from the Kap København Formation, which comprises more than 200 species, including Formica sp. The marine taxa are less abundant than the terrestrial taxa, and no mitochondrial DNA was identified from marine metazoans. The read lengths, DNA damage and the fact that the reads assigned distribute evenly across the reference genomes suggests that these are not artefacts but may be over-matched DNA sequences of closely related, potentially extinct species within the families that are currently absent from our reference databases owing to poor taxonomic representation. By contrast, Limulidae, in the subphylum Chelicerata , is unlikely to be misidentified as this distinct genus is the only surviving member within its order and thus deeply diverged from other extant organisms. The probable source of these reads is a population of Limulus polyphemus , the only Atlantic member of the genus, which would have spawned directly onto the sediment as it accumulated. Today this genus does not spawn north of the Bay of Fundy (about 45° N), suggesting warmer surface water conditions in the Early Pleistocene at Kap København consistent with the +8 °C annual sea surface temperature anomaly reconstructed for the Pleistocene of the coast of northeast Greenland 50 . By aligning our reads against the Tara Oceans eukaryotic metagenomic assembled genomes (SMAGs) data (Methods), we further reveal the presence of 24 marine planktonic taxa in 14 samples, covering both zooplankton and phytoplankton (Fig. 5 ). These detected SMAGs belong to the supergroups Opisthokonta (6), Stramenopila (15) and Archaeplastida (3). 
The majority of these signals are from SMAGs associated with cold regions in the modern ocean (that is, the Arctic Ocean and Southern Ocean), such as diatoms (Bacillariophyta), Chrysophyceae and the MAST-4 group (Supplementary Table 6.11.1 ), as we expected. However, a few are cosmopolitan, whereas others, such as Archaeplastida (green microalgae), have an oceanic signal that is today confined to more temperate waters in the Pacific Ocean (Fig. 5 ). Although we do not know whether modern day ecologies can be extrapolated to ancient ecosystems, the abundance of green microalgae is believed to be increasing in Arctic regions, which tends to be associated with warming surface waters. Fig. 5: Marine planktonic eukaryotes identified at the Kap København Formation. a , Detection of SMAGs and average damage (D-max) of a SMAG within a member unit. Top, the SMAG distribution in contemporary oceans based on the data of Delmont et al. 63 . The SMAGs are ordered on the basis of phylogenomic inference from Delmont et al. 63 . b – d , Distribution of DNA damage among the taxonomic supergroup Opisthokonta ( b ), Stramenopila ( c ) and Archaeplastida ( d ) (Source Data 1). Full size image Discussion The Kap København ancient eDNA record is extraordinary for several reasons; the upper limit of the 95% highest posterior density of the estimated molecular age is 2.0 Myr and independently supports a geological age of approximately 2 Myr (Fig. 2 ). This implies that the DNA is considerably older than any previously sequenced DNA 21 . Our DNA results detected five times as many plant genera as previous studies using shotgun sequencing of ancient sediments 29 , 34 , 51 , 52 , which is well within the range of the richest northern boreal metabarcoding records 53 . The accuracy of the assignments is strengthened by the observation that 76% of the taxa identified to the level of genus or family also occurred in macrofossil and/or pollen assemblages from the same units. 
Our results demonstrate the potential of ancient environmental metagenomics to reconstruct ancient environments, and to phylogenetically place and date ancient lineages from diverse taxa from around 2 Myr ago (Supplementary Information, section 6 ). Finally, the DNA identified a set of additional plant genera, which occur as macrofossils at other Arctic Late Pliocene and Early Pleistocene sites (Figs. 1 and 3 and Supplementary Information, section 5 ) but not as fossils at Kap København, thereby expanding the spatiotemporal distribution of these ancient floras. Of note, the detection of both Rangifer (reindeer and caribou) and Mammut (mastodon) forces a revision of earlier palaeoenvironmental reconstructions based on the site’s relatively impoverished faunal record, implying both higher productivity and greater habitat diversity for much of the deposition period. Because all the vertebrate taxa identified by DNA are herbivores, their representation may be a function of relative biomass (see discussion on taphonomy in Supplementary Information, section 6 ). Caribou, geese, hares and rodents can all be abundant, at least seasonally, in boreal environments. Additionally, the excrement of large herbivores (such as caribou and particularly mastodons) can be a significant component of sediments 34 . By contrast, carnivores are not represented, consistent with their smaller total biomass. This dynamic also explains the dominance of plant reads over metazoans and, to some extent, differences in representation of various plant genera (Supplementary Information, section 6 ). In the general absence of fossils, DNA may prove the most effective tool for reconstructing the biogeography of vertebrates through the Early Pleistocene. DNA from mastodon must imply a viable population of this large browsing megaherbivore, which would require a more productive boreal habitat than that inferred in earlier reconstructions based primarily on plant macrofossils 7 .
Mastodon dung from a site in central Nova Scotia dating to around 75,000 years ago contained macrofossils from sedges, cattail, bulrush, bryophytes and even charophytes, but was dominated by spruce needles and birch samaras 54 . The Kap København units with mastodon DNA yielded macrofossils and DNA from Betula as well as more thermophilic arboreal taxa including Thuja , Taxus , Cornus and Viburnum , none of which range into Greenland’s hydric Arctic tundra or polar deserts today. The co-occurrence of these taxa in multiple units compels a revision of previous temperature estimates as well as of the presence of permafrost. No single modern plant community or habitat includes the range of taxa represented in many of the macrofossil and DNA samples from Kap København. The community assemblage represents a mixture of modern boreal and Arctic taxa that has no analogue in modern vegetation 10 , 15 . To some degree, this is expected, as the ecological amplitudes of modern members of these genera have been modified by evolution 55 . Furthermore, the combination of the High Arctic photoperiod with warmer conditions and lower atmospheric CO 2 concentrations 56 made the Early Pleistocene climate of North Greenland very different from today. The mixed character of the terrestrial assemblage is also reflected in the marine record, where Arctic and more cosmopolitan SMAGs of Opisthokonta and Stramenopila are found together with horseshoe crabs, corals and green microalgae (Archaeplastida), which today inhabit warmer waters at more southern latitudes. Megaherbivores, particularly mastodons, could have had a significant impact on an interglacial taiga environment, even providing a top-down trophic control on vegetation structure and composition at this high latitude. The presence of mastodons 57 , 58 and the absence of anthropogenic fire, which has had a role in some Holocene boreal habitats 59 , are thus important differences from later boreal ecosystems.
Another important factor is the proximity and biotic richness of the refugia from which pioneer species were able to disperse into North Greenland when conditions became favourable at the beginning of interglacials. The shorter duration of Early Pleistocene glaciations produced less extensive ice sheets, allowing colonization from relatively species-rich coniferous–deciduous woodlands in northeastern Canada 12 , 60 . More extensive glaciation later in the Pleistocene increasingly isolated North Greenland, and later re-colonizations were from increasingly distant and/or less diverse refugia. In summary, we show the power of ancient eDNA to add substantial detail to our knowledge of this unique, ancient open boreal forest community intermixed with Arctic species, a community composition that has no modern analogue and that included mastodons and reindeer, among others. Similar detailed floral and vertebrate DNA records may survive at other localities. If recovered, these would advance our understanding of the variability of climate and biotic interactions during the warmer Early Pleistocene epochs across the High Arctic. Methods Sampling Sediment samples were obtained from the Kap København Formation in North Greenland (82° 24′ 00″ N, 22° 12′ 00″ W) in the summers of 2006, 2012 and 2016 (see Supplementary Table 3.1.1 ). Sampled material consisted of organic-rich permafrost and dry permafrost. Prior to sampling, profiles were cleaned to expose fresh material. Samples were then collected vertically from the slope of the hills, either using a 10-cm-diameter diamond-headed drill bit or by cutting out ~40 × 40 × 40 cm blocks. Sediments were kept frozen in the field and during transportation to the laboratory facility in Copenhagen. Disposable gloves and scalpels were used and changed between each sample to avoid cross-contamination.
In a controlled laboratory environment, the cores and blocks were further sub-sampled, taking only the inner part of sediment cores and leaving 1.5–2 cm between the inner core and the surface, which provided subsamples of approximately 6–10 g. Subsequently, all samples were stored at temperatures below −22 °C. We sampled organic-rich sediment by taking samples and biological replicates across the three stratigraphic units B1, B2 and B3, spanning five different sites: 50 (B3), 69 (B2), 74a (B1), 74b (B1) and 119 (B3). Each biological replicate from each unit at each site was further sampled in different sublayers (numbered L0–L4, Source Data 1, sheet 1). Absolute age dating In 2014, Be and Al oxide targets from eight 1 kg quartz-rich sand samples collected at modern depths ranging from 3 to 21 m below stream-cut terraces were analysed by accelerator mass spectrometry, and the cosmogenic isotope concentrations were interpreted as maximum ages using a simple burial dating approach 1 ( 26 Al: 10 Be versus normalized 10 Be). The 26 Al and 10 Be isotopes were produced by cosmic ray interactions with exposed quartz in regolith and bedrock surfaces in the mountains above Kap København prior to deposition. We assume that the 26 Al: 10 Be ratio was uniform and steady for long time periods in the upper few metres of these gradually eroding palaeo-surfaces. Once eroded by streams and hillslope processes, the quartz sand was deposited in sandy braided stream sediment, deltaic distributary systems, or the near-shore environment and remained effectively shielded from cosmic ray nucleons, buried (many tens of metres) under sediment, intermittent ice shelf or ice sheet cover, and—at least during interglacials—the marine water column until final emergence. The simple burial dating approach assumes that the sand grains experienced only one burial event.
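The single-burial-event model can be sketched as a closed-form age. This is a simplified illustration, not the full isochron treatment used here: the half-lives below are nominal assumed values, and the measured ratio in the example is hypothetical.

```python
import math

# Nominal half-lives in Myr (assumed values for this sketch).
T_HALF_AL26 = 0.717
T_HALF_BE10 = 1.387

def simple_burial_age(ratio_measured, ratio_production=7.42):
    """Maximum simple burial age in Myr. After burial, the 26Al:10Be ratio
    decays from the production ratio R0 as R(t) = R0 * exp(-(lam26 - lam10) * t),
    so t = ln(R0 / R) / (lam26 - lam10)."""
    lam26 = math.log(2) / T_HALF_AL26
    lam10 = math.log(2) / T_HALF_BE10
    return math.log(ratio_production / ratio_measured) / (lam26 - lam10)

# A hypothetical measured ratio of 2.1, with the upper production ratio of
# 7.42 quoted in the text, gives a maximum burial age of roughly 2.7 Myr.
age_myr = simple_burial_age(2.1)
```

Because any pre-burial decay only lowers the starting ratio, this estimate is always a maximum limiting age, as the text emphasises.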
If multiple burial events separated by periods of re-exposure occurred, then the starting 26 Al: 10 Be before the last burial event would be less than the initial production ratio (6.75 to 7.42, see discussion below) owing to the relatively faster decay of 26 Al during burial, and therefore the calculated burial age would be a maximum limiting age. Multiple burial events can be caused by shielding by thick glacier ice in the source area, or by sediment storage in the catchment prior to final deposition. These shielding events mean that the 26 Al: 10 Be is lower, and therefore a calculated burial age assuming the initial production ratio would overestimate the final burial duration. We also consider that once buried, the sand grains may have been exposed to secondary cosmogenic muons (their depth would be too great for submarine nucleonic production). As sedimentation rates in these glaciated near-shore environments are relatively rapid, we show that even the muonic production would be negligible (see Supplementary Information). However, once the marine sediments emerged above sea level, in situ production by both nucleonic and muonic processes could alter the 26 Al: 10 Be. The 26 Al versus 10 Be isochron plot reveals this complex burial history (Supplementary Information, section 3 ), and the concentration versus depth composite profiles for both 26 Al and 10 Be reveal that the shallowest samples may have been exposed during a period of time (~15,000 years ago) that is consistent with deglaciation in the area (Supplementary Information). While we interpret the individual simple burial ages of all samples as maximum limiting ages for the deposition of the Kap København Formation Member B, we recommend using the three most deeply shielded samples in a single depth profile to minimize the effect of post-depositional production. We then calculate a convolved probability distribution age for these three samples (KK06A, B and C).
However, this calculation depends on the 26 Al: 10 Be production ratio we use (that is, between 6.75 and 7.42) and on whether we adjust for erosion in the catchment. We therefore repeat the convolved probability distribution function age for the lowest and highest production ratios and for zero to the maximum possible erosion rate, to obtain the minimum and maximum limiting age range at 1 σ confidence (Supplementary Information, section 3 ). Taking the midpoint between the negative and positive 3 σ confidence limits, we obtain a maximum burial age of 2.70 ± 0.46 Myr. This age is also supported by the position of those three samples on the isochron plot, which suggests that the true age may not be significantly different from this maximum limiting age. Thermal age The extent of thermal degradation of the Kap København DNA was compared to that of the DNA from the Krestovka Mammoth molar. Published kinetic parameters for DNA degradation 64 were used to calculate the relative rate difference over a given interval of the long-term temperature record and to quantify the offset from the reference temperature of 10 °C, thus estimating the thermal age in years at 10 °C for each sample (Supplementary Information, section 4 ). The mean annual air temperature (MAT) for the Kap København sediment was taken from Funder et al. (2001) 6 , and for the Krestovka Mammoth the MAT was calculated using temperature data from the Cerskij Weather Station (WMO no. 251230; 68.80° N, 161.28° E, 32 m), obtained from the International Research Institute Data Library ( ) (Supplementary Table 4.4.1 ). We did not correct for seasonal fluctuation in the thermal age calculation for the Kap København sediments or for the Krestovka Mammoth. We do provide theoretical average fragment lengths for four different thermal scenarios for the DNA in the Kap København sediments (Supplementary Table 4.4.2 ). A correction in the thermal age calculation was applied for altitude using the environmental lapse rate (6.49 °C km −1 ).
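The rate scaling behind the thermal-age calculation can be sketched with a constant-temperature Arrhenius model, using the activation energy of 127 kJ mol−1 cited in this section. Note this is a simplification: the study integrates the rate over a scaled long-term temperature record, so a constant MAT of −17 °C reproduces the order of magnitude but not the exact 2.7-thousand-year figure.

```python
import math

R = 8.314      # gas constant, J mol^-1 K^-1
EA = 127_000   # activation energy for DNA degradation (from the text), J mol^-1

def rate_ratio(t_celsius, t_ref_celsius=10.0):
    """How much faster DNA degrades at the 10 C reference than at t_celsius."""
    t = t_celsius + 273.15
    t_ref = t_ref_celsius + 273.15
    return math.exp(EA / R * (1.0 / t - 1.0 / t_ref))

def thermal_age(age_years, site_temp_celsius):
    """Chronological age rescaled to equivalent years held at a constant 10 C."""
    return age_years / rate_ratio(site_temp_celsius)

# 2 Myr held at a constant -17 C corresponds to only a few thousand
# years of equivalent degradation at 10 C.
ta = thermal_age(2.0e6, -17.0)
```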
We scaled the long-term temperature model of Hansen et al. (2013) 65 to local estimates of current MATs by a scaling factor sufficient to account for the estimates of the local temperature decline at the Last Glacial Maximum, and then estimated the integrated rate using an activation energy (Ea) of 127 kJ mol⁻¹ (ref. 64).
Mineralogic composition
The minerals in each of the Kap København sediment samples were identified using X-ray diffraction and their proportions were quantified using Rietveld refinement. The samples were homogenized by grinding ~1 g of sediment with ethanol for 10 min in a McCrone Mill. The samples were dried at 60 °C, and corundum (CR-1, Baikowski) was added as an internal standard to a final concentration of 20.0 wt%. Diffractograms were collected using a Bruker D8 Advance (Θ–Θ geometry) with a LynxEye detector (opening 2.71°) and Cu Kα1,2 radiation (1.54 Å; 40 kV, 40 mA), using a Ni filter with a thickness of 0.2 mm on the diffracted beam and a beam knife set at 3 mm. We scanned from 5–90° 2θ with a step size of 0.1° and a step time of 4 s while the sample was spun at 20 rpm. The opening of the divergence slit was 0.3° and of the antiscatter slit 3°. Primary and secondary Soller slits had an opening of 2.5° and the opening of the detector window was 2.71°. For the Rietveld analysis, we used the Profex interface for the BGMN software 66, 67. The instrumental parameters and peak broadening were determined by the fundamental parameters ray-tracing procedure 68. A detailed description of the identification of clay minerals can be found in the Supplementary Information.
Adsorption
We used pure or purified minerals for the adsorption studies. The minerals used and the treatments for purifying them are listed in Supplementary Table 4.2.6. The purity of the minerals was checked using X-ray diffraction with the same instrumental parameters and procedures as in the 'Mineralogic composition' section above.
Notes on the origin, purification and impurities can be found in Supplementary Information section 4. We used artificial seawater 69 and salmon sperm DNA (low molecular weight, lyophilized powder, Sigma Aldrich) as a model for eDNA adsorption. A known amount of mineral powder was mixed with seawater and sonicated in an ultrasonic bath for 15 min. The DNA stock was then added to the suspension to reach a final concentration between 20 and 800 μg ml⁻¹. The suspensions were equilibrated on a rotary shaker for 4 h. The samples were then centrifuged and the DNA concentration in the supernatant was determined with UV spectrometry (Biophotometer, Eppendorf), with both positive and negative controls. All measurements were done in triplicate, and we measured five to eight DNA concentrations per mineral. We used the Langmuir and Freundlich equations to fit the experimental isotherms and to obtain the adsorption capacity of a mineral at a given equilibrium concentration.
Pollen
The pollen samples were extracted using the modified Grischuk protocol adopted in the Geological Institute of the Russian Academy of Science, which utilizes sodium pyrophosphate and hydrofluoric acid 70. Slides prepared from six samples were scanned at 400× magnification with a Motic BA 400 compound microscope and photographed using a Moticam 2300 camera. Pollen percentages were calculated as a proportion of the total palynomorphs, including the unidentified grains. Only four of the six samples yielded terrestrial pollen counts of ≥50. In these, the total palynomorphs identified ranged from 71 to 225 (mean = 170.25; median = 192.5). Identifications were made using several published keys 71, 72. The pollen diagram was initially compiled using Tilia version 1.5.12 73 but replotted for this study using Psimpoll 4.10 74.
DNA recovery
For the recovery calculation, we saturated mineral surfaces with DNA.
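The Langmuir fit mentioned above can be done with a standard non-linear least-squares routine. A sketch on synthetic data follows; scipy is assumed, and the capacity, affinity and noise values are illustrative, not measured values from the paper:

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(c, q_max, k):
    """Adsorbed amount at equilibrium concentration c (Langmuir isotherm)."""
    return q_max * k * c / (1.0 + k * c)

# Synthetic "measured" isotherm: equilibrium DNA concentrations (ug/ml) and
# adsorbed amounts (arbitrary units); values are illustrative only.
c_eq = np.array([20.0, 50.0, 100.0, 200.0, 400.0, 800.0])
q_obs = langmuir(c_eq, 12.0, 0.01) + np.array([0.1, -0.2, 0.15, -0.1, 0.05, -0.05])

(q_max_fit, k_fit), _ = curve_fit(langmuir, c_eq, q_obs, p0=(10.0, 0.005))
print(round(q_max_fit, 1))  # close to the true capacity of 12
```

A Freundlich fit (q = K·cⁿ) can be handled the same way by swapping the model function.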
For this, we used the same protocol as for the determination of the adsorption isotherms, with an added step to remove DNA that was not adsorbed but only trapped in the interstitial pores of the wet paste. This step was important because interstitial DNA would increase the amount of apparently adsorbed DNA and overestimate the recovery. To remove trapped DNA after adsorption, we redispersed the minerals in seawater. The process of redispersing the wet paste in seawater, ultracentrifugation and removal of the supernatant lasted less than 2.5 min. After the second centrifugation, the wet pastes were kept frozen until extraction. We used the same extraction protocol as for the Kap København sediments. After the extraction, the DNA concentration was again determined using UV spectrometry.
Metagenomes
A total of 41 samples were extracted for DNA 75 and converted to 65 dual-indexed Illumina sequencing libraries (including 13 negative extraction and library controls) 30. Thirty-four libraries were thereafter subjected to ddPCR using a QX200 AutoDG Droplet Digital PCR System (Bio-Rad) following the manufacturer's protocol. Assays for ddPCR included a P7 index primer (5′-AGCAGAAGACGGCATAC-3′) (900 nM), a gene-targeting primer (900 nM) and a gene-targeting probe (250 nM). We screened for Viridiplantae psbD (primer: 5′-TCATAATTGGACGTTGAACC-3′, probe: 5′-(FAM)ACTCCCATCATATGAAA(BHQ1)-3′) and Poaceae psbA (primer: 5′-CTCACAACTTCCCTCTAGAC-3′, probe: 5′-(HEX)AGCTGCTGTTGAAGTTC(BHQ1)-3′). Additionally, 34 of the 65 libraries were enriched for mammalian mitochondrial DNA using targeted capture with the PaleoChip Arctic1.0 bait set 31, and all libraries were thereafter sequenced on an Illumina HiSeq 4000 (80 bp PE) or a NovaSeq 6000 (100 bp PE).
We sequenced a total of 16,882,114,068 reads which, after low-complexity filtering (Dust = 1), quality trimming (q ≥ 25), duplicate removal and filtering for reads longer than 29 bp (only paired read mates for NovaSeq data), resulted in 2,873,998,429 reads that were parsed for further downstream analysis. We next estimated k-mer similarity between all samples using simka 32 (setting heuristic count for the maximum number of reads (-max-reads 0) and a k-mer size of 31 (-kmer-size 31)), and performed a principal component analysis (PCA) on the obtained distance matrix (see Supplementary Information, 'DNA'). We thereafter parsed all QC reads through HOLI 33 for taxonomic assignment. To increase the resolution and sensitivity of our taxonomic assignment, we supplemented the RefSeq (release 92, excluding bacteria) and NCBI nucleotide databases with a recently published Arctic-boreal plant database (PhyloNorway) and an Arctic animal database 34, searched the NCBI SRA for 139 genomes of boreal animal taxa (March 2020), of which 16 partial to complete genomes were found and added (Source Data 1, sheet 4), and used the GTDB microbial database version 95 as a decoy. All alignments were thereafter merged using samtools and sorted using gz-sort (v. 1). Cytosine deamination frequencies were then estimated using the newly developed metaDMG, by first finding the lowest common ancestor across all possible alignments for each read and then calculating damage patterns for each taxonomic level 36 (Supplementary Information, section 6). In parallel, we computed the mean read length as well as the number of reads per taxonomic node (Supplementary Information, section 6). Our analysis of the DNA damage across all taxonomic levels pointed to a minimum filter for all samples at all taxonomic levels of D-max ≥ 25% and a likelihood ratio (λ-LR) ≥ 1.5.
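The per-taxon ancient-DNA filter amounts to thresholding the damage-model output on amplitude and fit quality. A minimal sketch, with hypothetical field names and records; only the two thresholds come from the text:

```python
# Hypothetical per-taxon summaries in the style of a metaDMG damage fit:
# dmax is the amplitude of C->T deamination at read termini, lr the
# likelihood-ratio support for the damage model (lambda-LR in the text).
records = [
    {"taxon": "Betula", "dmax": 0.31, "lr": 4.2},
    {"taxon": "Homo",   "dmax": 0.02, "lr": 0.1},  # modern contaminant-like
    {"taxon": "Salix",  "dmax": 0.27, "lr": 1.6},
    {"taxon": "Equus",  "dmax": 0.40, "lr": 0.9},  # high damage, poor fit
]

def ancient_taxa(recs, min_dmax=0.25, min_lr=1.5):
    """Keep taxa whose damage signal passes both thresholds
    (D-max >= 25% and lambda-LR >= 1.5, as stated in the text)."""
    return [r["taxon"] for r in recs if r["dmax"] >= min_dmax and r["lr"] >= min_lr]

print(ancient_taxa(records))  # ['Betula', 'Salix']
```

Requiring both conditions is what excludes taxa with high apparent damage but an unreliable fit, and taxa with well-fit but negligible damage.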
This ensured that only taxa showing ancient DNA characteristics were parsed for downstream profiling and analysis, and resulted in no taxa being found within any controls (Supplementary Information, section 6).
Marine eukaryotic metagenome
We sought to identify marine eukaryotes by first taxonomically labelling all quality-controlled reads as Eukaryota, Archaea, Bacteria or Virus using Kraken 2 76 with the parameters '--confidence 0.5 --minimum-hit-groups 3', combined with an extra filtering step that kept only those reads with a root-to-leaf score >0.25. For the initial Kraken 2 search, we used a coarse database created by the taxdb-integration workflow ( ) covering all domains of life and including a genomic database of marine planktonic eukaryotes 63 that contains 683 metagenome-assembled genomes (MAGs) and 30 single-cell genomes (SAGs) from Tara Oceans 77. Following the naming convention in Delmont et al. 63, we refer to them as SMAGs. Reads labelled as root, unclassified, archaea, bacteria and virus were refined through a second Kraken 2 labelling step using a high-resolution database containing archaea, bacteria and virus created by the taxdb-integration workflow. We used the same Kraken 2 parameters and filtering thresholds as in the initial search. Both Kraken 2 databases were built with parameters optimized for the study read length (--kmer-len 25 --minimizer-len 23 --minimizer-spaces 4). Reads labelled as eukaryota, root and unclassified were thereafter mapped with Bowtie2 78 against the SMAGs. We used MarkDuplicates from Picard ( ) to remove duplicates, and then calculated the mapping statistics for each SMAG in the BAM files with the filterBAM program ( ). We furthermore estimated the post-mortem damage of the filtered BAM files with the Bayesian methods in metaDMG and selected those SMAGs with a D-max ≥ 0.25 and a fit quality (λ-LR) higher than 1.5.
The SMAGs with fewer than 500 mapped reads, a mean read average nucleotide identity (ANI) of less than 93%, or a breadth-of-coverage ratio and coverage evenness of less than 0.75 were removed. We followed a data-driven approach to select the mean read ANI threshold, exploring the variation in mapped reads as a function of mean read ANI values from 90% to 100% and identifying the elbow point in the curve (Supplementary Fig. 6.11.1). We used anvi'o 79 in manual mode to plot the mapping and damage results, using the SMAG phylogenomic tree inferred by Delmont et al. as reference. We used the oceanic signal of Delmont et al. as a proxy for the contemporary distribution of the SMAGs in each ocean and sea (Fig. 5 and Supplementary Information, section 6).
Comparison of DNA, macrofossil and pollen
To allow comparison between the DNA, macrofossil and pollen records, the taxonomy was harmonized following the Pan Arctic Flora checklist 43 and NCBI. For example, since Bennike (1990) 18, Potamogeton has been split into Potamogeton and Stuckenia, Polygonum has been split into Polygonum and Bistorta, and Saxifraga has been split into Saxifraga and Micranthes, whereas other taxa have been merged, such as Melandrium with Silene 40. Plant families have changed names; for instance, Gramineae is now called Poaceae, and Scrophulariaceae has been re-circumscribed to exclude Plantaginaceae and Orobancheae 80. We then classified the taxa into the following categories: category 1, identical genera recorded by DNA and by macrofossils or pollen; category 2, genera recorded by DNA and also found by macrofossils or pollen, including genera contained within family-level classifications; category 3, taxa only recorded by DNA; and category 4, taxa only recorded by macrofossils or pollen (Source Data 1).
Phylogenetic placement
We sought to phylogenetically place the ancient taxa with the most abundant numbers of assigned reads and a sufficient number of reference sequences to build a phylogeny.
These taxa include reads mapped to the chloroplast genomes of the plant genera Salix, Populus and Betula, and to the mitochondrial genomes of the animal families Elephantidae, Cricetidae and Leporidae, as well as the subfamilies Capreolinae and Anserinae. Although the chloroplast genome is somewhat less stable than the plant mitochondrial genome, it has a faster rate of evolution and is non-recombining, and hence is more likely to contain informative sites for our analysis than the plant mitochondria 81. Like the mitochondrial genome, the chloroplast genome also has a high copy number, so we would expect a high number of sedimentary reads mapping to it. For each of these taxa, we downloaded a representative set of either whole-chloroplast or whole-mitochondrial genome fasta sequences from NCBI GenBank 82, including a single representative sequence from a recently diverged outgroup. For the Betula genus, we also included three chloroplast genomes from the PhyloNorway database 34, 83. We changed all ambiguous bases in the fasta files to N. We used MAFFT 84 to align each of these sets of reference sequences and inspected the multiple sequence alignments in NCBI MSAViewer to confirm their quality 85. We trimmed mitochondrial alignments of insufficient quality due to highly variable control regions for Leporidae, Cricetidae and Anserinae by removing the d-loop in MegaX 86. The BEAST suite 49 was used with default parameters to create ultrametric phylogenetic trees for each of the five sets of taxa from the multiple sequence alignments (MSAs) of reference sequences, which were converted from Nexus to Newick format in Figtree ( ). We then passed the multiple sequence alignments to the Python module AlignIO from BioPython 87 to create a reference consensus fasta sequence for each set of taxa. Furthermore, we used SNPSites 88 to create a vcf file from each of the MSAs.
Since SNPSites outputs a slightly different format for missing data than needed for downstream analysis, we used a custom R script to modify the vcf format appropriately. We also filtered out non-biallelic SNPs. From the damage filtered ngsLCA output, we extracted all readIDs uniquely classified to reference sequences within these respective taxa or assigned to any common ancestor inside the taxonomic group and converted these back to fastq files using seqtk ( ). We merged reads from all sites and layers to create a single read set for each respective taxon. Next, since these extracted reads were mapped against a reference database including multiple sequences from each taxon, the output files were not on the same coordinate system. To circumvent this issue and avoid mapping bias, we re-mapped each read set to the consensus sequence generated above for that taxon using bwa 89 with ancient DNA parameters (bwa aln -n 0.001). We converted these reads to bam files, removed unmapped reads, and filtered for mapping quality > 25 using samtools 90 . This produced 103,042, 39,306, 91,272, 182 and 129 reads for Salix , Populus , Betula , Elephantidae and Capreolinae, respectively. We next used pathPhynder 62 , a phylogenetic placement algorithm that identifies informative markers on a phylogeny from a reference panel, evaluates SNPs in the ancient sample overlapping these markers, and traverses the tree to place the ancient sample according to its derived and ancestral SNPs on each branch. We used the transversions-only filter to avoid errors due to deamination, except for Betula , Salix and Populus in which we used no filter due to sufficiently high coverage. Last, we investigated the pathPhynder output in each taxon set to determine the phylogenetic placement of our ancient samples (see Supplementary Information for discussion on phylogenetic placement). 
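The transversions-only option guards against cytosine deamination, which mimics C→T and G→A transitions in ancient DNA. The distinction can be sketched in a few lines:

```python
PURINES = {"A", "G"}

def is_transversion(ref, alt):
    """True if the substitution swaps a purine for a pyrimidine or vice
    versa. Transitions (A<->G, C<->T) are excluded because post-mortem
    deamination damage produces apparent C->T/G->A changes."""
    ref, alt = ref.upper(), alt.upper()
    if ref == alt:
        return False
    return (ref in PURINES) != (alt in PURINES)

snps = [("C", "T"), ("G", "A"), ("A", "C"), ("G", "T")]
print([pair for pair in snps if is_transversion(*pair)])  # [('A', 'C'), ('G', 'T')]
```

Discarding transitions costs informative sites, which is why the text relaxes the filter for the high-coverage Betula, Salix and Populus read sets.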
Based on the analysis described above, we further investigated the phylogenetic placement within the genus Mammut (mastodons). To avoid mapping reference biases in the downstream results, we first built a consensus sequence from all comparative mitochondrial genomes used in that analysis and mapped the reads identified in ngsLCA as Elephantidae to this consensus. Consensus sequences were constructed by first aligning all sequences of interest using MAFFT 84 and taking a majority-rule consensus base in Geneious v2020.0.5 ( ). We performed three analyses for the phylogenetic placement of our sequence: (1) comparison against a single representative from each Elephantidae species including the sea cow (Dugong dugon) as outgroup; (2) comparison against a single representative from each Elephantidae species; and (3) comparison against all published mastodon mitochondrial genomes including the Asian elephant as outgroup. For each of these analyses we first built a new reference tree using BEAST v1.10.4 (ref. 47) and repeated the previously described pathPhynder steps, with the exception that the pathPhynder tree path analysis for the Mammut SNPs was based on both transitions and transversions, not restricting to transversions only, due to low coverage.
Mammut americanum
We confirmed the phylogenetic placement of our sequence using a selection of Elephantidae mitochondrial reference sequences, a GTR+G substitution model, a strict clock and a birth-death tree prior, and ran the MCMC chain for 20,000,000 iterations, sampling every 20,000 steps. Convergence was assessed using Tracer 91 v1.7.2 and an effective sample size (ESS) > 200. To determine the approximate age of our recovered mastodon mitogenome, we performed a molecular dating analysis with BEAST 47 v1.10.4. We used two separate approaches when dating our mastodon mitogenome, as demonstrated in a recent publication 92.
First, we determined the age of our sequence by comparing it against a dataset of radiocarbon-dated specimens (n = 13) only. Secondly, we estimated the age of our sequence using both molecularly dated (n = 22) and radiocarbon-dated (n = 13) specimens, using the molecular dates previously determined 92. We utilized the same BEAST parameters as Karpinski et al. 92 and set the age of our sample with a gamma distribution (5% quantile: 8.72 × 10⁴; median: 1.178 × 10⁶; 95% quantile: 5.093 × 10⁶; initial value: 74,900; shape: 1; scale: 1,700,000). In short, we specified a GTR+G4 substitution model, a strict clock and a constant population size, and ran the Markov chain Monte Carlo for 50,000,000 iterations, sampling every 50,000 steps. Convergence of the run was again assessed using Tracer.
Molecular dating methods
In this section, we describe the molecular dating of the ancient birch (Betula) chloroplast genome using BEAST v1.10.4 (ref. 47). In principle, the genera Betula, Populus and Salix all had both sufficiently high chloroplast genome coverage (mean depths of 24.16×, 57.06× and 27.04×, respectively, although this coverage is highly uneven across the chloroplast genome) and enough reference sequences to attempt molecular dating. Notably, this is one of the reasons we included a recently diverged outgroup with a divergence time estimate in each of these phylogenetic trees. However, our Populus sample clearly contained a mixture of different species, as seen from its inconsistent placement in the pathPhynder output. In particular, there were multiple supporting SNPs to both Populus balsamifera and Populus trichocarpa, and both supporting and conflicting SNPs on the branches above.
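The gamma prior placed on the mastodon sample age above has shape 1, so it reduces to an exponential distribution, and the quoted quantiles can be checked in closed form:

```python
import math

SCALE = 1_700_000.0  # gamma scale parameter from the text; shape = 1

def quantile(q, scale=SCALE):
    """Inverse CDF of a gamma(shape=1, scale) distribution, i.e. an
    exponential: CDF(x) = 1 - exp(-x/scale), so ppf(q) = -scale*ln(1-q)."""
    return -scale * math.log(1.0 - q)

# Reproduces the prior summaries quoted in the text:
print(round(quantile(0.05)))  # ~8.72e4  (5% quantile)
print(round(quantile(0.50)))  # ~1.178e6 (median)
print(round(quantile(0.95)))  # ~5.093e6 (95% quantile)
```

This kind of closed-form check is a quick way to confirm that a prior specified in an XML configuration matches the summary statistics reported for it.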
Furthermore, upon inspection, our Salix sample contained a surprisingly high number of private SNPs which is inconsistent with any ancient or even modern age, especially considering the number of SNPs assigned to the edges of the phylogenetic tree leading to other Salix sequences. We are unsure what causes this inconsistency but hypothesize that our Salix sample is also a mixed sample, containing multiple Salix species that diverged from the same placement branch on the phylogenetic tree at different time periods. This is supported by looking at all the reads that cover these private SNP sites, which generally appear to be from a mixed sample, with reads containing both alternate and reference alleles present at a high proportion in many cases. Alternatively, or potentially jointly in parallel, this could be a consequence of the high number of nuclear plastid DNA sequences (NUPTs) in Salix 93 . Because of this, we continued with only Betula . First, we downloaded 27 complete reference Betula chloroplast genome sequences and a single Alnus chloroplast genome sequence to use as an outgroup from the NCBI Genbank repository, and supplemented this with three Betula chloroplast sequences from the PhyloNorway database generated in a recent study 29 , for a total of 31 reference sequences. Since chloroplast sequences are circular, downloaded sequences may not always be in the same orientation or at the same starting point as is necessary for alignment, so we used custom code ( ) that uses an anchor string to rotate the reference sequences to the same orientation and start them all from the same point. We created a MSA of these transformed reference sequences with Mafft 84 and checked the quality of our alignment by eye in Seqotron 94 and NCBI MsaViewer. Next, we called a consensus sequence from this MSA using the BioAlign consensus function 87 in Python, which is a majority rule consensus caller. 
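A majority-rule consensus over an MSA simply takes the most frequent base per column. A minimal sketch follows; the handling of ties, gaps and Ns varies between tools, so the choices below are assumptions rather than the exact behaviour of the BioPython function used:

```python
from collections import Counter

def majority_consensus(alignment):
    """Column-wise majority-rule consensus of equal-length aligned
    sequences; gaps and Ns are ignored unless a column contains
    nothing else, in which case N is emitted."""
    consensus = []
    for column in zip(*alignment):
        counts = Counter(base for base in column if base not in "-N")
        consensus.append(counts.most_common(1)[0][0] if counts else "N")
    return "".join(consensus)

msa = ["ACGT-A",
       "ACGTTA",
       "ACCTTA"]
print(majority_consensus(msa))  # ACGTTA
```

Mapping to such a consensus, rather than to any single reference genome, is what reduces reference bias in the downstream variant calls.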
We used this consensus sequence as the mapping target for the ancient Betula reads, both to avoid reference bias and to place the ancient sample on the same coordinates as the reference MSA. From the last-common-ancestor output in metaDMG 36, we extracted read sets for all units, sites and levels that were uniquely classified to the taxonomic level of Betula or lower, with a minimum sequence similarity of 90% to any Betula sequence, using Seqtk 95. We mapped these read sets against the consensus Betula chloroplast genome using BWA 89 with ancient DNA parameters (-o 2 -n 0.001 -t 20), then removed unmapped reads, filtered for read quality ≥25 and sorted the resulting bam files using samtools 89. For the purpose of molecular dating, it is appropriate to consider these read sets as a single sample, so we merged the resulting bam files into one sample using samtools. We used bcftools 89 to make an mpileup and call a vcf file, using options for haploidy and disabling the default calling algorithm, which can slightly bias the calls towards the reference sequence, in favour of a majority call on bases that passed the default base-quality cut-off of 13. We included the default option using base alignment qualities 96, which we found greatly reduced the read depths of some bases and removed spurious SNPs around indel regions. Lastly, we filtered the vcf file to include only single-nucleotide variants, because we do not consider other variants, such as insertions or deletions, in an ancient environmental sample of this type to be of sufficiently high confidence to include in molecular dating. We downloaded the gff3 annotation file for the longest Betula reference sequence, MG386368.1, from NCBI.
Using custom R code 97, we parsed this file and the associated fasta to label individual sites as protein-coding regions (in which we labelled each base with its position in the codon according to the phase and strand noted in the gff3 file), RNA, or neither coding nor RNA. We extracted the coding regions and checked in Seqotron 94 and R that they translated well to a protein alignment (for example, no premature stop codons), both in the reference sequence and at the associated positions in the ancient sequence. Although the modern reference sequence's coding regions translated to a high-quality protein alignment, translating the associated positions in the ancient sequence with no depth cut-off led to premature stop codons and an overall poor-quality protein alignment. On the other hand, when using a depth cut-off of 20 and replacing sites in the ancient sequence that did not meet this filter with N, we obtained a high-quality protein alignment (except for the N sites). We also interrogated any positions in the ancient sequence that differed from the consensus, and found that any suspicious regions (for example, with multiple SNPs clustered closely together in the genome) were removed by the depth cut-off of 20. Because of this, we retained only sites in both the ancient and modern samples that met a depth of at least 20 in the ancient sample, which comprised about 30% of the total sites. Next, we parsed this annotation through the multiple sequence alignment to create partitions for BEAST 47. After checking how many polymorphic and total sites were in each, we decided to use four partitions: (1) sites belonging to protein-coding positions 1 and 2; (2) coding position 3; (3) RNA; and (4) non-coding and non-RNA. To ensure that these were high-confidence sites, each partition also included only those positions with a depth of at least 20 in the ancient sequence and fewer than 3 total gaps in the multiple sequence alignment.
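The per-site filter and partition assignment can be expressed compactly; the field names below are hypothetical stand-ins for the parsed annotation, while the thresholds and the four-partition scheme are from the text:

```python
def assign_partitions(sites, min_depth=20, max_gaps=2):
    """sites: iterable of dicts with hypothetical fields
       role  - 'codon1', 'codon2', 'codon3', 'rna' or 'other'
       depth - read depth at the site in the ancient sample
       gaps  - number of gap characters in the MSA column
    Returns partition id 1-4 per site (codon positions 1+2 -> 1,
    codon position 3 -> 2, RNA -> 3, other -> 4), or None for sites
    failing the depth >= 20 / fewer-than-3-gaps filters."""
    partition_of = {"codon1": 1, "codon2": 1, "codon3": 2, "rna": 3, "other": 4}
    return [
        partition_of[s["role"]] if s["depth"] >= min_depth and s["gaps"] <= max_gaps
        else None
        for s in sites
    ]

sites = [
    {"role": "codon1", "depth": 35, "gaps": 0},
    {"role": "codon3", "depth": 35, "gaps": 0},
    {"role": "rna",    "depth": 12, "gaps": 0},  # fails the depth filter
    {"role": "other",  "depth": 40, "gaps": 5},  # fails the gap filter
]
print(assign_partitions(sites))  # [1, 2, None, None]
```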
This gave partitions with 11,668, 5,828, 2,690 and 29,538 sites, respectively. We used these four partitions to run BEAST 47 v1.10.4, with unlinked substitution models for each partition and a strict clock, with a different relative rate for each partition. (There was insufficient information in these data to infer between-lineage rate variation from a single calibration.) We assigned an age of 0 to all of the reference sequences, and used a normal distribution prior with mean 61.1 Myr and standard deviation 1.633 Myr for the root height 48; the standard deviation was obtained by conservatively converting the 95% HPD to z-scores. For the overall tree prior, we selected the coalescent model. The age of the ancient sequence was estimated following the overall procedures of Shapiro et al. (2011) 98. To assess sensitivity to the choice of prior for this unknown date, we used two different priors: a gamma distribution skewed towards a younger age (shape = 1, scale = 1.7), and a uniform prior on the range (0, 10 Myr). We also compared two different models of rate variation among sites and substitution types within each partition: a GTR+G model with four rate categories and base frequencies estimated from the data, and the much simpler Jukes-Cantor model, which assumes no variation among substitution types or sites within each partition. All other priors were set at their defaults. Neither the rate model nor the prior choice had a qualitative effect on the results (Extended Data Fig. 10). We also ran the coding regions alone, since they translated correctly and are therefore highly reliable sites, and found that they gave the same median and a much larger confidence interval, as expected when using fewer sites (Extended Data Fig. 10). We ran each Markov chain Monte Carlo for a total of 100 million iterations.
After removing a burn-in of the first 10%, we verified convergence in Tracer 91 v1.7.2 (apparent stationarity of traces, and all parameters having an effective sample size > 100). We also verified that the resulting MCC tree from TreeAnnotator 47 placed the ancient sequence phylogenetically identically to the pathPhynder 62 placement, which is shown in Extended Data Fig. 9. For our major results, we report the uniform ancient-age prior and the GTR+G4 model applied to each of the four partitions. The associated XML is given in Source Data 3. The 95% HPD was (0.6786, 2.0172) Myr for the age of the ancient Betula chloroplast sequence, with a median estimate of 1.323 Myr, as shown in Fig. 2.
Reporting summary
Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article.
Data availability
Raw sequence data (13,135,646,556 reads following adapter trimming) are available through the ENA project accession PRJEB55522. Pollen counts are available through . Source data are provided with this paper.
Code availability
All code used is available at .
Scientists discovered the oldest known DNA and used it to reveal what life was like 2 million years ago in the northern tip of Greenland. Today, it's a barren Arctic desert, but back then it was a lush landscape of trees and vegetation with an array of animals, even the now extinct mastodon. "The study opens the door into a past that has basically been lost," said lead author Kurt Kjær, a geologist and glacier expert at the University of Copenhagen. With animal fossils hard to come by, the researchers extracted environmental DNA, also known as eDNA, from soil samples. This is the genetic material that organisms shed into their surroundings—for example, through hair, waste, spit or decomposing carcasses. Studying really old DNA can be a challenge because the genetic material breaks down over time, leaving scientists with only tiny fragments. But with the latest technology, researchers were able to get genetic information out of the small, damaged bits of DNA, explained senior author Eske Willerslev, a geneticist at the University of Cambridge. In their study, published Wednesday in the journal Nature, they compared the DNA to that of different species, looking for matches. The samples came from a sediment deposit called the Kap København formation in Peary Land. Today, the area is a polar desert, Kjær said.
This illustration provided by researchers depicts Kap Kobenhavn, Greenland, two million years ago, when the temperature was significantly warmer than northernmost Greenland today. Scientists have analyzed 2-million-year-old DNA extracted from dirt samples in the area, revealing an ancient ecosystem unlike anything seen on Earth today, including traces of mastodons and horseshoe crabs roaming the Arctic. Credit: Beth Zaiken via AP
But millions of years ago, this region was undergoing a period of intense climate change that sent temperatures up, Willerslev said.
Sediment likely built up for tens of thousands of years at the site before the climate cooled and cemented the finds into permafrost. The cold environment would help preserve the delicate bits of DNA—until scientists came along and drilled the samples out, beginning in 2006. During the region's warm period, when average temperatures were 20 to 34 degrees Fahrenheit (11 to 19 degrees Celsius) higher than today, the area was filled with an unusual array of plant and animal life, the researchers reported. The DNA fragments suggest a mix of Arctic plants, like birch trees and willow shrubs, with ones that usually prefer warmer climates, like firs and cedars. The DNA also showed traces of animals including geese, hares, reindeer and lemmings. Previously, a dung beetle and some hare remains had been the only signs of animal life at the site, Willerslev said.
Professors Eske Willerslev and Kurt H. Kjaer expose fresh layers for sampling of sediments at Kap Kobenhavn, Greenland. Credit: Svend Funder via AP
One big surprise was finding DNA from the mastodon, an extinct species that looks like a mix between an elephant and a mammoth, Kjær said. Many mastodon fossils have previously been found from temperate forests in North America. That's an ocean away from Greenland, and much farther south, Willerslev said. "I wouldn't have, in a million years, expected to find mastodons in northern Greenland," said Love Dalen, a researcher in evolutionary genomics at Stockholm University who was not involved in the study. Because the sediment built up in the mouth of a fjord, researchers were also able to get clues about marine life from this time period.
The DNA suggests horseshoe crabs and green algae lived in the area—meaning the nearby waters were likely much warmer back then, Kjær said. By pulling dozens of species out of just a few sediment samples, the study highlights some of eDNA's advantages, said Benjamin Vernot, an ancient DNA researcher at Germany's Max Planck Institute for Evolutionary Anthropology who was not involved in the study.
This 2006 photo provided by researchers shows a close-up of organic material in coastal deposits at Kap Kobenhavn, Greenland. The organic layers show traces of the rich plant flora and insect fauna that lived two million years ago. Credit: Svend Funder via AP
"You really get a broader picture of the ecosystem at a particular time," Vernot said. "You don't have to go and find this piece of wood to study this plant, and this bone to study this mammoth." Based on the data available, it's hard to say for sure whether these species truly lived side by side, or if the DNA was mixed together from different parts of the landscape, said Laura Epp, an eDNA expert at Germany's University of Konstanz who was not involved in the study.
This 2006 photo provided by researchers shows geological formations at Kap Kobenhavn, Greenland. At the bottom of the section, the deep marine deposits are overlain by the coastal deposits of fine sandy material. The two people at the top are sampling for environmental DNA. Credit: Svend Funder via AP
But Epp said this kind of DNA research is valuable to show "hidden diversity" in ancient landscapes.
Willerslev believes that because these plants and animals survived during a time of dramatic climate change, their DNA could offer a "genetic roadmap" to help us adapt to current warming. Stockholm University's Dalen expects ancient DNA research to keep pushing deeper into the past. He worked on the study that previously held the "oldest DNA" record, from a mammoth tooth around a million years old. This 2006 photo provided by researchers shows the landscape at Kap Kobenhavn, Greenland. The many hills have been formed by rivers running towards the coast. Scientists have analyzed 2-million-year-old DNA extracted from dirt samples in the area, revealing an ancient ecosystem unlike anything seen on Earth today, including traces of mastodons and horseshoe crabs roaming the Arctic. Credit: Kurt H. Kjaer via AP "I wouldn't be surprised if you can go at least one or perhaps a few million years further back, assuming you can find the right samples," Dalen said.
10.1038/s41586-022-05453-y
Medicine
Study of ASU football team produces largest known dataset for concussion diagnostics
Ashish Yeri et al, Total Extracellular Small RNA Profiles from Plasma, Saliva, and Urine of Healthy Subjects, Scientific Reports (2017). DOI: 10.1038/srep44061 Journal information: Scientific Reports
http://dx.doi.org/10.1038/srep44061
https://medicalxpress.com/news/2017-03-asu-football-team-largest-dataset.html
Abstract Interest in circulating RNAs for monitoring and diagnosing human health has grown significantly. There are few datasets describing baseline expression levels for total cell-free circulating RNA from healthy control subjects. In this study, total extracellular RNA (exRNA) was isolated and sequenced from 183 plasma samples, 204 urine samples and 46 saliva samples from 55 male college athletes ages 18–25 years. Many participants provided more than one sample, allowing us to investigate variability in an individual’s exRNA expression levels over time. Here we provide a systematic analysis of small exRNAs present in each biofluid, as well as an analysis of exogenous RNAs. The small RNA profile of each biofluid is distinct. We find that a large number of RNA fragments in plasma (63%) and urine (54%) have sequences that are assigned to YRNA and tRNA fragments respectively. Surprisingly, while many miRNAs can be detected, there are few miRNAs that are consistently detected in all samples from a single biofluid, and profiles of miRNA are different for each biofluid. Not unexpectedly, saliva samples have high levels of exogenous sequence that can be traced to bacteria. These data significantly contribute to the current number of sequenced exRNA samples from normal healthy individuals. Introduction The field of circulating extracellular molecules is rapidly growing, fueled by the potential for development of new diagnostic and therapeutic tools. As the field is still largely in an exploratory and descriptive phase, there are no standardized methods for sample collection, isolation, or analysis. It is currently unclear what the expectations for a good quality sample should be, and each biofluid under various disease and injury conditions will likely have diverse contents and different criteria for quality. 
Large datasets examining different biofluids, isolation methods, detection platforms and analysis tools are important to further our understanding of the extent and types of extracellular material present in biofluids. These data will help inform us about how best to develop additional tools to enrich and capture specific types of information. Circulating extracellular molecular material includes RNAs, DNA, lipids, and proteins (reviewed in refs 1 , 2 , 3 , 4 ). Carrying these materials to their targets, cells/tissues/organs, and protecting them from degradation, are a variety of extracellular vesicles, lipoproteins, and other RNA-binding proteins 5 , 6 , 7 . A growing number of isolation methods for profiling circulating extracellular molecules have been, and are being, developed. There is still considerable work necessary to identify the most efficient inclusive or selective protocols, depending on the downstream question. There is also a need for rigorous characterization of the biological functions of circulating extracellular RNAs. There are few large datasets describing the extracellular contents in biofluid samples from normal controls 8 , 9 , 10 , 11 , 12 , 13 . Here we describe the largest dataset to date, focusing on total cell-free RNA (extracellular RNA), using next generation sequencing to profile the small RNA (16–32 nts) payload of human biofluids. Extracellular RNA was isolated and the small RNA content sequenced for 183 plasma samples, 204 urine samples and 46 saliva samples from 55 male college athletes ages 18–25 years. We examined the total RNA in an attempt not to exclude any information in our profile, as extracellular RNAs are packaged not only into extracellular vesicles, but are also associated with lipoproteins 5 and AGO2 6 , 7 . We profiled the RNA contents from these samples and examined the prevalence of different RNA biotypes. 
YRNA and tRNA fragments have previously been identified as abundant RNA species in extracellular vesicles and in biofluids 14 , 15 , 16 , 17 , 18 , 19 , 20 . We also found significant expression of YRNA and tRNA fragments in plasma and urine, respectively. The reasons for such high levels of these small RNA fragments and their functions, are not yet known. YRNA fragments do not appear to be a part of the same regulatory pathway as miRNA or generated by Dicer 21 . They may play a role in apoptosis 18 , 22 , the degradation of misprocessed RNAs 23 , and/or DNA replication 24 . tRNA fragments (TRFs) can be created in response to stress (reviewed in ref. 25 ). tRNA fragments have also been found to have a unique role in displacing RNA binding proteins that can protect mRNAs from degradation 26 . It should be noted, that tRNAs also have many different modifications along them, which may interfere with their full representation by conventional sequencing approaches 27 , 28 . miRNAs make up a large portion of the reads generated for each sample. miRNAs typically function in post-transcriptional gene regulation. Their role in extracellular vesicles appears to be less well understood and is potentially more diverse - as a means to rapidly remove miRNA, and as a way to alter local tissue microenvironment and regulate surrounding cells (reviewed in refs 3 and 29 ). We also detect piRNAs that repress transposable elements in the germline (reviewed in ref. 30 ). It is unknown what role piRNA play in circulating biofluids from normal healthy individuals. We also detect other small RNA species 31 that are expressed at low levels in biofluids. Throughout the analysis, it is important to bear in mind that sequencing data provides information on the proportion of RNA biotypes relative to one another in each sample, but not absolute concentrations of RNAs. Therefore, many of the analyses are presented as RPM or percentages of total input reads or reads mapping to the human genome. 
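The RPM normalization referred to above can be sketched in a few lines. This is a minimal illustration, not the study's actual pipeline code; the counts below are invented, chosen only to echo the ~63% YRNA share later reported for plasma.

```python
# Minimal sketch of reads-per-million (RPM) normalization: a raw read count
# is scaled by the number of reads mapped to the human genome (not by total
# input reads), as described in the text.

def rpm(count, genome_mapped_reads):
    """Scale a raw read count to reads per million genome-mapped reads."""
    return count / genome_mapped_reads * 1_000_000

genome_mapped = 8_000_000   # hypothetical reads aligned to the human genome
yrna_count = 5_040_000      # hypothetical reads assigned to YRNA (63%)
yrna_rpm = rpm(yrna_count, genome_mapped)
```

Because RPM is a proportion of mapped reads, it supports comparisons of biotype composition between samples but, as the text notes, says nothing about absolute RNA concentrations.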
Results Summary of the input read alignments Total cell-free RNA was isolated from plasma, urine, and saliva; sequencing libraries were created using the Illumina TruSeq small RNA preparation kit. We sequenced the small RNA contents from 183 plasma samples, 204 urine samples and 46 saliva samples taken from male college athletes aged 18–25. In our comparison of small RNAs in biofluids, we excluded plasma samples with <5% miRNA mapped reads and urine and saliva samples with <0.5% miRNA mapped reads (calculated from the total number of reads mapped to the human genome). The average number of input reads for plasma samples was ~8.9 million reads (median ~8.1 million reads; interquartile range (IQR): 4.7–10.5 million), ~11.6 million reads for urine samples (median 10.3 million reads; IQR: 7–15.8 million), and ~19 million reads for saliva samples (median 14.8 million reads; IQR: 10.9–22.6 million). Accordingly, 161 plasma, 159 urine, and 30 saliva samples fit these criteria and were included in the forthcoming analyses. Figure 1A displays the percentage of input reads that aligned to the human genome (blue), aligned to human rRNA (green), were too short after adaptor removal (<15 nucleotides; yellow), or did not align to the human genome (orange) for each sample in the analysis. A detailed sample list with the percentage of reads in each category can be found in Supplementary Table S1. After removal of reads that were too short and reads that mapped to human rRNA, the average percentage of reads aligned to the human genome was 86% for plasma samples (median 93%; IQR: 85–96%), 68% for urine (median 74%; IQR: 56–86%), and 32% for saliva (median 34%; IQR: 28–36%). The percentage of rRNA is highest in the urine samples with an average of 8% (median 6%), followed by saliva samples with an average of 5% of the reads (median ~4%), and plasma with 1% of the total input reads mapping to rRNA (median <1%).
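The sample-inclusion rule described above (plasma excluded below 5% miRNA-mapped reads; urine and saliva below 0.5%) can be sketched as a simple filter. The function and sample tuples below are illustrative, not taken from the study's code.

```python
# Hedged sketch of the QC filter: a sample is kept if the share of its
# genome-mapped reads assigned to miRNA meets a biofluid-specific cutoff.

MIRNA_FRACTION_CUTOFF = {"plasma": 0.05, "urine": 0.005, "saliva": 0.005}

def passes_qc(biofluid, mirna_reads, genome_mapped_reads):
    """Return True if the sample meets the miRNA-fraction cutoff."""
    if genome_mapped_reads == 0:
        return False
    return mirna_reads / genome_mapped_reads >= MIRNA_FRACTION_CUTOFF[biofluid]

samples = [
    ("plasma", 600_000, 10_000_000),   # 6% miRNA  -> keep
    ("plasma", 300_000, 10_000_000),   # 3% miRNA  -> drop (below 5%)
    ("urine",  80_000,  10_000_000),   # 0.8% miRNA -> keep (above 0.5%)
]
kept = [s for s in samples if passes_qc(*s)]
```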
The average percentage of reads that are too short (<15 nucleotides) in the saliva samples are ~18% (median 8%), in urine samples it is ~18% (median 11%), and in plasma samples it is 9% (median 4%). In the saliva samples, a median of ~45% of the reads are unaligned to the human genome. Figure 1: Distribution of total input reads and reads mapped to the genome. ( A ) Displays the alignment of the input reads for each biofluid to the human genome, human rRNA, reads that were too short (<15 nts), and unaligned to the human genome. ( B ) Displays the distribution of the reads mapped to the human genome to RNA biotypes: miRNA, tRNA, piRNA, protein-coding fragments, miRNA hairpins, Mt_tRNA (mitochondrial tRNA), oncRNA (other non coding RNA), snoRNA, snRNA, Vault RNA, YRNA, tRNA flanking regions, 3′ and 5′ (50 bps flanking the mature tRNA sequence), more than one RNA biotype, and reads that are unassigned (intergenic, intronic, and overlapping with >40 regions to the genome). Full size image Summary of small RNA biotypes We next examined the percentage of reads assigned to abundant RNA categories or RNA biotypes ( Fig. 1B ). The reads that remain after mapping to rRNA and miRNA are mapped simultaneously to mature tRNAs, mature piRNAs and to all other RNA transcripts in GENCODE (Ensembl 75). This alignment strategy leads to a large percentage of read sequences that are identical in more than one RNA biotype. These sequences that overlap exactly with more than one RNA biotype, form the most abundant category of reads in the urine samples. The light green bars in Fig. 1B , depicting ‘reads shared’ across more than one biotype or position in the genome, constitute ~54% of the reads from urine samples and ~3% of aligned reads in saliva samples. In urine, these reads are primarily shared between tRNA and piRNA. Supplementary Table S2 provides a complete list of each sample and the percentage of reads going to each RNA biotype. 
The RNA biotype that is most represented in plasma samples is YRNA, with a median of 63% of the reads mapped (IQR 52–73%). A more detailed analysis of YRNA fragments is discussed in a section below. Plasma samples have the highest percentage of reads assigned to miRNAs when compared to urine and saliva, with a median of 25% (IQR: 17% to 34%; Wilcoxon p value <1.5E-12). The urine samples have 2% of their reads assigned to miRNA (IQR of 1–4%) and the saliva samples have 1% of reads assigned (IQR 2–3%). A more complete statistical comparison of small RNA biotypes across and within biofluids can be found in Supplementary Table S3. The grey bar denoted as “unassigned” refers to reads that do not have a known gene annotation, such as intergenic or intronic regions of the genome. This category also includes reads that multi-map to more than 40 different annotations. Saliva samples have the highest percentage of reads (~93%) that are unassigned. Multiple alignments of short reads There are several challenges to small RNA sequence analysis that influence the final categorization of the reads into biotypes. Many short read sequences are shared by more than one RNA biotype, making it difficult, and often impossible, to resolve a single unique RNA classification for many of the reads. The substantial percentage of reads shared amongst different RNA biotypes, when mapped concurrently, motivated us to examine these analyses more thoroughly. As was described previously, we allowed the reads to align across biotypes simultaneously, after removing reads that aligned to rRNA and miRNA. Similar analysis recommendations are outlined by the NIH Extracellular RNA Communication Consortium (ERCC) in Freedman et al. 10 . We observed a substantial percentage of read sequences shared between tRNAs and piRNAs, particularly in the urine samples.
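The classification logic described above, in which a read aligning to several annotation sets at once is counted as "shared" rather than forced into a single biotype, can be sketched as follows. The function and the toy alignment results are illustrative assumptions, not the study's actual implementation.

```python
# Hedged sketch of biotype assignment after rRNA and miRNA removal: each read
# carries the set of biotype annotations it aligned to; unambiguous reads get
# that biotype, ambiguous reads are tallied as "shared", and reads with no
# annotation are "unassigned".

from collections import Counter

def classify(biotypes_hit):
    """Assign a read to a unique biotype, 'shared', or 'unassigned'."""
    hits = set(biotypes_hit)
    if not hits:
        return "unassigned"
    if len(hits) == 1:
        return hits.pop()
    return "shared"

# Invented alignment results for four reads.
reads = [["tRNA"], ["tRNA", "piRNA"], [], ["YRNA"]]
tally = Counter(classify(r) for r in reads)
```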
Figure 2A displays Venn diagrams for plasma, urine and saliva samples that depict the average RPM (reads per million, calculated from reads mapped to the human genome) for reads mapping only to tRNAs, only to piRNAs, and to both these RNA biotypes simultaneously, for all three biofluids. Evident from the Venn diagrams, plasma samples have the lowest proportion of read sequences shared between tRNAs and piRNAs, with 734 RPM on average shared between these two RNA biotypes. The saliva samples have on average 7,689 RPM mapping simultaneously to both tRNAs and piRNAs and 18,684 RPM mapping to tRNAs only. The urine samples have on average 417,700 RPM shared between tRNAs and piRNAs and an additional ~116,500 RPM aligning solely to tRNA fragments. Only 370 RPM align to piRNAs; the small RNA profile of the urine samples therefore appears to be dominated by tRNA fragments. Figure 2: Reads aligning to tRNA and piRNAs. ( A ) Read overlap between piRNA and tRNA. A large number of sequences detected by sequencing simultaneously align to both piRNA and tRNA. We assessed the distribution of reads per million mapped to the human genome and the numbers that were uniquely classified as piRNA, uniquely classified as tRNA, or overlapped between the two RNAs. In urine and saliva samples, there were few reads that exclusively mapped to piRNA. This does not rule out the presence of piRNA, but the origin of these sequences would have to be further investigated. ( B ) The upper and lower panels display the number of tRNA and YRNA fragments, normalized as reads per million mapped to the human genome (RPM), found in each biofluid.
Urine has very high levels of tRNA fragments compared to plasma and saliva, and the lower panel demonstrates that there are a large number of YRNA fragments found in plasma compared to urine and saliva. ( C and D ) These two panels display the lengths of the tRNA and YRNA fragments, respectively, identified in each biofluid, and their abundance. Full size image Based on the sequence of the piRNAs in piRBase and the mature tRNA sequences, we find that there are 133 piRNAs that share sequences with less than or equal to 2 mismatches. For example, in the urine samples, the top 5 piRNA sequences that share the largest number of reads with tRNAs are piR-hsa-1207, piR-hsa-28131, piR-hsa-24672, piR-hsa-5937, piR-hsa-5938. The sequences for the first two piRNAs differ by only one base (piR-hsa-1207: AGCATTGGTGGTTCAGTGGTAGAATTCTCGC, piR-hsa-28131: GGCATTGGTGGTTCAGTGGTAGAATTCTCGC), and overlap with the 5′ end of 10 Gly tRNAs at an average RPM of 411,839 (shared sequence: GCATTGGTGGTTCAGTGGTAGAATTCTCGC). The last 3 piRNAs differ by only 1–2 bases (piR-hsa-24672: TTCCCTGGTGGTCTAGTGGTTAGGATTCGGC, piR-hsa-5937: TCCCTGGTGGTCTAGTGGTTAGGATTCGGCA, piR-hsa-5938: TCCCTGGTGGTCTAGTGGTTAGGATTCGGCAC); these overlap with the 5′ end of 8 Glu tRNA sequences detected at an RPM of 4730. There were 26, 22, and 25 piRNA sequences that did not overlap with other RNA biotypes, consistently detected in 80% of the plasma, urine, and saliva samples respectively. These piRNA sequences shared no overlap with tRNAs. Detailed tRNA fragments analysis The presence of tRNA fragments cleaved from mature tRNAs in cells as a response to stress is known 32, 33, 34, 35, 36, 37, 38. Recently, tRNA fragments (tRFs) have also been shown to be present in extracellular RNA in biofluids 15, 17, 19. Mature tRNAs contain a number of post-transcriptional modifications which hinder efficient sequencing by NGS 27, 28. A complete list of modifications can be found at the Modomics website.
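The sequence-overlap test discussed above, in which two equal-length small RNA sequences are treated as shared when they differ at no more than two positions, amounts to a Hamming-distance check. The sketch below uses the two Gly-associated piRNA sequences quoted in the text; the actual pipeline's alignment settings may be more involved than this.

```python
# Simple sketch of the <=2-mismatch overlap criterion between equal-length
# small RNA sequences (a plain Hamming-distance test).

def mismatches(a, b):
    """Count positional differences between two equal-length sequences."""
    if len(a) != len(b):
        raise ValueError("sequences must be the same length")
    return sum(x != y for x, y in zip(a, b))

def shared(a, b, max_mismatches=2):
    return mismatches(a, b) <= max_mismatches

# The two piRNA sequences quoted in the text differ by a single base.
piR_hsa_1207 = "AGCATTGGTGGTTCAGTGGTAGAATTCTCGC"
piR_hsa_28131 = "GGCATTGGTGGTTCAGTGGTAGAATTCTCGC"
is_shared = shared(piR_hsa_1207, piR_hsa_28131)
```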
Thus, the detection of tRNA fragments in sequenced samples can arise due to 2 main reasons: a) biologically induced tRNA cleavage, b) inability of the reverse transcriptase to process the entire tRNA strand due to the presence of modifications. Because we did not sequence these samples specifically with the removal of RNA modifications, we cannot distinguish between a) and b). Regardless, the tRNA fragments that were detected using our sequencing protocols in urine samples exceed those found in saliva and plasma (Wilcoxon p-value < 1.5E-15; Fig. 2B upper panel). The median tRF RPM for plasma, urine and saliva are 2,912, 624,387 and 22,103 respectively (Wilcoxon p-value < 2.2E-16). Figure 2C represents the length distribution of the tRF reads. In the plasma samples, there is a bimodal distribution of the reads with two clear peaks at 18 nts and 30–33 nts, the latter of which is found in the saliva samples. In the urine samples, there is a sizeable well-defined single peak at 30 nts. The presence of this peak at ~500,000 RPM is two and three orders of magnitude higher than the saliva and plasma samples respectively for their highest expressed tRF. The location of these tRFs was examined on the mature tRNA, whether it originates from the 5′ (tRF5), 3′ (tRF3) or neither end (tRFM, tRNA fragment middle). Table 1 summarizes the median percentage of reads originating from tRF5, tRF3 and tRFM. Figure S1A shows the percentages for individual tRNA fragments across all samples for all three biofluids as stacked bar plots. The percentages are calculated based on the total number of reads assigned to tRNAs, after the samples have been normalized for library size using median ratio normalization (DESeq2; ref. 39 ). Table 1 tRNA fragment analysis. Full size table Table 2 summarizes the top ten expressed tRFs by amino-acid type for the three biofluids with the median percentage assigned shown in brackets. The rest of the tRNAs are combined and referred to as “Other tRNAs”. 
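The tRF5/tRF3/tRFM classification used above, based on where a fragment's alignment falls on the mature tRNA, can be sketched as a positional rule. The coordinate convention and the one-base tolerance below are assumptions for illustration; the paper does not specify the exact boundary tolerance it used.

```python
# Illustrative sketch: a fragment anchored at the 5' end of the mature tRNA is
# tRF5, one anchored at the 3' end is tRF3, and anything else is tRFM
# (a middle fragment). Coordinates are 0-based, end-exclusive.

def classify_trf(start, end, trna_length, tol=1):
    """Classify a tRNA fragment by its alignment position on the mature tRNA."""
    if start <= tol:
        return "tRF5"
    if end >= trna_length - tol:
        return "tRF3"
    return "tRFM"

# A 30-nt fragment starting at position 0 of a 76-nt tRNA is a 5' fragment,
# matching the dominant ~30-nt tRF5 peak described for urine.
label = classify_trf(0, 30, 76)
```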
tRNAs belonging to the Gly-GCC family are the highest expressed in all three biofluids, followed by Glu-CTC in urine and Val-CAC in plasma and saliva. These are the sequences detected using conventional sequencing, and the presence of modifications may be masking other abundant tRNAs. Supplemental Figure S1B displays the percentage of reads assigned to different tRNAs based on the amino-acid type and the top 10 tRNA. Table 2 Percentage of reads assigned to the top 10 detected tRNA in each biofluid. Full size table Detailed YRNA fragments analysis According to ENSEMBL 75, there are 4 human YRNAs: RNY1, RNY3, RNY4, RNY5. There are an additional 52 transcripts which are pseudogenes based on the 4 human YRNAs, and a further 878 predicted YRNA transcripts make up the YRNA category. As mentioned before, ~63% of the reads assigned to the human genome in the plasma samples align to YRNAs. The presence of YRNA fragments has been reported previously 16, 17, 18, 19. Figure 2B (lower panel) illustrates that the proportion of reads aligning to YRFs in plasma exceeds that found in saliva and urine (Wilcoxon p-value < 1.3E-16). The median YRF RPM, based on reads mapped to the human genome, for plasma, urine and saliva are 629,023, 6,224 and 3,735 respectively. Figure 2D displays the length distribution of the YRF reads. Unlike the tRFs, the YRFs have a unimodal distribution with a single large peak at 32 nts in all three biofluids. The majority of the YRNA fragments originate from the 5′ end. Table 3 summarizes the median percentage of reads arising from the YRF5, YRF3 or YRFM (middle) for all three biofluids. The percentages are calculated based on the total number of reads assigned to YRNAs, after the samples have been normalized for library size by the median ratio method of normalization. The most abundant YRF originates from the 5′ end of RNY4.
It is responsible for a median 93%, 97% and 84% of the RPM assigned to YRFs in the plasma, urine and saliva samples respectively. The length of the most abundant YRF is 32, which maps to 9 YRNA annotations in ENSEMBL 75- RNY4, 2 YRNA pseudogenes, and 6 genes predicted by RFam. Table 3 YRNA fragments originating from the 5′, 3′, or middle section of the YRNA sequence. Full size table Detailed miRNA analysis As most laboratories focus their analysis on the miRNA contents of biofluids, we examined characteristics of this small RNA biotype in greater detail. A principal component analysis (PCA) of the miRNAs from each biofluid demonstrates that samples cluster primarily by biofluid type, Fig. 3A . The top ten miRNAs with the highest absolute loadings for the first principal component (PC1) were hsa-miR-30a-5p, hsa-miR-1273h-3p, hsa-miR-30a-3p, hsa-miR-30c-2–3p, hsa-miR-10b-5p, hsa-miR-199a-5p, hsa-miR-204-5p, hsa-miR-4433b-5p, hsa-miR-6852-5p and hsa-miR-126-3p. The top ten miRNAs with the highest absolute loadings for the second principal component (PC2) were hsa-miR-320a, hsa-miR-26b-5p, hsa-miR-421, hsa-miR-29a-3p, hsa-miR-450b-5p, hsa-miR-155-5p, hsa-miR-26a-5p, hsa-miR-30c-5p, hsa-miR-32-5p and hsa-miR-361-5p. The first and second principal components cumulatively explain ~29% of the variance in all the samples. Figure 3: Distribution of miRNAs in biofluids. Panel A is a principal components analysis of the miRNAs detected in each biofluid. Each biofluid has a distinct miRNA pattern. Panel B displays the miRNAs detected in each biofluid with >10 or >50 counts in at least 80% of the samples. There are only a handful of miRNAs uniquely detected in urine and saliva at this level of expression. Most miRNAs can be detected in plasma. ( C ) shows the number of detected miRNAs at 1 count, 10 counts or 50 counts, as a function of input reads. ( D ) shows the number of reads mapped to the human genome as a function of the input reads. 
Saliva samples require larger numbers of input reads to achieve the same numbers of reads aligned to the genome as plasma and urine. Urine samples behave similarly to plasma samples with respect to input reads that map to the genome ( D ), but have fewer miRNAs detected ( C ). Full size image We examined the most robust miRNAs for each biofluid, requiring detection with >10 or >50 read counts in 80% of the samples for each biofluid. Most miRNAs are detectable in plasma samples, a few unique miRNAs are detectable in saliva and urine ( Fig. 3B ). Summarized in Table 4 are the number of miRNAs detected with at least 10 or 50 counts and the number of samples in which they are found for each biofluid. From Table 4 , there are 975 miRNAs detected in at least one of the sequenced plasma samples with >10 counts. If we examine miRNAs that are consistently detected in plasma samples, we find 329 miRNAs expressed >10 counts in 50% of the samples and only 98 miRNA detected in 100% of the plasma samples. 545 miRNAs are detected in at least one urine sample, and 122 miRNAs are identified at >10 counts in 50% of samples, and 25 miRNAs with >10 counts were found in 100% of the samples. 336 miRNAs were detected at least once in a saliva sample with >10 counts, and 141 and 69 miRNAs were detected with >10 counts in 50% and 100% of saliva samples. There are surprisingly few miRNAs consistently detected in all samples. Supplementary Table 4 summarizes the miRNAs detected in all samples. In this table, we describe the number of samples in which each miRNA was detected, for each biofluid. We also display the level of expression for that miRNA in each biofluid. Table 4 Number of miRNAs detected in each biofluid. Full size table Limit of detection of miRNAs The analysis for detection of miRNAs in Table 4 depends on read depth and the complexity of miRNAs and other small RNAs in the sample. 
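The "consistently detected" tallies above (e.g. miRNAs with >10 counts in 50%, 80%, or 100% of samples) follow a simple two-threshold rule, sketched below. The function name and toy count matrix are illustrative, not from the study's pipeline.

```python
# Hedged sketch: a miRNA counts as detected in a sample if its read count
# exceeds a count threshold, and as consistently detected if that happens in
# at least a required fraction of the samples.

def consistent_mirnas(counts, min_count=10, min_fraction=0.8):
    """counts: dict mapping miRNA name -> list of per-sample read counts."""
    result = []
    for mirna, per_sample in counts.items():
        detected = sum(c > min_count for c in per_sample)
        if detected / len(per_sample) >= min_fraction:
            result.append(mirna)
    return result

# Invented counts across five samples of one biofluid.
counts = {
    "hsa-miR-30a-5p": [120, 95, 40, 300, 88],   # detected in 5/5 samples
    "hsa-miR-1246":   [0, 5, 200, 0, 2],        # detected in 1/5 samples
}
kept = consistent_mirnas(counts)
```

Varying `min_count` and `min_fraction` reproduces the pattern in Table 4: raising either threshold sharply shrinks the set of miRNAs that qualify.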
Each library loaded onto the sequencer has some variability in read depth, and the amount of rRNA and other RNA biotypes present in the sample can alter the number of reads that align to the genome and to miRNA. Using some of the libraries loaded with higher than expected read depth, we can calculate how many miRNAs are detected with increasing library size. Figure 3C displays the number of miRNAs observed as a function of input reads. The samples are binned by one million read increments. The median number of detected miRNAs for each bin with greater than 1, 10 and 50 read counts are plotted. As expected, the solid line depicts the increase in the number of miRNAs detected as a logarithmic function with respect to the sequencing depth. For the plasma samples, an increase in the sequencing depth from 10–11 million reads to greater than 20 million reads adds another 64 miRNAs that are detected with at least 50 read counts. However, for the urine samples, a commensurate increase in sequencing depth adds only 17 miRNAs detected with 50 counts. For saliva, doubling the sequencing depth from 4–6 million reads to greater than 10 million reads adds 78 miRNAs detected with at least 50 counts. Saliva samples require many more input reads to achieve meaningful levels of reads mapped to the human genome, and therefore to the detection of miRNAs ( Fig. 3D ). miRNAs with highest and lowest CVs We next wanted to assess how similar the samples within each biofluid were to each other, and if the samples were taken from the same individual, were they more similar than when compared to the samples from the whole group. We examined data from individuals that provided more than 5 samples over time (11 individuals provided >5 plasma samples and 5 individuals provided >5 urine samples). It should be noted, the samples obtained from an individual were not equally distributed in time, a collection period could span 69 weeks. 
We did not have >5 saliva samples sequenced from any individuals, and therefore did not include saliva in this analysis. The miRNA read counts for all samples were normalized for library size using the median ratio method. The coefficient of variation (CV) was calculated to assess the dispersion of miRNA expression among all of the samples sequenced, and among multiple samples collected from the same individuals. In this analysis we included only well-expressed miRNAs (>50 read counts and detected in at least 80% of the samples). The box plots in Fig. 4A display the distribution of miRNA CV values for each individual sampled at least 5 times. The distribution of the miRNA CVs for the “All samples” boxplot takes into account one sample per individual. The boxplots for the individuals depict intra-individual variance and the last boxplot for “All samples” represents inter-individual variance. For urine, the intra-individual miRNA variation is significantly less than the inter-individual variation. A two-sided Wilcoxon rank sum test reveals that this difference is statistically significant, with p-values < 0.05 for all individuals and p-value < 0.001 for individuals 1, 2, 3 and 4 (see Fig. 4A ). miRNAs detected in the plasma samples had a wider range of variability, with higher (star) and lower (asterisk) CV than the samples from all individuals. Figure 4: Coefficient of variation for multiple samples taken from the same individual compared with the coefficient of variation from all subjects. ( A ) displays the distribution of CVs calculated for each miRNA detected at >50 counts in at least 80% of the plasma and urine samples. Some of the individual subjects (1–11) that provided more than 5 samples for sequencing over the course of ~70 weeks show a closer distribution of CVs when compared to samples from all subjects (asterisk), and some of the subjects with >5 samples display a higher CV than when examining all subjects at once (star).
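The two steps behind the dispersion analysis above, median-ratio (DESeq2-style) library-size normalization followed by a coefficient of variation, can be sketched on a toy count matrix. The matrix and helper names are invented for illustration; this is not the study's code, and real pipelines handle zeros and edge cases more carefully.

```python
# Hedged sketch of median-of-ratios size-factor normalization (the method
# DESeq2 uses) followed by CV = sd / mean, on an invented 2-sample x 3-gene
# count matrix where the second library is exactly twice as deep.

import math
from statistics import median, pstdev, mean

def size_factors(matrix):
    """matrix: list of per-sample count lists (samples x genes)."""
    n_genes = len(matrix[0])
    # Pseudo-reference: geometric mean of each gene across samples
    # (genes with any zero count are excluded, as in DESeq2).
    ref = [math.exp(mean(math.log(s[g]) for s in matrix))
           if all(s[g] > 0 for s in matrix) else None
           for g in range(n_genes)]
    # Each sample's size factor is the median of its count/reference ratios.
    return [median(s[g] / ref[g] for g in range(n_genes) if ref[g] is not None)
            for s in matrix]

def cv(values):
    return pstdev(values) / mean(values)

counts = [[100, 200, 50], [200, 400, 100]]
factors = size_factors(counts)
normalized = [[c / f for c in s] for s, f in zip(counts, factors)]
gene0_cv = cv([s[0] for s in normalized])
```

After normalization, the purely depth-driven doubling disappears, so the CV for each gene in this toy example is essentially zero; real intra- and inter-individual CVs reflect whatever biological and technical variation remains.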
Distribution of miRNA counts for the 15 miRNAs with the lowest CV ( B ) and the highest CV ( C ) in each biofluid. At this time, we are unable to determine if the CVs are related to biological variability or to technical variability. Full size image Figure 4B,C display the 15 miRNAs with the lowest CVs (4B) and highest CVs (4C) for all three biofluids across all samples sequenced. miRNAs that were in common between at least two biofluids are bolded and underlined, only one miRNA was highlighted for all biofluids. miR-1246 had a high CV in all biofluids. Box plots in Supplementary Figure S2 display the top 15 miRNAs with the lowest CV and highest CV for individuals (5 miRNAs overlap in the analysis of the lowest CVs and 5 miRNAs overlap in the calculated highest CVs). Detailed exogenous RNA analysis We did not directly target exogenous RNA sequences in our samples. However, we assessed the potential bacterial content in the samples using GOTTCHA; Genomic Origin Through Taxonomic CHAllenge 40 . GOTTCHA is a profiling tool that uses read-based metagenome characterization following a hierarchical collection of exclusive signatures at multiple taxonomic levels such as strain-level, species, genus, family, order, class and phylum. Owing to similarity in genomic regions, such as 16S rRNA, coding regions and other highly conserved regions, metagenomic identification typically yields results with a high false discovery rate (FDR). We used the precompiled bacterial database available at ftp://ftp.lanl.gov/public/genome/gottcha/ consisting of unique species-level genomic signatures that was produced by eliminating shared 24-mer (k24) sequences from 4937 bacterial replicons (includes both chromosomes and plasmids) and the human genome, while retaining a minimum of 24 bp of unique fragments. When mapped to the species-level database, the reads are rolled up to the next higher taxonomic orders, which are genus, family, order, class and phylum. 
Further elimination of false positives entails that the organism discovered must have a minimum of 100 non-overlapping bases covering the unique genomic signature, a coverage of at least 0.5% and at least 10 hits to the unique signature. These stringent requirements allow for the identification of bacteria in the biofluid samples with a potentially low false positive rate. Out of the 161 plasma exRNA samples sequenced, only 9 samples had at least one significant species of bacteria detected. Only two species of bacteria were seen in 4 or more samples: Candidatus Tremblaya princeps and Brucella melitensis . One sample had 315 species of bacteria present, with the highest read counts going to Desulfovibrio vulgaris, Pyrococcus furiosus, Singulisphaera acidiphila, Rhodopirellula baltica, Asticcacaulis excentricus, Synechococcus sp. WH 8102 and Desulfobacca acetoxidans . 66 out of the 159 urine exRNA samples had at least one significant species of bacteria detected. Escherichia coli was seen in 7 samples, and only three species of bacteria were seen in >25% of the 66 samples: Achromobacter xylosoxidans, Gardnerella vaginalis and Streptococcus pneumoniae , with 22, 20 and 17 samples respectively. One sample had 1433 species of bacteria present, with the highest read counts going to Prevotella intermedia, Singulisphaera acidiphila, Micrococcus luteus, Arthrobacter phenanthrenivorans and Amycolicicoccus subflavus . A large number of bacterial species are seen in the saliva samples. There are 110 bacterial species seen in at least 22 of the 30 saliva samples (75% of the saliva samples sequenced). The top ten most highly detected bacteria are: Rothia mucilaginosa, Prevotella melaninogenica, Dyadobacter fermentans, Streptococcus salivarius, Methanolacinia petrolearia, Bacillus megaterium, Anaerobaculum mobile, Singulisphaera acidiphila and Rothia dentocariosa .
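The three-part false-positive filter described above (at least 100 non-overlapping signature bases covered, at least 0.5% coverage, and at least 10 hits) can be sketched as a predicate over per-species results. The field names and example records below are illustrative assumptions; GOTTCHA's actual output format may differ.

```python
# Hedged sketch of the false-positive filter applied to species-level calls:
# a call is kept only if it clears all three thresholds from the text.

def keep_species(hit):
    """hit: dict with covered_bases, coverage (fraction), and hit count."""
    return (hit["covered_bases"] >= 100
            and hit["coverage"] >= 0.005
            and hit["hits"] >= 10)

# Invented example records: one strong call and one spurious call.
calls = [
    {"species": "Rothia mucilaginosa", "covered_bases": 5000,
     "coverage": 0.12, "hits": 800},
    {"species": "spurious hit", "covered_bases": 40,
     "coverage": 0.001, "hits": 3},
]
significant = [c["species"] for c in calls if keep_species(c)]
```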
In the saliva samples, an average of ~45.5% of the reads mapped uniquely to bacterial species (after adapter trimming and removing reads that were <15 nts). Table 5 provides a snapshot of the species most commonly detected and in what number of samples. Supplementary Table S3 has a more thorough analysis of the bacterial species detected using GOTTCHA. Table 5 List of exogenous species detected by GOTTCHA. Discussion This is the largest dataset to date examining extracellular RNA expression and the presence of different RNAs. As we are still deciphering what quality metrics differentiate a good sample from a bad sample – and what should be expected from extracellular RNA samples – more data sets are required. Different isolation, sequencing, and analysis tools will need to be applied to large data sets so that we can accurately distinguish true signature from technical artifact. In addition, more data exploring a significant range of age and gender will be important. The Extracellular RNA Communication Consortium (ERCC) is actively developing an extensive atlas of normal extracellular RNAs from biofluids, and from several diseases and injuries. The dataset described here will be deposited in the atlas where it can be compared with other samples examining extracellular RNAs (the data can also be found in dbGaP, accession # phs001258.v1.p1). We analyzed this data for several key abundant RNA biotypes; however, there are several more biotypes that could be explored using this data 9 , 10 . Each biofluid appears to have clear differences in extracellular RNA expression profiles. For example, there appears to be a high proportion of tRNA fragments in urine samples, when compared with other RNA biotypes. In particular, there is one fragment that has very high read counts. It would be interesting to know if the tissues in closest proximity to urine (kidney, bladder, adrenal gland) had very high levels of this tRNA or the fragment.
Or, perhaps the increased proportion of the fragments in urine is due to some filtering mechanism for that biofluid. Better small RNA tissue atlases that include more comprehensive profiles of the small RNA species will be necessary to help answer these questions. The number of unique sequences that align exclusively to piRNAs in urine and saliva samples is very low. It is not possible to say that there are no piRNAs in these biofluids, but their origin and overlap with other RNA biotypes will have to be carefully examined. We assessed YRNAs, which accounted for ~63% of the reads mapping to the genome for plasma samples. It is unclear what role these 5′ YRNA fragments play in normal healthy individuals, but they are found in significant abundance. The Gingeras laboratory recently observed that RNY5 fragments, found in vesicles isolated from cancer cells, could trigger cell death 18 . Urine (0.7%) and saliva (0.5%) samples did not show such high levels of YRNA fragments. miRNAs are the most diverse RNA biotype found in biofluid samples. Most of the miRNAs could be detected in plasma, with few unique miRNAs in urine and saliva. While almost 1000 miRNAs could be detected in all plasma samples, if we required higher stringency – that they be detected in most samples (80%) with at least modest expression levels (>10 counts) – the number of miRNAs in the analysis went down dramatically. Because sequencing does pick up such large numbers of other small RNAs (YRNA, tRNA, etc.), a more targeted approach to miRNA detection may reveal a larger number of miRNAs consistently expressed. Not surprisingly, saliva had a large number of reads going to bacterial species. The largest category of reads aligned to the human genome for the saliva samples was "unassigned", meaning that most reads were intergenic, intronic or were sequences that mapped to >40 places in the genome and could not be assigned to any location.
More reads actually aligned to bacterial species than to the human genome. We used an algorithm, GOTTCHA, to detect bacteria in our samples 40 . Urine samples had some bacterial species identified, and only a handful of samples had a large diversity of bacteria detected. This analysis also showed that few plasma samples had detectable levels of bacteria. There was one plasma sample that had 315 bacterial species detected at low levels, likely due to contamination. In future experiments, additional negative controls should be examined to verify that the bacterial species came from the biofluid samples, and not from contamination from subject skin, collection process, isolation or kit preparations. As we examine extracellular RNA profiles in biofluids, we are looking for consistent RNAs that can be detected with confidence in each biofluid, as well as normal levels of expression for comparison to disease and injury. We found that miRNAs from urine that were assessed more than 5 times from the same individuals over a year's time frame were closer together than when examining the miRNA dispersion in samples taken from all subjects. This was not the case for plasma samples, where some individuals had higher and lower variability over time than when compared to all subjects. This may indicate that establishing a baseline for individuals when they are healthy may provide the most meaningful comparisons when exploring early indicators of disease, severity, or outcome. While we cannot determine at this time if the lowest and highest CVs are due to technical or biological variability, we believe this is worth keeping track of as more large datasets become available. For example, miR-1246 was found to have one of the highest CVs in each biofluid. Does this miRNA reflect rapid turnover and changes in response to biological events, or significant technical variability due to sample collection, handling or sequencing?
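The CV-based ranking discussed here (CV = standard deviation / mean, computed per miRNA across samples, then sorted) is straightforward to reproduce. A minimal sketch; the miRNA names and counts below are invented for illustration:

```python
import statistics

def coefficient_of_variation(counts):
    """CV = standard deviation / mean, expressed as a percentage."""
    mean = statistics.mean(counts)
    return statistics.stdev(counts) / mean * 100 if mean else float("inf")

def rank_by_cv(count_table, n=15):
    """Return the n miRNA names with the lowest and the highest CV.

    count_table maps miRNA name -> list of normalized read counts,
    one entry per sample.
    """
    cvs = {name: coefficient_of_variation(c) for name, c in count_table.items()}
    ordered = sorted(cvs, key=cvs.get)          # ascending CV
    return ordered[:n], ordered[::-1][:n]       # (lowest, highest)

# Hypothetical counts for three miRNAs across four samples:
table = {
    "miR-stable":   [100, 102, 98, 101],
    "miR-variable": [10, 200, 5, 90],
    "miR-mid":      [50, 60, 40, 55],
}
lowest, highest = rank_by_cv(table, n=1)
```

With real data, `count_table` would hold one column per sequenced sample, and `n=15` reproduces the panel sizes shown in Figure 4B,C.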
As more large datasets examining the full profile of extracellular RNAs found in biofluids emerge, it will be important to learn what variables alter the detection of RNAs. As more individuals share their samples, analysis and classification schemes, other researchers can improve upon the methods to increase accuracy and remove variability from the analysis. Therefore, we have tried to provide the most comprehensive profile of the most abundant RNA species detected in our samples using current tools and databases. This information will be essential as the field moves toward using these expression changes for the detection of health, disease, and injury. Materials and Methods Samples Samples were collected from male college athletes ages 18–25. All human subjects provided written informed consent prior to enrollment. All samples were collected with consent and approval from the Western Institutional Review Board (WIRB), study ID# 1307009395. Small RNA profiling experiments from human samples were performed at the Translational Genomics Research Institute (TGen) in accordance with the regulations and proper approval from WIRB. Blood samples were collected in EDTA tubes and placed in a cooler with ice packs until they were transported from Arizona State University (ASU) to TGen, within 2–3 hours of blood draw. Samples were spun down at 2500 RPM for 10 minutes at 4 °C. Plasma was aliquoted in 1 mL volumes into 2 mL RNase/DNase free microcentrifuge tubes (VWR), and stored at −80 °C. Urine was collected in sterile cups and placed in a cooler with ice, and transported to TGen within 2–3 hours of collection. Samples were spun at 3000 RPM for 10 minutes at 4 °C and aliquoted as 15 mL into a 50 mL conical tube for storage at −80 °C. Saliva samples were collected by allowing passive drool to collect and then spitting into a 50 mL conical tube. The sample was spun at 3000 RPM for 10 minutes at 4 °C, aliquoted as 1 mL volumes into 2 mL microcentrifuge tubes and stored at −80 °C.
RNA Isolation For all plasma and saliva samples we isolated 1 mL of biofluid. Samples were isolated using the mirVana miRNA Isolation Kit (ThermoFisher Scientific, AM1560) according to Burgos et al., 2013 41 . Samples were DNase treated using the TURBO DNA-free Kit (ThermoFisher Scientific, AM1907). Because of residual phenol/chloroform, samples were then cleaned and concentrated using the Zymo RNA Clean and Concentrator (Zymo Research, R1016) following the protocol "Purification of small and large RNAs into separate fractions", combining the fractions at the end. All urine samples (15 mL) were isolated using the Urine Total RNA Purification Maxi Kit, Slurry Format (Norgen Biotek Corp., Cat# 29600). Samples were DNase treated on column using the RNase-Free DNase Set (Qiagen, Cat# 79254). Because there was no residual phenol/chloroform, samples were concentrated by speed vacuum. Sequencing All sequencing data is available through the ERCC exRNA Atlas and through accession number phs001258.v1.p1 in dbGaP. The plasma, saliva and urine RNA were quantified in triplicate using the Quant-iT RiboGreen RNA Assay Kit, Low-Range protocol (R11490; ThermoFisher). The Illumina small RNA TruSeq kit (RS-200-0048; Illumina) was used for sequencing all samples. RNA input for plasma and saliva was 10–20 ng for all samples and the RNA input for urine was 30 ng for all samples. The reagents from the Illumina TruSeq kit were halved, as in Burgos et al., 2013. Each sample was assigned one of 48 possible indices. We used 16 PCR cycles for all samples. Indexed samples were run on a gel and purified away from the adaptor band. The samples were then pooled and placed on Illumina V3 single-read flowcells (GD-401-3001; Illumina). The average read counts at each nucleotide length for each biofluid are displayed in Supplementary Figure S3 .
RNA-Seq data analysis The raw sequence image files from the Illumina HiSeq 2500, in the form of bcl files, are converted to the fastq format using bcl2fastq v1.8.4 and checked for quality to ensure the quality scores do not deteriorate drastically at the read ends. The adapters at the 3′ end are clipped using cutadapt v1.10. After adapter trimming, reads shorter than 15 nts are discarded and 3′ bases below a quality score of 30 are trimmed as well. All subsequent steps are carried out using sRNABench, which provides an elegant framework to map the reads to various RNA libraries using Bowtie1 to perform the alignments. The reads are first mapped to human rRNA sequences obtained from NCBI and those that map are removed from analysis. The algorithm used in sRNABench is based on miRanalyzer 42 , where reads with the same sequence are collapsed and mapped to the human genome and miRNA database. A single base mismatch and a seed length of 19 nts are used for this step. The reads that remain after mapping to the miRNA database are then mapped to mature tRNAs, piRNAs and all other RNAs in ENSEMBL 75. Here, no mismatch is allowed and each read is allowed to multimap to at most 40 RNA annotations. Table 6 provides the list of libraries used and their versions. Table 6 Database of RNA biotypes used. Analysis of the tRNA fragments (tRFs) All reads that map to mature tRNAs are used for the analysis. The reads, based on their sequence, are stacked as shown in Table 7 and the read counts for all reads that share the same 5′ or 3′ end are added up for that particular tRNA. For example, the following fragments arise from the 5′ end of mature tRNA GluCTC. There are 8 mature GluCTC tRNAs with identical 5′ ends. These fragments are now "collapsed" to their longest sequence and all read counts are added up (bottom row). Table 7 Example for the collapse of tRNA sequences.
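The collapse illustrated in Table 7 (keep the longest sequence among fragments sharing a 5′ end, sum the read counts) can be sketched as follows; the sequences and counts below are invented for illustration, not the actual GluCTC fragments:

```python
def collapse_5prime(fragments):
    """Collapse fragments that share the same 5' end: keep the longest
    sequence and sum all read counts (the "bottom row" of Table 7).

    fragments: list of (sequence, read_count) pairs in which every
    sequence starts at the same 5' position of the mature tRNA, so each
    shorter sequence is a prefix of the longest one.
    """
    longest = max(fragments, key=lambda f: len(f[0]))[0]
    # Sanity check that the fragments really do share a 5' end.
    assert all(longest.startswith(seq) for seq, _ in fragments)
    return longest, sum(count for _, count in fragments)

# Invented 5' fragments of a hypothetical tRNA:
frags = [("TCCCTGGTGG", 120), ("TCCCTGG", 30), ("TCCCTGGTGGTCTAG", 5)]
seq, reads = collapse_5prime(frags)
```

The analogous 3′ collapse would use `endswith` instead of `startswith`; applying one or the other per tRNA isoacceptor yields the per-fragment count table used downstream.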
The fragments that arise from the 5′ and the 3′ end of the mature tRNAs are termed tRF5s and tRF3s, respectively. Fragments that arise from neither of these two ends are termed tRFMs (tRF middle). This gives rise to a list of fragments with unique sequences for which the source on the mature tRNA (tRF5, tRF3 or tRFM), the length distribution and the type of tRNA fragment (ValCAC, GlyGCC, etc.) are now known. Analysis of YRNA fragments Analysis was carried out in the same manner as the tRNA analysis. Analysis of the exogenous bacterial species The precompiled bacterial database available at ftp://ftp.lanl.gov/public/genome/gottcha/ , consisting of unique species-level genomic signatures produced by eliminating shared 24-mer (k24) sequences from 4937 bacterial replicons (including both chromosomes and plasmids) and the human genome, while retaining a minimum of 24 bp of unique fragments, was used. All the parameters used for filtering out false positives were the default parameters in GOTTCHA. Additional Information How to cite this article: Yeri, A. et al . Total Extracellular Small RNA Profiles from Plasma, Saliva, and Urine of Healthy Subjects. Sci. Rep. 7 , 44061; doi: 10.1038/srep44061 (2017). Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Following a three-year study of the Arizona State University football program, researchers at the Translational Genomics Research Institute (TGen) have created the largest dataset to date of extracellular small RNAs, which are potential biomarkers for diagnosing medical conditions, including concussions. Details of the dataset were published today in Scientific Reports, an online open-access journal of the Nature Publishing Group. The study amassed a collection of biomarkers from the ASU student-athletes' biofluids: blood, urine and saliva. A portion of that information will be used with data from helmet sensors that recorded the number, intensity and direction of head impacts during games and practices from the 2013-16 football teams. TGen researchers are using that combined data to potentially develop new diagnostic and therapeutic tools. "Large datasets—examining different biofluids, isolation methods, detection platforms and analysis tools—are important to further our understanding of the extent and types of extracellular materials present when someone is injured or develops disease," said Dr. Kendall Van Keuren-Jensen, TGen Associate Professor of Neurogenomics and Co-Director of TGen's Center for Noninvasive Diagnostics, and one of the study's senior authors. "Concussion safety, protocol and diagnostics are key components of Sun Devil Athletics' student-athlete welfare program," said Ray Anderson, ASU Vice President for University Athletics. "Our partnership with TGen and the research conducted with these biomarkers will ideally provide doctors, trainers and administrators with a mechanism to proactively safeguard the health of our student-athletes. We are proud and excited to be a part of this groundbreaking study that will significantly expand research in this important area of scientific discovery." 
Because the data are being published in an open access journal, they are available to aid other researchers studying how to develop tests for the detection and extent of injuries involving everything from automobile accidents to battlefield explosions. Sensors in the ASU student-athlete football helmets were wirelessly connected to a field-level computer as part of the Sideline Response System—a head impact monitoring and research tool developed and deployed by Riddell, a leading provider of helmets to the NFL and major college football teams. "Riddell is pleased to be engaged with TGen on its important research as it has great potential to help the scientific community worldwide in the development of new breakthroughs, particularly in the area of brain health," said Dan Arment, President and Chief Executive Officer of Riddell, the industry leader in football helmet technology and innovation. TGen researchers used advanced genomic sequencing to identify the biomarkers of extracellular RNA (exRNA), strands of genetic material that are released from cells, and which can be detected in biofluids. TGen sequenced, or spelled out, the chemical letters that make up these biomarkers from 183 blood samples, 204 urine samples and 46 saliva samples derived from 55 consenting student-athletes, ages 18-25. "The small RNA profile of each biofluid is distinct," the study said. "These data significantly contribute to the current number of sequenced exRNA samples from young healthy individuals." By identifying biofluids associated with healthy individuals, researchers hope to use these as standards for assessing disease and injury: "Establishing a baseline for individuals when they are healthy may provide the most meaningful comparisons when exploring early indicators of disease, severity or outcome," the study said.
"These data will help inform us about how best to develop additional tools to enrich and capture specific types of information," according to the paper, titled: "Total Extracellular Small RNA Profiles from Plasma, Saliva, and Urine of Healthy Subjects." "We have tried to provide the most comprehensive profile of the small RNA species detected in our samples," said Dr. Matt Huentelman, TGen Professor of Neurogenomics, and one of the study's lead authors. "This information may prove to be essential as the field moves toward using RNA expression changes for the detection of health, disease and injury."
10.1038/srep44061
Chemistry
Protein suggests a new strategy to thwart infection
Recognition of microbial glycans by human intelectin-1, DOI: 10.1038/nsmb.3053 Journal information: Nature Structural and Molecular Biology
http://dx.doi.org/10.1038/nsmb.3053
https://phys.org/news/2015-07-protein-strategy-thwart-infection.html
Abstract The glycans displayed on mammalian cells can differ markedly from those on microbes. Such differences could, in principle, be 'read' by carbohydrate-binding proteins, or lectins. We used glycan microarrays to show that human intelectin-1 (hIntL-1) does not bind known human glycan epitopes but does interact with multiple glycan epitopes found exclusively on microbes: β-linked D -galactofuranose (β-Gal f ), D -phosphoglycerol–modified glycans, heptoses, D - glycero- D - talo -oct-2-ulosonic acid (KO) and 3-deoxy- D- manno -oct-2-ulosonic acid (KDO). The 1.6-Å-resolution crystal structure of hIntL-1 complexed with β-Gal f revealed that hIntL-1 uses a bound calcium ion to coordinate terminal exocyclic 1,2-diols. N -acetylneuraminic acid (Neu5Ac), a sialic acid widespread in human glycans, has an exocyclic 1,2-diol but does not bind hIntL-1, probably owing to unfavorable steric and electronic effects. hIntL-1 marks only Streptococcus pneumoniae serotypes that display surface glycans with terminal 1,2-diol groups. This ligand selectivity suggests that hIntL-1 functions in microbial surveillance. Main Organisms that serve as hosts for microbes must distinguish microbial cells from those of their own 1 , 2 . A mechanism of differentiation is especially important at sites in which host tissues contact the environment, such as the lung, intestine and skin 3 , 4 . Differences in cellular surface glycosylation can serve as markers of a cell's identity—its developmental state, its tissue type or its being self or nonself 5 . Cell-surface glycans can be distinguished by carbohydrate-binding proteins, or lectins 6 , which are typically categorized on the basis of their monosaccharide selectivity 7 . These lectins can be exploited for host defense, as in the case of innate immune lectins, such as mannose-binding lectin 8 . 
In serum, mannose-binding lectin is precomplexed with mannose-binding lectin–associated serine proteases, and interaction of this complex with a cell surface results in activation of the lectin pathway of complement and ultimately leads to pathogen opsonization and clearance 9 , 10 . Other humoral lectins implicated in immunity include ficolins, collectins, galectins and HIP/PAP 1 , 11 , 12 , 13 . One group of lectins whose specificity has been unclear is the intelectins (IntLs). The first IntL protein was reported in Xenopus laevis oocytes 14 . Homologs have since been identified in many other chordates, including other amphibians, fishes and many mammals. IntLs belong to a family of lectins termed X-type lectins 15 and have been shown to exist as homo-oligomers of 35-kDa monomers. They are reported to function as calcium ion–dependent lectins; however, they do not contain the calcium-dependent C-type-lectin sequence motif 16 present in many human lectins. IntLs instead contain a fibrinogen-like domain (FBD, residues 37–82 in hIntL-1 (ref. 17 )) and have been proposed to be most similar to ficolins, a class of FBD-containing innate immune lectins 11 . Several observations have implicated IntLs in innate immunity. Mammalian IntLs are predominantly produced by lung and intestinal goblet cells, and intestinal Paneth cells 17 , 18 , 19 . In sheep and mice, IntL expression increases upon infection with intestinal parasitic nematodes 20 , 21 . In humans, the mucus induced by allergic reactions is enriched in IntLs 22 , 23 . Moreover, hIntL-1 has been reported to be the intestinal lactoferrin receptor 24 and to function as a tumor marker 25 . It also has been suggested to be involved in metabolic disorders including diabetes, in which it is known as omentin 26 . Given these diverse potential functions, we set out to examine the ligand specificity of hIntL-1. 
hIntL-1 has been reported to bind furanose residues (five-membered-ring saccharide isomers), including ribofuranose (Rib f ) and a β-Gal f –containing disaccharide 17 , 27 . The monosaccharide Gal f is present in the cell-surface glycans produced by a number of microbes, but the biosynthetic enzymes that mediate Gal f incorporation are absent in humans 28 , 29 , 30 . The presence of Gal f in microbial but not human glycans is an example of phylogenetic glycan differences 31 . This is just one example, and collectively the surface glycans of microbes are generated from more than 700 unique building blocks, whereas fewer than 35 carbohydrate residues are needed to assemble mammalian glycans 32 , 33 . In principle, targeting of monosaccharide residues unique to microbes could be used by the innate immune system to differentiate mammalian cells from microbes. We reasoned that clues to hIntL-1 function would emerge from determining the glycans to which hIntL-1 binds and the molecular basis for its recognition selectivity. Here, we use glycan microarrays to demonstrate that hIntL-1 preferentially binds microbial over human glycans. Given the diversity of microbial glycans, a lectin that binds a single microbial saccharide epitope (for example, galactofuranose) would be expected to have specialized functions. It is therefore striking that hIntL-1 does not engage a single monosaccharide or even related saccharides but instead interacts with multiple structurally divergent microbial monosaccharide residues. We have used X-ray crystallography to reveal the molecular mechanism by which hIntL-1 recognizes its targets: hIntL-1 binds its carbohydrate ligands through calcium ion–dependent coordination of a conserved exocyclic, terminal 1,2-diol. The functional-group selectivity observed in the glycan arrays is manifested in the context of cells because hIntL-1 targets S. pneumoniae serotypes that display its glycan ligands. 
Results hIntL-1 binds β-Gal f Native hIntL-1 has been shown to exist as a disulfide-linked trimer 17 , 27 . Therefore, we first developed a robust expression system that yields the protein as a disulfide-linked trimer that can be purified with an immobilized–β-Gal f column ( Supplementary Fig. 1a,b ). Because lectin-carbohydrate interactions often depend on multivalent binding 34 , 35 , we postulated that hIntL-1 trimers might bind avidly to multivalent carbohydrate displays. Hence, we evaluated hIntL-1 carbohydrate binding specificity by using immobilized biotinylated carbohydrates (β- D -Gal f , β- D -galactopyranose (β-Gal p ) and β- D -ribofuranose (β-Rib f )) in an enzyme-linked immunosorbent-like assay (ELISA) ( Fig. 1a and Supplementary Fig. 1c,d ). We chose the monosaccharide epitopes that we tested on the basis of a previous study in which a small carbohydrate panel was evaluated for inhibition of hIntL-1 binding to a polysaccharide immobilized on a polymeric resin 17 . In those studies, ribose was the most effective competitor (half-maximal inhibitory concentration (IC 50 ) <5 mM), and it was followed by Gal f- β(1,4)-GlcNAc (IC 50 of 9 mM), with galactose being less potent (IC 50 of 66 mM) 17 . Our data indicate that hIntL-1 does not bind ribofuranose or galactopyranose, but it does engage the β-Gal f –substituted surface avidly with a functional affinity (apparent affinity) of 85 ± 14 nM ( Fig. 1b ). Figure 1: hIntL-1 selectivity for monosaccharides. ( a ) Structures of saccharides used for characterization of hIntL-1 by ELISA and SPR. ( b ) Specificity of hIntL-1 binding to immobilized β-Gal f , β-ribofuranose (β-Rib f ) and β-galactopyranose (β-Gal p ), evaluated by ELISA (schematic in Supplementary Fig. 1b ). Data are shown as mean ± s.d. ( n = 3 technical replicates). Data were fit to a single-site binding equation (solid lines) and therefore represent the apparent affinity of trimeric hIntL-1.
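The single-site binding equation used for these fits is signal = Bmax·[L]/(Kd + [L]). A minimal sketch: the grid search and the synthetic titration below are stand-ins for the actual nonlinear regression and ELISA readings; only the 85 nM apparent Kd comes from the text, and Bmax = 1.0 is an assumed normalization:

```python
def single_site(conc, bmax, kd):
    """One-site specific binding: signal = Bmax * [L] / (Kd + [L])."""
    return bmax * conc / (kd + conc)

def fit_kd(concs, signals, bmax, kd_grid):
    """Crude grid search for the apparent Kd at a fixed Bmax; a stand-in
    for proper nonlinear least-squares fitting, for illustration only."""
    def sse(kd):
        return sum((single_site(c, bmax, kd) - s) ** 2
                   for c, s in zip(concs, signals))
    return min(kd_grid, key=sse)

# Synthetic titration generated with Kd = 85 nM (the apparent affinity
# reported above for trimeric hIntL-1 on immobilized beta-Galf):
kd_true, bmax = 85.0, 1.0
concs = [10, 25, 50, 100, 200, 400, 800]   # nM, illustrative points
signals = [single_site(c, bmax, kd_true) for c in concs]
kd_fit = fit_kd(concs, signals, bmax, kd_grid=range(1, 301))
```

A useful property of this equation: the signal is exactly half of Bmax when [L] equals Kd, which is how the apparent affinity is read off a saturation curve.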
Values for hIntL-1 bound to immobilized β-Gal f ( K d(apparent, trimer) ± s.d.) are 85 ± 14 nM or 8.0 ± 1.3 μg/ml. OD, optical density. ( c ) Representative real-time SPR sensorgrams (from three independent experiments) of hIntL-1 binding to immobilized carbohydrates. Biotin served as a control. (The complete SPR data set is in Supplementary Fig. 1e .) Our results contrast with those of the previous study 17 because we did not detect binding to the pyranose form of galactose or to ribofuranose. The apparent discrepancies could arise because inhibition was obtained in the previous investigation with high concentrations of free carbohydrate. Under those conditions, competition could arise from protein modification or from the less prevalent open-chain form of the saccharide. The apparent binding constant that we observed for hIntL-1 binding to immobilized β- D -Gal f suggests that the protein binds tightly to a ligand, but the previous IC 50 for the β- D -Gal f –containing disaccharide (9 mM) suggests the interaction is weak. This difference presumably stems from the distinct assay formats. We postulated that the presentation of glycosides from a surface is a more relevant assessment of hIntL-1 activity because it mimics key aspects of the multivalent display of carbohydrate ligands on a cell surface 34 . Nonetheless, the differences between the reported hIntL-1 binding specificities and those we observed prompted us to examine hIntL-1 binding with another assay. We used surface plasmon resonance (SPR) and monitored hIntL-1 interaction with surfaces to which the aforementioned saccharides or β- D -arabinofuranose (β-Ara f ) or α- L -rhamnopyranose (α- L -Rha) were appended. Even at high concentrations of hIntL-1, we observed only selective hIntL-1 binding to β-Gal f ( Fig. 1c and Supplementary Fig. 1e ). hIntL-1 binding to microbial glycans Glycan microarray technology can provide a more comprehensive assessment of hIntL-1 ligand recognition 36 .
Therefore, we prepared a focused array that included furanosides ( Supplementary Table 1 ), with the methods used to generate the Consortium for Functional Glycomics (CFG; ) mammalian glycan v5.1 array, and we tested both arrays for hIntL-1 binding. In the focused array, we included lacto- N -neotetraose (LNnT) and asialo, galactosylated biantennary N-linked glycan (NA2) to ascertain the efficiency of carbohydrate immobilization. Data from the focused array were consistent with those obtained from the ELISA and SPR assays, thus indicating that, of the carbohydrates displayed, hIntL-1 bound only to carbohydrates with β-Gal f residues ( Fig. 2a and Supplementary Table 1 ). We attribute the small amount of binding to β-Gal p to its hydrophobic, alkyl anomeric linker. In contrast to the furanoside array, the CFG v5.1 array yielded no validated interactions with mammalian glycans ( Fig. 2a ). Increasing the protein concentration yielded similarly low signals, a result suggesting that the modest residual binding that we detected arose from nonspecific interactions ( Supplementary Table 2 ). Thus, none of the human glycans examined are ligands of hIntL-1. Figure 2: Glycan selectivity of hIntL-1, assessed by glycan microarrays. ( a ) Recombinant hIntL-1 (50 μg/ml) binding to mammalian glycan microarray CFG v5.1 (left) and to a furanoside array (right). The concentrations given for the furanoside array represent those used in the carbohydrate-immobilization reaction. Data are shown as mean ± s.d. ( n = 4 technical replicates). (The full data set is in Supplementary Tables 1 and 2 .) RFU, relative fluorescence units. ( b ) Binding of recombinant Strep -tagged hIntL-1 (50 μg/ml) to microbial glycan array. Data are shown as mean ± s.d. ( n = 4 technical replicates). (Glycan array data organized by genus are in Supplementary Fig. 2a , and the full data set is in Supplementary Table 3 .) 
( c ) Structural representation of the putative key binding epitopes for hIntL-1 and the nonbinding N -acetylneuraminic acid (α-Neu5Ac). A terminal vicinal diol (red) is a common feature of α-Neu5Ac and all of the ligands identified. The initial binding data revealing that hIntL-1 robustly complexes β-Gal f residues but not human glycans prompted us to evaluate the lectin's specificity for a more diverse collection of microbial glycans. Though absent from mammals 28 , Gal f residues occur in glycans from a number of human pathogens, including the bacteria Mycobacterium tuberculosis and Klebsiella pneumoniae , and the fungus Aspergillus fumigatus 29 , 37 . The possibility that hIntL-1 interacts with microbial glycans was tested with a microarray displaying more than 300 oligosaccharides from bacterial species 38 . Screening of this array revealed multiple glycan ligands for hIntL-1 ( Fig. 2b , Supplementary Fig. 2a and Supplementary Table 3 ). These ligands comprised glycans from Gram-negative and Gram-positive bacteria, including S. pneumoniae , Proteus mirabilis , Proteus vulgaris , Yersinia pestis and K. pneumoniae ( Table 1 ). Four of the top 15 ligands contained terminal β-Gal f epitopes, including the outer polysaccharide from K. pneumoniae and a capsular polysaccharide from S. pneumoniae . Surprisingly, the majority of the glycans identified did not possess Gal f residues. The top five hits had saccharide residues with D -glycerol-1-phosphate substituents. This epitope was the common feature because the residue to which it was appended varied between glycans. Other common epitopes included either D/L - manno -heptose, KO or KDO residues ( Fig. 2c ). Each characterized glycan ligand from the top 15 hits contains at least one of the five aforementioned epitopes. Despite its ability to bind structurally diverse glycans, hIntL-1 exhibited selectivity.
Conspicuously missing from hit microbial glycan ligands were those containing α-Gal f residues ( Supplementary Fig. 2b ). What was especially notable, however, was that none of the hIntL-1 ligands that we identified on the microbial glycan array are found in mammalian glycans, but collectively these five residues are widely distributed in bacteria 32 . Table 1 Top 15 microbial glycan ligands, sorted by average fluorescence intensity Structure of hIntL-1 To understand the molecular mechanisms underlying glycan recognition by hIntL-1, we determined its structure by X-ray crystallography. Apo–hIntL-1 crystals diffracted to 1.8-Å resolution, and we solved the structure of the protein with molecular replacement by using the structure of a selenomethionine-labeled Xenopus laevis IntL as a search model ( Table 2 ) (PDB 4WMO ). hIntL-1 possesses an oblong, globular structure containing two highly twisted β-sheet–containing structures surrounded by seven short α-helices and extensive random-coil regions ( Fig. 3a ). The second of these β-sheet structures closes on itself to form a very short stretch of unusually flattened β-ribbons (amino acids 221–226 and 248–278). A Dali search 39 with the hIntL-1 structure yielded several weak fibrinogen and ficolin structure hits (r.m.s. deviation values of ∼ 4 Å). The secondary structures of L-ficolin 40 and hIntL-1 are related up to residue 150, although the sequence conservation is limited to the FBD. The remaining residues diverge substantially in sequence and structure ( Supplementary Fig. 3 ). Indeed, removal of the first 150 residues from the hIntL-1 Dali input yielded no hits. These data indicate that hIntL-1 has a composite fold not previously reported. Table 2 Data collection and refinement statistics Figure 3: Structure of hIntL-1 bound to allyl-β- D -Gal f . ( a ) Complex of hIntL-1 disulfide-linked trimer and allyl-β- D -Gal f .
Each monomer unit is depicted in green, wheat or gray; the β-allyl Gal f in black; calcium ions in green; the intermonomer disulfides in orange; and ordered water molecules in the binding site in red. The two orientations indicate the positioning of all three ligand-binding sites within the trimer. The trimeric structure is produced from chain A in the asymmetric unit by a three-fold crystallographic operation. ( b ) Stereo image of the carbohydrate-binding site. Residues involved in calcium coordination and ligand binding are noted. Dashed lines are included to show the heptavalent coordination of the calcium ion and to highlight functional groups important for ligand and calcium-ion binding. (Difference density map (F o – F c , 3σ) of the allyl-β-D-Gal f ligand is in Supplementary Fig. 4b.)

Two hIntL-1 monomers are present in the asymmetric unit (chain A and chain B), and they represent two similar, though nonidentical (Cα r.m.s. deviation of 0.65 Å), disulfide-linked trimers, each arranged around a crystallographic three-fold axis. In one trimer, the peptide chain that connects each monomer to the adjacent monomer is resolved, so that the intermolecular disulfide bond between residues C31 and C48 is apparent (Fig. 3a). These data are consistent with SDS-PAGE analysis indicating that hIntL-1 exists as a trimer. Each hIntL-1 monomer has three calcium ions, and each cation is chelated by hard protein or water ligands (bond distance 2.3–2.5 Å). Two of these cations are embedded within the protein, while one is surface exposed. To determine how hIntL-1 binds its ligands, we solved a structure of the complex of allyl-β-D-Gal f bound to hIntL-1, to 1.6-Å resolution. The Cα r.m.s. deviation between the asymmetric units of the apo and Gal f –bound structures (0.118 Å) suggested that no substantial structural changes occur upon ligand binding.
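The Cα r.m.s. deviation values quoted above summarize how closely two sets of corresponding atoms superimpose. For coordinates that are already superposed, the calculation reduces to a few lines; the sketch below is illustrative only (a real comparison first applies a least-squares alignment such as the Kabsch algorithm, and the coordinates shown are invented, not taken from the structures).

```python
import math

def rmsd(coords_a, coords_b):
    """Root-mean-square deviation between paired (x, y, z) coordinates.

    Assumes the two structures are already superposed; production code
    would first apply a least-squares (Kabsch) alignment.
    """
    if len(coords_a) != len(coords_b):
        raise ValueError("coordinate lists must pair one-to-one")
    sq = sum(
        (ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2
        for (ax, ay, az), (bx, by, bz) in zip(coords_a, coords_b)
    )
    return math.sqrt(sq / len(coords_a))

# Two short Cα traces offset by 1 Å along z give an r.m.s. deviation of exactly 1 Å.
a = [(0.0, 0.0, 0.0), (3.8, 0.0, 0.0), (7.6, 0.0, 0.0)]
b = [(x, y, z + 1.0) for (x, y, z) in a]
print(rmsd(a, b))  # → 1.0
```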
The Gal f O(5) and O(6) hydroxyl groups displace ordered water molecules and serve as coordinating ligands for the surface-accessible calcium ion. Protein side chains poised for hydrogen bonding (e.g., H263 to the Gal f O(6) hydroxyl group; Fig. 3b and Supplementary Fig. 4a) enhance calcium coordination. As they chelate the calcium, the carbohydrate vicinal exocyclic hydroxyl groups adopt a gauche conformation, with dihedral angles of 45° and 51° for chains A and B, respectively. As anticipated from the structure, glycans containing Gal f residues with substituents at either O(5) or O(6) fail to bind hIntL-1 (Fig. 2b and Supplementary Table 3). This portion of the saccharide also fits well into a binding pocket formed by W288 and Y297; the presence of these aromatic groups suggests that CH-π interactions contribute to affinity. The high resolution of the structure of the hIntL-1 complex allows unambiguous assignment of the β-Gal f ring conformation 41 , 42 in each monomer (Supplementary Fig. 4b). Using the Altona-Sundaralingam pseudorotational model 43 , we calculated the pseudorotational phase angle, P, of each furanoside to assign its conformation. In hIntL-1 chain A, the furanoside adopts a twist conformation with C 1 above the plane of the ring and the ring oxygen below; the torsion angles around the C 4 -C 5 bond are both gauche, and those around the C 5 -C 6 bond are gauche-trans, giving a 1 T O -gg-gt conformation (calculated P of 105°). In contrast, the β-Gal f shown in Figure 3b adopts the envelope 4 E-gg-gt conformation (calculated P of 57°) (Supplementary Fig. 4c,d). The presence of conformational differences within the structures is consistent with the flexibility of furanosides 42 .

Structural basis for hIntL-1 selectivity

The structure of the lectin–Gal f complex reveals why the acyclic 1,2-diol moiety is critical: the vicinal hydroxyl groups engage in calcium-ion coordination.
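The phase-angle assignment described above follows the Altona-Sundaralingam treatment, in which the five endocyclic torsion angles ν0–ν4 of a furanose ring collapse to a phase P and amplitude νmax. A minimal sketch of that arithmetic is below; the torsion convention (ν2 = νmax cos P) and the self-check values are assumptions of this illustration, and the published assignments come from the crystallographic coordinates, not from this code.

```python
import math

def pseudorotation(nu):
    """Altona-Sundaralingam phase angle P (degrees) and amplitude nu_max.

    `nu` holds the five endocyclic torsions nu0..nu4 in degrees, using the
    convention nu2 = nu_max * cos(P).
    """
    n0, n1, n2, n3, n4 = nu
    numerator = (n4 + n1) - (n3 + n0)
    denominator = 2.0 * n2 * (math.sin(math.radians(36.0)) + math.sin(math.radians(72.0)))
    p = math.degrees(math.atan2(numerator, denominator))
    if p < 0.0:
        p += 360.0
    nu_max = n2 / math.cos(math.radians(p))  # puckering amplitude
    return p, nu_max

# Self-consistency check: torsions generated for P = 105 deg, nu_max = 40 deg
# (nu_j = nu_max * cos(P + 144 * (j - 2))) are recovered by the formula.
nus = [40.0 * math.cos(math.radians(105.0 + 144.0 * (j - 2))) for j in range(5)]
print(pseudorotation(nus))  # ≈ (105.0, 40.0)
```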
However, other glycan properties contribute to hIntL-1 recognition. For example, hIntL-1 does not bind α-Gal f –substituted glycans (Supplementary Fig. 2b). A cursory assessment of the β-Gal f complex suggests that hIntL-1 might accommodate α-Gal f linkages. An alteration in anomeric configuration for furanosides, however, can drastically change conformational preferences. Although the low energetic barrier of furanoside ring pseudorotation complicates definitive analysis, experimental and computational studies of the isomeric methyl glycosides of D-Gal f have revealed that the anomers have dramatically different conformational preferences 42 . The β-Gal f 4 E-gg-gt conformer that we find in hIntL-1 chain B is predicted to be the second lowest in energy (0.4 kcal/mol) 42 . That conformation for methyl-α-Gal f is destabilized by 3.2 kcal/mol. As a result, the expected Boltzmann population for methyl-α-Gal f in a 4 E-gg-gt conformation is less than 0.2%, and it is thus ranked 25th out of the 90 conformations examined 42 . These data suggest that α-Gal f residues adopt a conformation incompatible with favorable hIntL-1 interactions. One of the most striking findings from the binding data is that the lectin failed to interact with any of the 148 α-Neu5Ac–containing glycans in the mammalian glycan array (Fig. 2a). A saccharide epitope widespread in human glycans, α-Neu5Ac has a terminal 1,2-diol and resembles KDO residues, which are common in microbial glycans and do function as hIntL-1 ligands 44 . We used a biotinylated glycoside to confirm that hIntL-1 fails to interact with surfaces displaying α-Neu5Ac (Supplementary Fig. 5a). Moreover, compounds identified as hIntL-1 ligands—glycerol and glycerol-1-phosphate—competitively inhibit the lectin from binding to β-Gal f , but methyl-α-mannopyranoside and methyl-α-Neu5Ac do not (Supplementary Fig. 5b).
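The population estimate above follows from Boltzmann statistics: a conformer destabilized by ΔE relative to another contributes exp(−ΔE/RT) to the ensemble. The sketch below reproduces only this two-state arithmetic at an assumed 298 K; it does not reproduce the full 90-conformer ensemble of ref. 42, so the numbers are illustrative.

```python
import math

RT = 0.593  # kcal/mol at ~298 K (assumed temperature)

def boltzmann_populations(energies_kcal):
    """Fractional populations for conformers with the given relative energies."""
    weights = [math.exp(-e / RT) for e in energies_kcal]
    total = sum(weights)
    return [w / total for w in weights]

# A conformer 3.2 kcal/mol above a 0.0 kcal/mol reference is >200-fold
# disfavored in this simple two-state comparison.
pops = boltzmann_populations([0.0, 3.2])
print(pops[1] / pops[0])  # ≈ exp(-3.2 / 0.593) ≈ 0.0045
```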
Together, these binding data indicate that hIntL-1 uses a single site to bind disparate, sterically unhindered 1,2-diol epitopes within microbial glycans, yet the lectin evades interaction with human carbohydrate epitopes. To understand the ability of hIntL-1 to discriminate between methyl-α-Neu5Ac and bacterial carboxylic acid–containing sugars such as KDO and KO, we docked methyl-α-Neu5Ac and methyl-α-KDO into the hIntL-1 structure. We found that the KDO glycoside is readily accommodated, but the α-Neu5Ac glycoside is not (Fig. 4a,b). Anion-anion repulsion between the α-Neu5Ac anomeric exocyclic carboxylate and the carboxylate side chains in the binding site should destabilize binding. Additionally, steric clashes of the methyl group on the anomeric oxygen and the bulky C(5) N-acetyl group with the protein surface should disfavor α-Neu5Ac complexation (Fig. 4a). The destabilizing interactions with α-Neu5Ac cannot be mitigated by rotating bonds or by adopting accessible low-energy conformations. Future experiments with protein variants and ligand analogs will be useful in testing this proposed evasion mechanism. Figure 4: Models for hIntL-1 interacting with relevant saccharide epitopes from humans (α-Neu5Ac) or microbes (α-KDO). ( a ) Docking of methyl-α-Neu5Ac into the hIntL-1 structure. The conformation shown is similar to that observed in other protein structures with a methyl-α-Neu5Ac ligand (PDB 2BAT, 2P3I, 2P3J, 2P3K, 2I2S, 1KQR, 1HGE and 1HGH; refs. 56 , 57 , 58 , 59 , 60 ). All models in this figure were generated from the allyl-β-D-Gal f –bound structure by docking the relevant diol of each compound into the Gal f diol electron density in Coot without further refinement. Calcium ions are shown in green and ordered water molecules in red. ( b ) Docking of methyl-α-KDO into the hIntL-1 structure. Comparison with methyl-α-Neu5Ac docked into the hIntL-1 structure reveals differences in the steric requirements for binding of each molecule.
hIntL-1 comparison with ficolins

The FBD of hIntL-1 suggested that hIntL-1 would be structurally related to the ficolins. With the structure of an X-type lectin complex in hand, it is now apparent that, outside the FBD, intelectins and ficolins deviate extensively. IntLs lack the collagen-like domain that mediates complement activation. Additionally, the hIntL-1 carbohydrate-recognition domain is larger than that of the ficolins, and hIntL-1 coordinates three calcium ions, two of which are buried, whereas the ficolins bind only a single calcium ion. Finally, the carbohydrate-binding site and mode of recognition differ. The ficolin calcium ion is not found in the glycan-binding site; in contrast, a surface-exposed calcium ion in hIntL-1 participates directly in glycan binding (Supplementary Fig. 3c). Together, the data suggest that X-type lectins, of which the hIntL-1 structure serves as the founding member, constitute a distinct protein structural class.

hIntL-1 binding to S. pneumoniae

Because hIntL-1 is expressed in mucosal tissues, we examined its binding to immunologically distinct serotypes of the encapsulated human lung pathogen S. pneumoniae, the causative agent of several diseases, including pneumonia, meningitis and septicemia 45 . The surface-exposed pneumococcal capsular polysaccharide is among the first microbial antigens encountered by the immune system upon challenge 46 . This capsule is important for pathogen survival and is associated with virulence. Antibodies targeting the capsule have been shown to be protective against pneumococcal diseases, an observation that was previously leveraged to develop a polysaccharide-based vaccine that is protective against streptococcal infections 47 .
The serotypes that we selected possess glycans that were present on the microbial glycan array: serotype 8 displays a glycan that lacks a terminal diol, serotype 43 displays a phosphoglycerol unit, and serotypes 20 and 70 possess β-Gal f residues 46 (chemical structures in Fig. 5a ). The data indicate that hIntL-1 binds to the surfaces of serotypes 20, 70 and 43, each of which displays cell-surface glycans with an exocyclic, terminal 1,2-diol ( Fig. 5b–d and Supplementary Fig. 6 ). As predicted by the structure of the β-Gal f –hIntL-1 complex, binding to these strains depends on calcium ion–mediated coordination, and glycerol functions as a competitive ligand ( Fig. 5b,d ). The relative fluorescence intensity of hIntL-1 binding to whole bacteria is generally consistent with the results predicted by the microbial glycan array. Specifically, hIntL-1 bound to strains that display β-Gal f (i.e., hit 13 from the microbial array, Table 1 ), but it interacted most avidly with the serotype displaying the D -glycerol-1-phosphate–modified saccharide that was the top hit from the microbial glycan array ( Fig. 5c ). These data suggest that the relative ligand ranking from the array analysis can provide information about how effectively a lectin can target cells displaying those glycans. Moreover, the results demonstrate that hIntL-1 specifically recognizes structurally diverse exocyclic 1,2-diol–containing glycans on bacterial cell surfaces. Figure 5: hIntL-1 binds to S. pneumoniae serotypes producing capsular polysaccharides with terminal vicinal diols. ( a ) Chemical structure of the capsular polysaccharides displayed on the S. pneumoniae serotypes (8, 20, 43 and 70) tested. The Gal f residues assumed to mediate hIntL-1 cell binding are shown in red, and the phosphoglycerol moiety is shown in blue. Ac, acetyl. ( b ) Fluorescence microscopy of hIntL-1 binding to S. pneumoniae serotype 20. Bacteria were treated with Strep -tagged hIntL-1 (15 μg/ml). 
Red, anti– Strep -tag antibody conjugate; blue, cellular DNA visualized with Hoechst. hIntL-1 at the surface of serotype 20 bacteria in the presence of Ca 2+ (left) or EDTA (right). Images are representative of more than five fields of view per sample. Scale bars, 2 μm. (Results for serotypes 43, 70 and 8 are shown in Supplementary Fig. 6a.) ( c , d ) Flow cytometry analysis of Strep –hIntL-1 binding to S. pneumoniae serotypes with an anti– Strep -tag antibody conjugate. In the anti- Strep control sample, recombinant hIntL-1 was omitted. Cells were labeled with propidium iodide. ( c ) Flow cytometry analysis of serotypes 8, 20, 43 and 70. Data were collected consecutively with identical instrument settings. ( d ) Dependence of the hIntL-1–carbohydrate interaction on Ca 2+ , tested by addition of 10 mM EDTA. Ligand selectivity was tested by addition of 100 mM glycerol. Data are representative of two independent experiments. (Analyses of serotypes 20, 70 and 8 are shown in Supplementary Fig. 6b.)

hIntL-1 has been reported to bind lactoferrin 24 , a protein that appears to have antimicrobial activity 48 . These observations suggest that hIntL-1 could recruit lactoferrin to microbial cell surfaces for cell killing. To examine the interaction between these proteins, we immobilized human lactoferrin and assayed hIntL-1 binding by ELISA. As reported, we detected an interaction between lactoferrin and hIntL-1, but in our assay, in contrast to previous reports, this interaction did not require calcium ions. The apparent affinity that we measured for the hIntL-1 trimer is rather weak for a specific protein-protein interaction (K d of ∼500 nM). The isoelectric points (pI) of the proteins (∼5.5 for hIntL-1 and ∼8.5 for lactoferrin) suggest that the interaction may be mediated by bulk Coulombic interactions. We were unable to detect any killing of S.
pneumoniae by human lactoferrin (up to 100 μg/ml) in a buffer that would be compatible with hIntL-1 binding to the cell surface (HEPES-buffered saline, pH 7.4, with 2 mM CaCl 2 ). Our results were consistent with those of others who noted that the bactericidal activity of lactoferrin is abolished under similar conditions 49 , 50 . These initial data are inconsistent with a central role for lactoferrin–intelectin complexes in mediating microbial cell killing, and they suggest that other functional roles for hIntL-1 should be explored.

Mouse IntL-1 binding to Gal f

If the role of intelectins is to participate in defense against microbes, the recognition specificity of intelectins from other mammals should be preserved. We therefore produced mouse IntL-1, the mouse homolog 27 of hIntL-1. When we tested mouse IntL-1 with the same SPR assay used with the human homolog, its glycan-recognition properties were analogous: it failed to interact with β-ribofuranose, β-arabinofuranose, α-rhamnopyranose or β-Gal p , but it did interact with β-Gal f (Supplementary Fig. 7). These data support the prospect that IntLs from different species have evolved to bind widely distributed 1,2-diol–containing epitopes unique to microbes.

Discussion

Data from glycan microarrays reveal that hIntL-1 recognizes multiple microbial glycan epitopes yet paradoxically can discriminate between microbial and mammalian glycans. By determining the structure of this X-type lectin bound to Gal f , we have resolved this apparent contradiction. The five saccharide epitopes identified as recognition motifs (Gal f , phosphoglycerol, glycero-D-manno-heptose, KDO and KO) share a common feature: a terminal acyclic 1,2-diol. The hIntL-1 X-ray structure indicates that these terminal vicinal hydroxyl groups can coordinate to a protein-bound calcium ion. This binding mode has similarities to that used by another major class of mammalian carbohydrate-binding proteins: the C-type lectins 16 .
C-type lectins also recognize glycans through calcium ions in the binding site, to which carbohydrate hydroxyl groups coordinate 7 . In the case of C-type lectins, however, the hydroxyl groups are typically those on the pyranose ring of a mannose, fucose or galactose residue. The hIntL-1–binding pocket requires that any 1,2-diol motif possess a primary hydroxyl group because the aromatic side chains of W288 and Y297 act as walls that preclude the binding of more substituted diols. These aromatic side chains could contribute not only to specificity but also to affinity. The positioning of Y297 could allow it to participate in a CH-π interaction 51 , which would enhance binding. Although the terminal 1,2-diol is necessary for hIntL-1 recognition, it is not sufficient. The lectin is unable to bind human glycans, including those with an α-Neu5Ac residue. This result was initially puzzling because glycans with α-Neu5Ac residues were prevalent on the mammalian glycan microarray, and although many glycans in this array present a terminal 1,2-diol, none were bound by hIntL-1. We were unable to model methyl-α-Neu5Ac in the hIntL-1 binding site without incurring Coulombic repulsion or severe steric clashes. These observations suggest a molecular basis for hIntL-1's ability to avoid interaction with human glycans. With a structure that identifies the glycan-binding site, the proposed rationale for hIntL-1's selectivity for microbial glycans can be tested further. We anticipate that our structure will also provide insight into the physiological roles of the intelectins. The upregulation of intelectins upon infection suggests that they may function in innate immunity. Although existing data from genome-wide association studies have not directly linked intelectin mutations to increased susceptibility to infection, studies have linked hIntL-1 to asthma 52 and Crohn's disease 53 , both of which arise from defects at mucosal surfaces where intelectins are secreted.
Moreover, the amino acid variant V109D is associated with an increased risk of asthma 52 . Our structure reveals that this residue is not centrally important for binding, but it is located at a monomer-monomer interface. We postulate that the trimeric form of hIntL-1 is important for the lectin's function. The presence of three binding sites on one face of the hIntL-1 trimer (Fig. 3a) suggested that the protein could exploit multivalency to recognize relevant terminal 1,2-diol motifs and bind avidly to microbes. We therefore tested whether hIntL-1's selectivity for glycans would be manifested in a proclivity to engage only those S. pneumoniae serotypes whose capsular polysaccharides possess hIntL-1 recognition motifs. Our finding that hIntL-1 bound to strains bearing Gal f (serotypes 20 or 70) or phosphoglycerol (serotype 43) but not to those lacking the requisite terminal 1,2-diol (serotype 8) highlights the advantages of using a simple binding epitope: hIntL-1 is not restricted to binding solely one glycan building block; instead, it can interact with bacterial cells that present glycans composed of very different components (Gal f versus phosphoglycerol). Because it engages a small epitope found within microbial glycans, hIntL-1 should be capable of recognizing a wide variety of microbes. We analyzed the 20 most common glycan building blocks unique to microbes 32 and found that half of these possess an acyclic 1,2-diol that could, in principle, be recognized by intelectins (structures in Fig. 6). Whether a given microbe generates glycan ligands for hIntL-1 can often be inferred from genomic sequence data. For example, organisms bearing Gal f residues have a glf gene 29 . D-glycerol-1-phosphate–modified glycans are generated from the activated donor CDP-D-glycerol, so organisms that produce them will encode functional homologs of the S. pneumoniae gct gene 46 .
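The genomic inference described above can be framed as a lookup from marker genes to the 1,2-diol epitopes whose biosynthesis they imply. The sketch below is a deliberately partial illustration: glf and gct are the genes named in the text, while the helper function and the mapping itself are hypothetical placeholders, not a curated annotation pipeline.

```python
# Illustrative marker-gene -> epitope mapping; glf and gct come from the text,
# and this toy lookup is not a validated annotation resource.
EPITOPE_MARKERS = {
    "glf": "beta-Galf",               # UDP-galactopyranose mutase
    "gct": "D-glycerol-1-phosphate",  # CDP-D-glycerol pathway (S. pneumoniae)
}

def predicted_hintl1_epitopes(annotated_genes):
    """Return the hIntL-1-recognizable epitopes implied by a gene list."""
    return sorted({EPITOPE_MARKERS[g] for g in annotated_genes if g in EPITOPE_MARKERS})

print(predicted_hintl1_epitopes(["recA", "glf", "dnaK"]))  # → ['beta-Galf']
```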
Pathways that lead to the incorporation of heptose, KO and KDO are known, because these residues are found in lipopolysaccharide 54 and in the capsular (K) antigen of Gram-negative bacteria 55 . The orientation of the saccharide-binding sites on a single face of the hIntL-1 trimer not only enhances the avidity of cell-surface binding but also provides a surface for recruitment of other immune proteins or effectors to a hIntL-1–bound microbe. The remarkable selectivity of hIntL-1 for microbial over human cell-surface glycans raises the intriguing possibility that IntLs function as microbial detectors. It is possible that this selective microbial recognition can be harnessed to deliver cargo to microbes, to detect them or to target them for destruction. Figure 6: Structures of the 20 most prevalent monosaccharides that are unique to bacterial glycans. The most common, L,D-α-heptose, is shown in the top left corner, and number 20, β-L-arabinose-4N, is shown at the bottom right. This figure is derived from data in ref. 32 . Terminal acyclic 1,2-diol epitopes that could serve as ligands of hIntL-1 are highlighted with a red box. A cross symbol designates monosaccharides for which no stereochemical information was provided.

Methods

Chemical synthesis of glycans. Procedures for glycan synthesis are described in detail in the Supplementary Note. Native human intelectin-1 expression and purification. The cDNA for hIntL-1 ( NM_017625 ) was obtained from Open Biosystems clone LIFESEQ2924416 as a glycerol stock (GE Healthcare). The full coding sequence, residues 1–313, was amplified by PCR with the forward primer 5′-CGTGGGATCCTGGAGGGAGGGAGTGAAGGAGC-3′ and the reverse primer 5′-GCCAGCTCGAGACCTTGGGATCTCATGGTTGGGAGG-3′. The primers included sites for the restriction endonucleases BamHI and XhoI, respectively. The doubly digested PCR fragment encoding hIntL-1 was ligated into a doubly digested pcDNA4/myc-HisA vector backbone (Life Technologies).
Correct insertion was confirmed by DNA sequencing (UW–Madison Biotechnology Center). The gene encoding hIntL-1 was expressed via transient transfection of suspension-adapted HEK 293T cells obtained from the American Type Culture Collection (ATCC). Cells were transfected in Opti-MEM I Reduced Serum Medium (Life Technologies) at ∼2 × 10 6 cells/mL with Lipofectamine 2000 (Life Technologies), according to the manufacturer's protocol. Six hours after transfection, the culture medium was exchanged to FreeStyle F17 expression medium (Life Technologies) supplemented with 50 U/mL penicillin-streptomycin, 4 mM L-glutamine, 1× nonessential amino acids, 0.1% FBS and 0.1% Pluronic F-68 (Life Technologies). Cells expressing hIntL-1 were cultured for up to 6 d, or until viability decreased below 60%, at which point the conditioned expression medium was harvested by centrifugation and sterile filtration. Conditioned medium was adjusted to pH 7.4 by slow addition of a 0.1 M solution of sodium hydroxide (NaOH), and calcium chloride (CaCl 2 ) was added from a 1 M stock solution to achieve a final concentration of 10 mM. Recombinant hIntL-1 was purified on a β-Gal f column generated by coupling a β-Gal f glycoside, bearing an anomeric linker terminating in an amine, to UltraLink Biosupport (Pierce). The resulting resin was washed with a solution of 20 mM HEPES, pH 7.4, 150 mM sodium chloride (NaCl) and 10 mM CaCl 2 . hIntL-1 was eluted with a solution of 20 mM HEPES, pH 7.4, 150 mM NaCl and 10 mM EDTA, and the protein was concentrated with a 10,000 molecular-weight-cutoff (MWCO) Amicon Ultra centrifugal filter. The buffer was exchanged to 20 mM HEPES, pH 7.4, 150 mM NaCl and 1 mM EDTA. Protein purity was assessed by SDS-PAGE and Coomassie blue staining and was typically >95%.
The concentration of hIntL-1 was determined from absorbance at 280 nm, with a calculated ɛ = 237,400 cm −1 M −1 for the trimer and an estimated trimer molecular mass of 101,400 Da (to account for glycosylation). Typical yields from a 30-mL transfection were 400 μg. Expression and purification of Strep -tag II hIntL-1. An N-terminal Strep -tag II was introduced into the hIntL-1::pcDNA4 vector by site-directed mutagenesis with a primer set composed of 5′-ACCACCAGAGGATGGAGTACAGATTGGAGCCATCCGCAGTTTGAAAAGTCTACAGATGAGGCTAATACTTACTTCAAGGA-3′ and its reverse complement. Correct insertion was confirmed by DNA sequencing. Strep –hIntL-1 was expressed identically to hIntL-1. For purification, conditioned Strep –hIntL-1 medium was adjusted to pH 7.4 with NaOH, avidin was added per the IBA protocol (IBA, cat. no. 2-0205-050), CaCl 2 was added to 10 mM and the solution was cleared by centrifugation (15,000 g for 15 min). Protein was captured onto 2 mL of Strep -Tactin Superflow resin (IBA, cat. no. 2-1206-002). The resulting resin was washed with a solution of 20 mM HEPES, pH 7.4, 150 mM NaCl and 10 mM CaCl 2 and then 20 mM HEPES, pH 7.4, 150 mM NaCl and 1 mM EDTA. The protein was eluted with 5 mM d-desthiobiotin (Sigma) in 20 mM HEPES, pH 7.4, 150 mM NaCl and 1 mM EDTA and concentrated with a 10,000-MWCO Amicon Ultra centrifugal filter. The concentration of Strep –hIntL-1 was determined from absorbance at 280 nm, with a calculated ɛ = 237,400 cm −1 M −1 for the trimer and an estimated trimer molecular mass of 101,400 Da. Typical yields were similar to those measured for untagged hIntL-1. For protein X-ray crystallography, Strep –hIntL-1 was purified after culture-medium dialysis against 20 mM bis-Tris, pH 6.7, 150 mM NaCl and 1 mM EDTA. The pH of the culture medium was adjusted to 6.7, avidin was added per the IBA protocol, CaCl 2 was added to 10 mM and the solution was cleared by centrifugation.
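The A280-based concentration determinations described above are Beer-Lambert calculations: c = A/(ε·l), scaled by the molar mass to convert to mg/mL. A minimal sketch using the trimer values quoted in the text (ε = 237,400 M⁻¹ cm⁻¹, ∼101,400 Da); the 1-cm path length and the example absorbance are assumptions of this illustration.

```python
def conc_mg_per_ml(a280, eps_m1cm1, mass_da, path_cm=1.0):
    """Protein concentration (mg/mL) from absorbance via Beer-Lambert.

    c (mol/L) = A / (epsilon * path); multiplying by the molar mass in Da
    (g/mol) gives g/L, which equals mg/mL.
    """
    molar = a280 / (eps_m1cm1 * path_cm)
    return molar * mass_da

# An assumed A280 of 0.5 for the hIntL-1 trimer corresponds to ~0.21 mg/mL.
print(conc_mg_per_ml(0.5, 237400.0, 101400.0))
```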
Protein was purified by capture onto Strep -Tactin Superflow resin. Resin was washed with 20 mM bis-Tris, pH 6.7, 150 mM NaCl and 10 mM CaCl 2 and then with 20 mM bis-Tris, pH 6.7, 150 mM NaCl and 0.5 mM EDTA. Protein was eluted with 5 mM d-desthiobiotin (Sigma) in 20 mM bis-Tris, pH 6.7, 150 mM NaCl and 0.5 mM EDTA and concentrated with a 10,000-MWCO Amicon Ultra centrifugal filter. hIntL-1 carbohydrate binding ELISA-like assay. To fabricate carbohydrate-displaying surfaces, 0.5 μg of streptavidin (Prozyme, cat. no. SA20) was adsorbed onto a Maxisorp (Nunc) flat-bottomed 96-well plate in PBS. Wells were washed with PBS and then coated with 5 μM carbohydrate-biotin ligand in PBS for 1 h at 22 °C. Wells were blocked with bovine serum albumin (BSA) in ELISA buffer (20 mM HEPES, pH 7.4, 150 mM NaCl, 10 mM CaCl 2 and 0.1% Tween-20). Samples containing hIntL-1 were prepared by serial dilution into ELISA buffer with 0.1% BSA and added to wells for 2 h at 22 °C. Wells were washed four times with ELISA buffer. Bound hIntL-1 was detected with 0.75 μg/mL of a sheep polyclonal IgG hIntL-1 antibody (R&D Systems, cat. no. AF4254) in ELISA buffer with 0.1% BSA for 2 h at 22 °C. This primary antibody has been validated by the manufacturer for detecting intelectin by western blot, immunohistochemistry and direct ELISA. Wells were washed with ELISA buffer. A donkey anti-sheep IgG horseradish peroxidase (HRP) conjugate (Jackson ImmunoResearch Laboratories) was added at a 1:5,000 dilution in ELISA buffer with 0.1% BSA for 1 h at 22 °C. When Strep –hIntL-1 was assayed, Strep MAB-Classic HRP conjugate (IBA, cat. no. 2-1509-001) was used to specifically recognize the Strep -tag II of bound hIntL-1. Strep MAB-Classic HRP conjugate was diluted 1:10,000 in ELISA buffer with 0.1% BSA and incubated for 2 h at 22 °C. Wells were washed. hIntL-1 was detected colorimetrically with addition of 1-Step Ultra TMB-ELISA (Pierce).
Once sufficient signal was achieved (typically in <2 min), the reaction was quenched by addition of an equal volume of 2 M sulfuric acid (H 2 SO 4 ). Plates were read at 450 nm on an ELx800 plate reader (Bio-Tek). When testing the calcium-ion dependency of hIntL-1, 1 mM EDTA replaced 10 mM CaCl 2 in all steps. Data were analyzed in Prism6 (GraphPad) and fit to a one-site binding equation. Surface plasmon resonance (SPR). Analysis of intelectins by SPR was conducted on a ProteOn XPR36 (Bio-Rad) at the University of Wisconsin−Madison Department of Biochemistry Biophysics Instrumentation Facility (BIF). To measure intelectin binding, ProteOn NLC (NeutrAvidin-coated) sensor chips (Bio-Rad, cat. no. 176-5021) were used to capture the biotinylated carbohydrate ligand. All experiments presented here were conducted at surface-saturated levels of ligand, ∼200 response units (RU). In all experiments, captured biotin was used in flow cell one as a control. Samples containing purified intelectin were prepared by serial dilution into intelectin SPR running buffer (20 mM HEPES, pH 7.4, 150 mM NaCl, 1 mM CaCl 2 and 0.005% Tween-20). Surfaces were regenerated with short injections of 10 mM hydrochloric acid (HCl). Data were referenced with either the interspots or the biotin reference channel and processed with the Bio-Rad ProteOn software package. Construction of the furanoside glycan array. The microarray of furanoside-containing glycans was printed as previously described 61 , 62 . Briefly, the amine-functionalized glycans shown in Supplementary Figure 6a were dissolved in 100 mM sodium phosphate, pH 8.0, and printed as 14 arrays on N -hydroxysuccinimidyl (NHS) ester–activated slides (Schott Nexterion). Arrays were printed in quadruplicate at different glycan concentrations (as indicated in Supplementary Fig. 6b) with a Piezorray printer (PerkinElmer) that delivered 0.33 nL per spot.
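The one-site binding fit mentioned above models the signal as Y = Bmax·[L]/(Kd + [L]). Prism performs a nonlinear least-squares fit; the sketch below instead recovers the parameters from noiseless synthetic data via the double-reciprocal linearization, purely to illustrate the model (the concentrations and parameters are invented, and this is not the analysis pipeline used for the reported data).

```python
def one_site(x, bmax, kd):
    """Single-site binding isotherm: Y = Bmax * x / (Kd + x)."""
    return bmax * x / (kd + x)

def fit_one_site(xs, ys):
    """Recover (Bmax, Kd) from noise-free data via the linearization
    1/Y = (Kd/Bmax)(1/x) + 1/Bmax, using ordinary least squares."""
    inv_x = [1.0 / x for x in xs]
    inv_y = [1.0 / y for y in ys]
    n = float(len(xs))
    mx = sum(inv_x) / n
    my = sum(inv_y) / n
    slope = sum((a - mx) * (b - my) for a, b in zip(inv_x, inv_y)) / sum(
        (a - mx) ** 2 for a in inv_x
    )
    intercept = my - slope * mx
    bmax = 1.0 / intercept
    return bmax, slope * bmax  # Kd = slope * Bmax

xs = [0.05, 0.1, 0.25, 0.5, 1.0, 2.5]     # hypothetical ligand concentrations
ys = [one_site(x, 1.8, 0.5) for x in xs]  # synthetic, noise-free signal
print(fit_one_site(xs, ys))  # ≈ (1.8, 0.5)
```

With real, noisy data the double-reciprocal transform distorts the error structure, which is why nonlinear fitting (as in Prism) is preferred in practice.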
The 2-amino-N-(2-aminoethyl)benzamide (AEAB) derivatives of lacto-N-neotetraose (LNnT) and asialo, galactosylated biantennary N-linked glycan (NA2) were printed as controls to confirm glycan immobilization. After printing, covalent coupling of glycans to the surface was facilitated by incubation at 55 °C in an atmosphere of >80% humidity for 1 h. Slides were dried in a desiccator overnight and blocked with a solution of 50 mM ethanolamine in 50 mM borate buffer, pH 8.0. Prior to interrogation with glycan-binding proteins (GBPs), the arrays were rehydrated in binding buffer. Assay of hIntL-1 on furanoside and CFG mammalian glycan array. GBPs at various concentrations were applied to separate furanoside arrays in 70 μL of binding buffer (20 mM HEPES, pH 7.4, 150 mM NaCl, 1 mM EDTA, 10 mM CaCl 2 , 1% BSA and 0.05% Tween-20) in the wells formed on the slide with a silicone grid (14 wells per slide). After incubation for 1 h at room temperature, the slides were washed with wash buffer (20 mM HEPES, pH 7.4, 150 mM NaCl, 1 mM EDTA, 10 mM CaCl 2 and 0.05% Tween-20). The biotinylated lectins Erythrina cristagalli lectin (ECL) and Ricinus communis agglutinin I lectin (RCA-I) were detected with Alexa Fluor 488–labeled streptavidin (10 μg/ml) in binding buffer (Supplementary Fig. 6c,d). hIntL-1 was detected with the same sheep polyclonal IgG antibody specific for hIntL-1 (5 μg/ml) (R&D Systems) and an Alexa Fluor 488–labeled donkey anti-sheep IgG secondary antibody (5 μg/ml) (Life Technologies). Bound protein was detected with a ProScanArray Scanner (PerkinElmer) equipped with four lasers covering an excitation range from 488 to 633 nm. The data from the furanoside glycan array were analyzed with ScanArray Express (PerkinElmer) as the average of the four replicates.
For the analysis of the CFG glycan array 36 , hIntL-1 was applied in 70 μl at concentrations of 50 and 200 μg/ml in binding buffer under a coverslip to distribute the solution evenly over the large array of 610 glycans printed in sextuplicate (Array v5.1). After washing and scanning steps, the data from the CFG glycan microarray were analyzed with ImaGene software (BioDiscovery) as the average of four values after removal of the high and low values of the six replicates. With both the furanoside and mammalian glycan array, the images were converted to Excel files, and the data are reported as histograms of average relative fluorescence units (RFU) versus the print identification numbers that identified the glycan targets. Figures were made with Prism6 (GraphPad) or Excel (Microsoft). Assay of hIntL-1 on the bacterial glycan array. Strep –hIntL-1 was used to interrogate the Microbial Glycan Microarray version 2 (MGMv2). Construction of the MGMv2 was as previously described 38 . Briefly, bacterial polysaccharide samples were dissolved and diluted to 0.5 mg/mL in printing buffer (150 mM sodium phosphate, pH 8.4, and 0.005% Tween-20). Samples were immobilized on NHS-activated glass slides (SlideH, Schott/Nexterion) with a MicroGrid II (Digilab) contact microarray printer equipped with SMP-4B printing pins (Telechem). Six replicates of each bacterial glycan sample were printed. Covalent coupling of glycans to the surface was facilitated by incubation for 1 h after printing at 100% relative humidity. The remaining reactive NHS moieties were quenched with a blocking solution (50 mM ethanolamine in 50 mM borate buffer, pH 9.2). Blocked slides were stored at −20 °C until assays were performed. To interrogate the MGMv2, Strep –hIntL-1 was diluted to 50 μg/mL in binding buffer (20 mM Tris-HCl, pH 7.4, 150 mM NaCl, 2 mM CaCl 2 , 2 mM magnesium chloride (MgCl 2 ), 1% BSA and 0.05% Tween-20) and applied directly to the array surface for 1 h. 
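The replicate-reduction convention used for the array analyses above (reporting the mean of four values after discarding the highest and lowest of six replicate spots) is a simple trimmed mean. A minimal sketch, assuming exactly one high and one low value are dropped regardless of ties, and using invented RFU values:

```python
def trimmed_mean(replicates):
    """Average after removing the single highest and lowest values.

    Mirrors the array-analysis convention of averaging four of six
    replicate spot intensities; requires at least three values.
    """
    if len(replicates) < 3:
        raise ValueError("need at least three replicates")
    vals = sorted(replicates)[1:-1]
    return sum(vals) / len(vals)

# Six hypothetical replicate RFU values with one low and one high outlier.
print(trimmed_mean([980, 1020, 1050, 1010, 400, 2500]))  # → 1015.0
```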
After incubation, the array was washed by dipping into binding buffer four times. The Strep -tag II on bound hIntL-1 was detected with Strep MAB-Classic Chromeo 647 nm (10 μg/mL, IBA Lifesciences) diluted in binding buffer, applied directly to the array surface and allowed to incubate for 1 h. The array was washed in binding buffer (four dips), binding buffer without BSA and Tween-20 (four dips) and deionized water (four dips). Finally, the array was dried by centrifugation and scanned. Interrogated arrays were scanned for Chromeo 647 signal with a ProScanArray Express scanner (PerkinElmer), and resultant images were processed to extract signal data with ImaGene (v6.0, BioDiscovery). Signal data were calculated as the average of four values after removal of the high and low values of the six replicates. Data were plotted with Excel (Microsoft) as average relative fluorescence units (RFU) versus print identification number. Figures were made with Prism6 (GraphPad). Protein X-ray crystallography. The Strep –hIntL-1 protein that was purified with 20 mM bis-Tris, pH 6.7, was concentrated to 1.5 mg/mL, 1 M CaCl 2 was added to a final concentration of 10 mM, and crystallization (hanging-drop vapor diffusion) was achieved by mixture of 1 μL of the protein solution and 1 μL of well solution (100 mM bis-Tris, pH 6.0, and 25% PEG 3350). Crystals grew to full size in 2 weeks. Protein crystals of apo–hIntL-1 were cryoprotected via transfer to well solution supplemented to a total concentration of 35% PEG 3350 for 1 min and then were vitrified in liquid nitrogen. The allyl-β-Gal f –hIntL-1 complex was formed by soaking of apo–hIntL-1 crystals in cryoprotection solution supplemented with 50 mM allyl-β- D -galactofuranose for 2 weeks. Single-crystal X-ray diffraction experiments were performed at beamline 21-ID-D (Life Sciences Collaborative Access Team, LS-CAT), Advanced Photon Source, Argonne National Laboratory.
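For both the CFG mammalian array and the bacterial array, the per-glycan signal was calculated as the average of the four remaining values after removal of the highest and lowest of the six replicate spots. A minimal Python sketch of this trimmed averaging (the function name and RFU values are illustrative, not from the paper):

```python
def trimmed_replicate_mean(replicates):
    """Average replicate spot intensities after dropping the single
    highest and single lowest value, as described for the six
    replicate spots per glycan on the arrays."""
    if len(replicates) < 3:
        raise ValueError("need at least 3 replicates to trim")
    ordered = sorted(replicates)
    trimmed = ordered[1:-1]  # discard one low and one high value
    return sum(trimmed) / len(trimmed)

# Six replicate RFU values for one glycan spot (illustrative numbers)
rfu = [100, 600, 300, 200, 500, 400]
print(trimmed_replicate_mean(rfu))  # average of [200, 300, 400, 500] -> 350.0
```

This trimming makes the reported signal robust to a single bright or missing spot within each set of six replicates.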
The wavelength for data collection was 0.97924 Å for the apo–hIntL-1 structure and 1.00394 Å for Gal f -bound hIntL-1. Integration, scaling, and merging were performed with HKL2000 (ref. 63 ). The structure was solved with the PHENIX suite 64 . The Xenopus laevis intelectin structure recently solved in our laboratory was used as a search model to determine the structure of apo–hIntL-1 by molecular replacement with Phaser 65 . Because the data for apo–hIntL-1 and β-Gal f −bound hIntL-1 are isomorphous, the structure of β-Gal f −bound hIntL-1 was solved by a difference Fourier method with apo–hIntL-1 as a starting model for rigid-body refinement with phenix.refine 66 . The chemical restraint for β-Gal f was generated by PRODRG 67 . Model adjustment and refinement were performed in Coot 68 and phenix.refine, respectively ( Supplementary Table 1 ). The model was validated with MolProbity 69 . Crystal structure figures were generated with PyMOL. hIntL-1 binding to S. pneumoniae. S. pneumoniae (Klein) Chester serotypes 8 (ATCC 6308), 20 (ATCC 6320), 43 (ATCC 10343) and 70 (ATCC 10370) were obtained from the ATCC. The structure of the capsular polysaccharide from each of these serotypes has been previously determined 46 . Cells were revived in tryptic soy broth containing 5% defibrinated sheep blood. Cells were grown on plates of tryptic soy agar containing 5% defibrinated sheep blood or in suspension in Luria Broth (LB). Cells were grown at 37 °C under 5% carbon dioxide gas. During liquid culture, cells were shaken at 100 r.p.m. To analyze hIntL-1 binding to the bacterial cell surface, cells were harvested by centrifugation, washed with PBS and fixed in 1% formaldehyde in PBS for 30 min on ice. Cells were stained with 15 μg/mL Strep –hIntL-1 with a 1:250 dilution of Strep MAB-Classic Oyster 645 conjugate (IBA, cat. no. 2-1555-050) in 20 mM HEPES, pH 7.4, 150 mM NaCl, 10 mM CaCl 2 , 0.1% BSA and 0.05% Tween-20 for 2 h at 4 °C.
To test the calcium-ion dependency of binding, 20 mM HEPES, pH 7.4, 150 mM NaCl, 10 mM EDTA, 0.1% BSA and 0.05% Tween-20 was used as the buffer. To assay for competitive inhibition by soluble glycerol, 20 mM HEPES, pH 7.4, 150 mM NaCl, 10 mM CaCl 2 , 100 mM glycerol, 0.1% BSA and 0.05% Tween-20 was used as the buffer. Cells were washed with 20 mM HEPES, pH 7.4, 150 mM NaCl, 10 mM CaCl 2 , 0.1% BSA and 0.05% Tween-20, aggregates were removed with a flow cytometry cell-strainer cap (Falcon) and propidium iodide (Life Technologies) was added to a 1:500 dilution. Cells were analyzed on a BD FACSCalibur at the University of Wisconsin−Madison Carbone Cancer Center (UWCCC) Flow Cytometry Laboratory. Propidium iodide was used to differentiate fixed S. pneumoniae cells from debris. Data were analyzed with FlowJo. For analysis by microscopy, cell aliquots were taken directly from the flow cytometry samples before propidium iodide staining. Samples were subsequently stained with Hoechst 33342 (Life Technologies). Each sample was spotted onto a glass-bottomed microwell dish (MatTek Corporation) and covered with a 1% (w/v) agarose pad prepared in a matched buffer. Images were collected at room temperature with a Nikon A1 laser scanning confocal microscope (Nikon Instruments). Images were acquired with a Nikon Plan Apo 100×/1.4 oil objective with a 1.2-AU pinhole diameter and NIS-elements C software (Nikon Instruments). Laser settings were determined by imaging the brightest control sample, serotype 43 treated with 15 μg/mL Strep –hIntL-1 and a 1:250 dilution of Strep MAB-Classic Oyster 645 conjugate in calcium buffer, to prevent pixel oversaturation. The pinhole diameter, offset, PMT gain and laser power were then held constant for each prepared sample. Each image was taken at the Z plane that provided maximal signal for the given section. For Hoechst 33258, illumination was performed with a 405-nm laser, and emission was collected between 425 and 475 nm.
For Strep MAB-Classic Oyster 645 conjugate, illumination was performed with a 638-nm laser, and emission was collected between 663 and 738 nm. Images were prepared with the open-source Fiji distribution of ImageJ, and brightness and contrast were adjusted in the control sample (serotype 43 treated with 15 μg/mL Strep –hIntL-1 with a 1:250 dilution of Strep MAB-Classic Oyster 645 conjugate in calcium buffer) and propagated to all selected sample images for comparison. Images were then converted to RGB format to preserve normalization and assembled into panels. Expression of mouse intelectin-1. A detailed description of mIntL-1 expression is available in the Supplementary Note . Accession codes. Coordinates and structure factors have been deposited in the Protein Data Bank under accession codes 4WMQ (apo−hIntL-1) and 4WMY (Gal f -bound hIntL-1). Referenced accessions (Protein Data Bank): 1HGE, 1HGH, 1KQR, 2BAT, 2I2S, 2J3U, 2P3I, 2P3J, 2P3K, 4WMO, 4WMQ, 4WMY.
The newfound ability of a protein of the intestines and lungs to distinguish between human cells and the cells of bacterial invaders could underpin new strategies to fight infections. Writing this week (July 6, 2015) in the journal Nature Structural & Molecular Biology, a team led by University of Wisconsin-Madison Professor Laura Kiessling describes the knack of a human protein known as intelectin to distinguish between our cells and those of the disease-causing microbes that invade our bodies. "This has the potential to change the game in terms of how we combat microbes," says Kiessling. The discovery by Kiessling and several collaborating groups also helps illuminate a previously unrecognized line of defense against microbial invaders. In addition to Kiessling's lab, groups in the labs of UW-Madison bacteriology Professor Katrina Forest, Scripps Research Institute cell and molecular biology Professor James Paulson, and Emory University biochemistry Professor Richard Cummings contributed to the study. Intelectin is not new to science, Kiessling notes, but its ability to selectively identify many different kinds of pathogens and distinguish those cells from human cells was unknown. "The protein is upregulated with infection," explains Kiessling, "and while no one has yet shown that it is an antimicrobial protein, there are multiple lines of evidence that suggest it is." The Wisconsin group established that intelectin has all the properties needed to function in the immune system's surveillance complex. That makes sense, Kiessling explains, as the protein is found mostly in the cells in the intestine and respiratory system, the places most likely to be entry points for microbial pathogens. Intelectin performs its surveillance role through its ability to selectively recognize the carbohydrate molecules that reside on the surface of cells. Both mammalian cells and microbial cells have carbohydrates known as glycans on their cell surfaces.
However, the chemical structures of the glycan molecules vary, and the molecules that decorate the surface of human cells are markedly different from those on microbial cells. By exposing human intelectin to arrays of both human and microbial glycans, Kiessling and her colleagues found that intelectin could recognize different kinds of microbes as well as distinguish between microbial and mammalian glycans. The role of intelectin in immune response, Kiessling believes, is likely ancient. The same kinds of proteins are found in many different kinds of animals, including sheep, mice, frogs, eels, fish and even sea squirts, suggesting it has been conserved through evolutionary history. The glycans to which intelectin attaches, however, can be vastly different. In humans, for example, fewer than 35 chemical building blocks are used to make the cell surface molecules. In bacteria, nature deploys more than 700 chemical building blocks to make glycans. This immense increase in diversity can make accurate detection difficult. "Human intelectin just recognizes a small portion of the glycan, a shared feature like a handle," explains Kiessling. "Then it can recognize when the handles appear, even when different types of bacteria make different glycans." The larger insight from the study could aid in the design of the next generation of antibiotics, which are urgently needed as many pathogens have become resistant to the antibiotics now most commonly used to treat infection.
10.1038/nsmb.3053
Medicine
Scientists identify a new way to activate stem cells to make hair grow
Aimee Flores et al, Lactate dehydrogenase activity drives hair follicle stem cell activation, Nature Cell Biology (2017). DOI: 10.1038/ncb3575 Journal information: Nature Cell Biology
http://dx.doi.org/10.1038/ncb3575
https://medicalxpress.com/news/2017-08-scientists-stem-cells-hair.html
Abstract Although normally dormant, hair follicle stem cells (HFSCs) quickly become activated to divide during a new hair cycle. The quiescence of HFSCs is known to be regulated by a number of intrinsic and extrinsic mechanisms. Here we provide several lines of evidence to demonstrate that HFSCs utilize glycolytic metabolism and produce significantly more lactate than other cells in the epidermis. Furthermore, lactate generation appears to be critical for the activation of HFSCs as deletion of lactate dehydrogenase (Ldha) prevented their activation. Conversely, genetically promoting lactate production in HFSCs through mitochondrial pyruvate carrier 1 (Mpc1) deletion accelerated their activation and the hair cycle. Finally, we identify small molecules that increase lactate production by stimulating Myc levels or inhibiting Mpc1 carrier activity and can topically induce the hair cycle. These data suggest that HFSCs maintain a metabolic state that allows them to remain dormant and yet quickly respond to appropriate proliferative stimuli. Main The hair follicle is able to undergo cyclical rounds of rest (telogen), regeneration (anagen) and degeneration (catagen). The ability of the hair follicle to maintain this cycle depends on the presence of the hair follicle stem cells, which reside in the bulge ( Fig. 1 ). At the start of anagen, bulge stem cells are activated by signals received from the dermal papilla, which at that stage abuts the bulge area 1 , 2 . These stem cells exit the bulge and proliferate downwards, creating a trail that becomes the outer root sheath. Bulge stem cells are capable of giving rise to all the different cell types of the hair follicle. The ability of HFSCs to maintain quiescence and yet become proliferative for a couple days before returning to quiescence is unique in this tissue, and the precise mechanism by which these cells are endowed with this ability is not fully understood. 
While significant effort has produced a wealth of knowledge on both the transcriptional and epigenetic mechanisms by which HFSCs are maintained and give rise to various lineages 3 , 4 , little is known about metabolic pathways in the hair follicle or adult stem cells in vivo . Figure 1: Lactate dehydrogenase activity is enriched in HFSCs. ( a ) IHC staining for Ldha expression across the hair cycle shows Ldha protein confined to the HFSC niche, the bulge, indicated by the bracket. IHC staining for Sox9 on serial sections demarcates the HFSC population. Scale bars, 20 μm. ( b ) Immunoblotting on FACS-isolated HFSC populations (α6low/Cd34 + and α6hi/Cd34 + ) versus total epidermis (Epi) shows differential expression of Ldha in the stem cell niche. Sox9 is a marker of HFSCs, and β-actin is a loading control. ( c ) Colorimetric assay for Ldh enzyme activity in the epidermis shows highest activity in the bulge (brackets) and subcuticular muscle layer (bracket). This activity is enriched in the bulge across different stages of the hair cycle. Activity is indicated by purple colour; pink is a nuclear counterstain. Note also that developing hair shafts in pigmented mice show strong deposits of melanin as observed here; hair shafts never displayed any purple stain indicative of Ldh activity. Scale bars, 50 μm. ( d ) Ldh activity in sorted cell populations, measured using a plate-reader-based assay, also shows the highest Ldh activity in two separate HFSC populations (α6hi/Cd34 + and α6low/Cd34 + ) compared with epidermal cells (Epi) and fibroblasts (FBs). Each bar represents the average signal for each cell type where n = 9 mice pooled from 3 independent experiments. Shown as mean ± s.e.m. Paired t -test was performed, P < 0.05 shown for each cell type versus epidermal cells. ( e ) HFSCs and epidermal cells were isolated during telogen (day 50) by FACS, and metabolites were extracted and analysed by LC–MS. 
Heatmaps show relative levels of glycolytic and TCA cycle metabolites from cells isolated from different mice in independent experiments with cells from three animals in each. G6P-F6P, glucose-6-phosphate and fructose-6-phosphate; FBP, fructose-bisphosphate; DHAP, dihydroxyacetone phosphate; 3PG, 3-phosphoglycerate; and αKG, alpha-ketoglutarate. Asterisks indicate significant difference in metabolite levels between epidermal cells and HFSCs. For ( e ) paired t -test was performed; ∗ P < 0.05; ∗ ∗ P < 0.01; ∗ ∗ ∗ P < 0.001; NS, P > 0.05; n = 9 mice pooled from 3 independent experiments. Unprocessed original scans of blots are shown in Supplementary Fig. 6 . Full size image Considering the fact that there are essentially no published data on metabolic states of any cell in the hair follicle, a detailed study of metabolism was necessary to understand the nature of HFSCs and their progeny. Several previous studies employed genetic disruption of the mitochondrial electron transport chain in the epidermis by deletion under the control of a pan-epidermal keratin promoter and found that mitochondrial function was essential for maintenance of the follicle 5 , 6 , 7 , 8 . However, these studies did not explore the metabolic requirements for specific cell types within the tissue, nor did they explore a role for glycolytic metabolism. In this study, we present methods to study the metabolism of HFSCs in vivo , and provide evidence that these cells take advantage of a distinct mode of metabolism not found in their progeny. In the process, we also define small molecules that can take advantage of the unique metabolism of HFSCs to ignite the hair cycle in otherwise quiescent follicles. RESULTS Numerous studies have uncovered unique gene expression signatures in HFSCs versus other follicle cells or cells of the interfollicular epidermis 9 , 10 , 11 , 12 . 
Many of these signatures are regulated by transcription factors that were later shown to play important roles in HFSC homeostasis 13 . Lactate dehydrogenase is most commonly encoded by the Ldha and Ldhb genes in mammals, the protein products of which form homo- or hetero-tetramers to catalyse the NADH-dependent reduction of pyruvate to lactate and NAD + -dependent oxidation of lactate to pyruvate 14 . By immunostaining, Ldha appeared to be enriched in quiescent HFSCs in situ (telogen) ( Fig. 1a ), and immunohistochemistry (IHC) with an antibody that recognizes both Ldha and Ldhb showed that only Ldha appears to be localized to the HFSC niche ( Supplementary Fig. 1a ). HFSCs are known to go through successive rounds of quiescence (telogen) punctuated by brief periods of proliferation correlating with the start of the hair cycle (telogen–anagen transition) 4 , 15 . Proliferation or activation of HFSCs is well known to be a prerequisite for advancement of the hair cycle. IHC analysis also showed that Ldha expression was enriched in HFSCs (Sox9 + ) at three stages of the hair cycle ( Fig. 1a ). Consistently, immunoblotting of lysates from sorted cells showed strong expression of Ldha in the basal (α6hi/CD34 + ) and suprabasal (α6lo/CD34 + ) HFSC populations relative to total epidermis ( Fig. 1b ) 9 (the sorting strategy is outlined in Supplementary Fig. 1b ). To determine whether Ldha expression patterns correlate with activity of the Ldh enzyme, we used a colorimetric-based enzymatic assay to assess Ldh activity capacity in situ . This assay is typically performed on protein lysates or aliquots with a plate reader 16 ; we adapted it to work in situ on frozen tissue sections. Note that since both the in situ and in vitro Ldh activity assays use excess substrate (lactate), the results from these assays reflect the capacity for Ldh activity, and not the steady-state activity.
Applying this assay to skin samples demonstrated that Ldh activity capacity was significantly higher in HFSCs, consistent with the expression pattern of Ldha ( Fig. 1c ). Furthermore, Ldh activity was enriched in HFSCs across the hair cycle ( Fig. 1c ). As a control, assays conducted without the enzymatic substrate (lactate) or on acid-treated tissue yielded zero activity ( Supplementary Fig. 1c ). To further validate these results, we sorted epidermal populations, generated lysates from the sorted cells, and performed a similar colorimetric enzymatic assay on these lysates, which also showed increased Ldh activity in HFSCs ( Fig. 1d ). To better characterize the metabolism of HFSCs, we performed metabolomics analysis on sorted populations from mouse skin by liquid chromatography–mass spectrometry (LC–MS) ( Fig. 1e ). Several glycolytic metabolites, including glucose/fructose-6-phosphate, fructose-bisphosphate, dihydroxyacetone phosphate, 3-phosphoglycerate and lactate, were routinely higher in HFSCs relative to total epidermis across three independent experiments (isolated from different mice on different days). Conversely, most TCA cycle metabolites were not consistently different between the epidermis and HFSCs ( Fig. 1e ). Collectively these results suggest that while all cells in the epidermis use the TCA cycle extensively to generate energy, HFSCs also have increased Ldha expression, Ldh activity and glycolytic metabolism. Measuring metabolism across the hair cycle would therefore capture any dynamic changes that occur in HFSCs that correlate with activation or quiescence. Analysis of RNA-seq data from HFSCs isolated during either telogen or the telogen–anagen transition demonstrated not only that Ldha is the predominant Ldh isoform expressed in HFSCs ( Fig. 2c ), but also that it is induced during the telogen–anagen transition ( Fig. 2a, b ) (NIH GEO GSE67404 and GSE51635 ).
To confirm that the cells analysed by RNA-seq were indeed either in telogen or the telogen–anagen transition, important markers of this transition were assessed including the Shh and Wnt pathways ( Gli1 , 2 , 3 ; Lef1 , Axin1 , Axin2 and Ccnd1 ) as well as proliferation markers ( Ki-67 , Pcna and Sox4 ) ( Supplementary Fig. 2a ). Figure 2: Ldh activity increases during HFSC activation. ( a ) Gene set enrichment analysis (GSEA) on RNA-seq transcriptome data from HFSCs versus total epidermis shows enrichment for glycolysis-related genes in HFSCs (normalized enrichment score (NES) = 1.72). ( b ) GSEA on microarray transcriptome data from HFSCs versus total epidermis shows enrichment for glycolysis-related genes in HFSCs (NES = 1.45). Results were generated from three mice of each condition. ( c ) RNA-seq data from HFSCs sorted during telogen or telogen–anagen transition (Tel–Ana) show induction of Ldha 35 . Data represent the average of three separate animals at each time point ( n = 3), and subjected to Student’s t -test for significance ( P < 0.05). ( d ) Ldh activity in sorted stem cell populations, measured using a plate-reader-based assay, shows elevated Ldh activity as stem cells become activated in telogen–anagen transition. Each bar represents the average signal for each condition where n = 9 mice pooled from 3 independent experiments. Shown as mean ± s.e.m. Paired t -test was performed, P < 0.05. ( e ) Heatmap showing relative levels of glycolytic and TCA cycle metabolites extracted from quiescent (Telogen, day 50), activated (Telogen–Anagen, day 70) and HFSCs that have returned to the quiescent state (Anagen, day 90). Pyr, pyruvate; Lac, lactate. Data shown were generated from n = 3 animals per time point in 3 independent experiments. Full size image The in vitro Ldh activity assay on lysates from sorted HFSCs uncovered a modest induction of Ldh activity correlating with the telogen–anagen transition ( Fig. 2d ). 
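The heatmaps of "relative levels" of metabolites (e.g. Fig. 2e) compare each metabolite across sorted populations or hair-cycle time points. The text does not specify the normalization used to make levels comparable across metabolites; a minimal sketch assuming simple per-metabolite z-scoring across samples, which is one common convention for such heatmaps (the function name and all values are illustrative, not data from the paper):

```python
import math

def zscore_row(values):
    """Scale one metabolite's measurements across samples to mean 0
    and sample standard deviation 1, so rows are comparable on a heatmap."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / (n - 1)  # sample variance
    sd = math.sqrt(var)
    return [(v - mean) / sd for v in values]

# Rows: metabolites; columns: e.g. three sorted samples (illustrative values)
metabolites = {"lactate": [1.0, 2.0, 3.0], "citrate": [5.0, 5.5, 4.5]}
relative = {name: zscore_row(vals) for name, vals in metabolites.items()}
print(relative["lactate"])  # -> [-1.0, 0.0, 1.0]
```

Per-row scaling like this shows the direction of change across conditions while discarding absolute abundance, which differs by orders of magnitude between metabolites.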
Hair cycle staging was validated by Ki-67 immunostaining to determine HFSC activation ( Supplementary Fig. 2b ). Additionally, measurements of steady-state metabolites extracted from sorted HFSCs showed an increase in lactate in HFSCs as they enter the telogen–anagen transition, and then decrease again in anagen as HFSCs return to quiescence ( Fig. 2e ). To determine whether Ldh activity is functionally related to the ability of HFSCs to remain quiescent or to activate at the start of a hair cycle, we deleted Ldha specifically in the HFSCs. Taking advantage of mice with floxed alleles of Ldha 17 , this enzyme was deleted in HFSCs by crossing to mice bearing the K15-CrePR allele 11 , known to be inducible by mifepristone specifically in HFSCs. Deletion of Ldha in HFSCs was initiated by administration of mifepristone during telogen (day 50) and led to a typically mosaic recombination of the floxed alleles across the backskin 11 , 18 . Mice with HFSC-specific deletion of Ldha failed to undergo a proper hair cycle, with most follicles remaining in telogen across at least 33 pairs of littermates 3–4 weeks after mifepristone treatment ( Fig. 3a ). A complete list of transgenic animals including birth date, sex and genotype is provided in Supplementary Table 1 . Figure 3: Deletion of Ldha blocks HFSC activation. ( a ) Ldha +/+ animals enter the hair cycle synchronously around day 70 as measured by shaving and observation beginning at day 50. K15-CrePR;Ldha fl / fl animals treated with mifepristone show defects in anagen entry. Results are representative of at least 33 animals of each genotype. ( b ) Skin pathology showing that K15-CrePR;Ldha fl / fl animals remained in telogen. Scale bars, 50 μm. ( c ) Ldh enzyme activity assay showed that K15-CrePR;Ldha fl / fl animals lacked this activity in the HFSCs (indicated by bracket). Scale bars, 20 μm. 
( d ) Graph showing percentage of follicles in telogen, telogen–anagen transition and anagen in K15-CrePR;Ldha +/+ mice versus K15-CrePR;Ldha fl / fl mice ( n = 225 follicles from 3 mice per genotype). Shown as mean ± s.e.m. Paired t -test was performed, P < 0.05. ( e ) Heatmap showing relative levels of glycolytic and TCA cycle metabolites extracted from Ldha +/+ HFSCs and Ldha fl / fl HFSCs and measured by LC–MS. Asterisks indicate significant difference in metabolite levels between genotypes. For e , paired t -test was performed; ∗ P < 0.05; ∗ ∗ ∗ P < 0.001; NS, P > 0.05; n = 9 mice pooled from 3 independent experiments. ( f ) Immunohistochemistry staining for Ki-67, a marker of proliferation, is absent in Ldha fl / fl HFSCs. Phospho-S6, a marker in HFSCs at the beginning of a new hair cycle, is absent in Ldha fl / fl HFSCs. Staining for Ldha protein shows specific deletion in HFSCs. Brackets indicate bulge. Staining for Sox9 shows that HFSCs are still present in the Ldha -deleted niche. Scale bars, 20 μm. ( g ) Animals with Ldha deletion in their HFSCs as controlled by Lgr5-CreER show profound defects in the entry into anagen. Right, skin pathology showing that Lgr5-CreER;Ldha fl / fl animals mostly remained in telogen. Scale bars, 100 μm. Results are representative of at least 12 animals of each genotype. ( h ) Ldh enzyme activity assay in the epidermis shows that Lgr5-CreER;Ldha fl / fl animals lacked this activity in the HFSCs. Scale bars, 20 μm. ( i ) LC–MS analysis of metabolites from the indicated mice. Data were generated from n = 3 animals per condition pooled from 3 independent experiments. Full size image Histology showed that wild-type hair follicles entered into the telogen–anagen transition typically by day 70, and this was accompanied by typical expansion of the hypodermis below ( Fig. 3b ). However, in backskin with deletion of Ldha , the hypodermis did not expand, and the telogen–anagen transition was severely abrogated ( Fig. 3b ). 
In areas of strong phenotypic penetrance, Ldh activity was severely abrogated in the HFSC compartment ( Fig. 3c ), demonstrating that the Ldha allele is critically important for Ldh activity in HFSCs and consistent with the fact that isoform a of Ldh is expressed at the highest level. Quantification of hair cycle progression across numerous animals indicated that most follicles lacking Ldha remained in telogen ( Fig. 3d ). In addition, to confirm the phenotypes, we also deleted Ldha with an independent HFSC-specific Cre strategy. Lgr5-CreER has been used for lineage tracing in a variety of adult stem cell models, and has been shown to mark cells with high regenerative capacity, including HFSCs 19 . Lgr5-CreER;Ldha fl / fl mice, treated with tamoxifen at postnatal day 50 prior to a synchronized hair cycle, also failed to activate anagen across at least 20 littermate pairs ( Fig. 3g ). In situ Ldh assay and metabolomics confirmed the successful deletion of Ldha in these animals ( Fig. 3h, i ). We also monitored the effect of loss of Ldha activity in K15 + cells over a six-month period and found that deletion of Ldha led to a mosaic, but permanent block of HFSC activation in some portions of the backskin ( Supplementary Fig. 3a ). These data confirm that Ldh activity is required for HFSC activation, and is not simply a marker of HFSCs. A closer look at these long-term Ldha deletions showed that Ldha -null HFSCs continued expressing typical markers, but lacked Ldh activity, and failed to initiate new hair cycles, while those follicles that escaped deletion continued to express Ldha and to cycle normally ( Supplementary Fig. 3b, c ). After sorting HFSCs from animals with or without Ldha deletion, LC–MS-based metabolomics analysis demonstrated that lactate levels, as well as levels of other glycolytic metabolites, were strongly reduced in the absence of Ldha ( Fig. 3e ), functional evidence that the targeting strategy was successful. 
The fact that glycolytic metabolites upstream of lactate were also suppressed suggests that HFSCs could be adapting their metabolism to account for the loss of Ldh activity. Immunostaining for markers of HFSC activation and proliferation indicated a failure of HFSC activation. Ki-67 and pS6 have been clearly demonstrated to be abundant in the HFSC niche at the start of the hair cycle 20 , and both of these markers were absent in Ldha -deleted backskin ( Fig. 3f ). Immunostaining for Ldha also confirmed successful deletion of this protein, while staining for Sox9, a marker of HFSCs, indicated that these cells remained in their niche, but just failed to activate in the absence of Ldha ( Fig. 3f ). Induction of the hair cycle is also thought to be regulated by signalling from the Shh, Wnt and Jak–Stat pathways. We assayed each of these by IHC in normal or Ldha -deletion follicles and found that in general these pathways were not activated in Ldha -null HFSCs that failed to enter a telogen–anagen transition ( Supplementary Fig. 3d ). To determine whether induction of lactate production could affect HFSC activation or the hair cycle, we crossed K15-CrePR animals to those floxed for mitochondrial pyruvate carrier 1 ( Mpc1 ) ( K15-CrePR;Mpc1 fl / fl ). Mpc1, as a heterodimer with Mpc2, forms the mitochondrial pyruvate carrier MPC, a transporter on the inner mitochondrial membrane required for pyruvate entry into the mitochondria 21 . Loss of function of Mpc1 has been shown to drive lactate production through enhanced conversion of pyruvate to lactate by Ldh 22 . In animals with Mpc1 deletion in HFSCs, we observed a strong acceleration of the ventral and dorsal hair cycles with all the typical features of a telogen–anagen transition ( Fig. 4a ) ( n = 12 littermate pairs). Mifepristone-treated K15-CrePR;Mpc1 fl / fl animals were the only ones to show any signs of dorsal anagen by day 70. Western blotting on sorted HFSCs validated the loss of Mpc1 protein ( Fig. 4b ). 
Importantly, purified HFSCs lacking Mpc1 showed a strong induction of Ldh activity ( Fig. 4c ). Quantification of the dorsal hair cycle across three pairs of littermates showed a strong induction of anagen in backskin lacking Mpc1 ( Fig. 4d , right), and histology showed that the anagen induction was normal in appearance with a typical hypodermal expansion ( Fig. 4d ). Immunostaining demonstrated the induction in Mpc1 -null HFSCs of various markers of hair cycle activation such as Ki-67 and pS6, while Sox9 expression was unaffected ( Fig. 4e ). Long-term deletion of Mpc1 did not lead to aberrant follicles or exhaustion of HFSCs as judged by pathology and staining for Sox9 ( Supplementary Fig. 4a ). Furthermore, deletion of Mpc1 with Lgr5-CreER showed a very similar phenotype as deletion with K15-CrePR ( Fig. 4f, g ), validating the fact that deletion of this protein in HFSCs leads to their activation ( n = 12 pairs of littermates). Finally, immunofluorescence for the Ires-GFP of the Lgr5-CreER transgene along with Ki-67 and lineage tracing with K15-CrePR;Mpc1 fl/fl ;lsl-Tomato mice also demonstrated that the HFSCs were indeed proliferative following induction of Mpc1 deletion by tamoxifen or mifepristone ( Supplementary Fig. 4b ). Figure 4: Deletion of Mpc1 increases lactate production and accelerates the activation of HFSCs. ( a ) Mpc1 fl / fl animals show pigmentation and hair growth, consistent with entry into the anagen cycle at 8.5 weeks, whereas Mpc1 +/+ animals do not show dorsal pigmentation and hair growth this early. Animals shown are representative of at least 12 animals of each genotype. ( b ) FACS isolation of HFSC bulge populations in Mpc1 +/+ versus Mpc1 fl / fl mice followed by western blotting shows successful deletion of Mpc1 protein in the stem cell niche. β-actin is a loading control. ( c ) Plate-reader assay for Ldh activity on sorted HFSC populations shows elevated activity in Mpc1 fl / fl HFSCs compared with Mpc1 +/+ HFSCs. 
Each bar represents the average signal for each genotype where n = 9 mice pooled from 3 independent experiments. Shown as mean ± s.e.m. Paired t -test was performed, P < 0.05. ( d ) Histology on wild-type versus Mpc1 deletion skin shows induction of anagen in absence of Mpc1. Scale bars, 100 μm. Quantification of phenotype at right shows percentage of dorsal follicles in telogen, telogen–anagen transition and anagen in Mpc1 +/+ mice versus Mpc1 fl / fl mice ( n = 250 follicles from 3 mice per genotype). Shown as mean ± s.e.m. Paired t -test was performed, P < 0.05. ( e ) Immunohistochemistry staining for Ki-67, a marker of proliferation that is active in HFSCs only at the beginning of a new hair cycle, is present in Mpc1 fl / fl HFSCs only at 8.5 weeks, consistent with their accelerated entry into a new hair cycle. Phospho-S6, another marker that is active in HFSCs only at the beginning of a new hair cycle, is present in Mpc1 fl / fl HFSCs. Staining for Sox9 shows that HFSCs are present in the Mpc1-deleted niche. Images taken at × 60 magnification. ( f ) Deletion of Mpc1 in mice bearing the Lgr5-CreER allele shows strong induction of the hair cycle. Results are representative of at least 9 animals per genotype. ( g ) Quantification of pigmentation in the indicated genotypes across three independent litters ( n = 5 mice per genotype). Unprocessed original scans of blots are shown in Supplementary Fig. 6 . Full size image On the other hand, deletion of Mpc1 in the top of the follicle (infundibulum, sebaceous gland progenitors) and a limited number of interfollicular cells with Lgr6-CreER (ref. 23 ) did not appear to affect the hair cycle ( Lgr6-CreER;Mpc1 fl / fl ) ( n = 10 littermate pairs) or general skin homeostasis over at least 2 months ( Supplementary Fig. 4c ). Ldh activity assay on Lgr6 + cells sorted from wild-type or deletion skin demonstrated that the Mpc1 deletion was effective ( Supplementary Fig. 4d ). 
Together, these results indicate that increasing lactate production through the blockade of pyruvate into the TCA cycle has a strong effect on the ability of HFSCs, but not other cells in the hair follicle, to become activated to initiate a new hair cycle. UK-5099 is a well-established pharmacological inhibitor of the mitochondrial pyruvate carrier and is known to promote lactate production as a result in various settings 24 . Topical treatment of animals in telogen (day 50) with UK-5099 led to a robust acceleration of the hair cycle, as well as minor hyperproliferation of the interfollicular epidermis ( Fig. 5a ). Quantification of the hair cycle across at least 6 pairs of animals (vehicle versus UK-5099) indicated a strong acceleration of the hair cycle, in as few as 6–9 days ( Fig. 5b ). Similar to genetic deletion of Mpc1 , pharmacological blockade of the mitochondrial pyruvate carrier by UK-5099 for 48 h during telogen promoted increased Ldh activity in HFSCs and the interfollicular epidermis, consistent with increased capacity for lactate production ( Fig. 5c ). Finally, metabolomic analysis demonstrated that topical application of UK-5099 increases total levels of lactate in sorted HFSCs ( Fig. 5d ). Figure 5: Pharmacological inhibition of Mpc1 promotes HFSC activation. ( a ) Animals treated topically with UK-5099 (20 μM) show pigmentation and hair growth, indicative of entry into anagen, after 8 days of treatment. Full anagen, indicated by a full coat of hair, is achieved after 14 days of treatment. Mice treated topically with vehicle control do not show pigmentation nor hair growth even after 12 days of treatment. Right, skin pathology showing that UK-5099 animals enter an accelerated anagen at 8 weeks typified by down growth of the follicle and hypodermal thickening, while vehicle control-treated animals showed neither and remained in telogen. Images shown are representative of at least 14 mice from 7 independent experiments. Scale bars, 100 μm. 
( b ) Graph showing time to observed phenotype in vehicle- versus UK-5099-treated mice. n = 6 mice per condition. Shown as mean ± s.e.m. ( c ) Ldh enzyme activity assay in the epidermis shows strong activity in HFSCs in vehicle control- and UK-5099-treated animals. Ldh enzyme activity also seen in interfollicular epidermis of UK-5099-treated animals. Ldh activity is indicated by purple stain; pink is nuclear fast red counterstain. Scale bars, 50 μm. ( d ) Metabolomic analysis of lactate on HFSCs isolated from UK-5099-treated skin for 48 h; each bar represents the average signal for each condition where n = 9 mice pooled from 3 independent experiments. Shown as mean ± s.e.m. Paired t -test was performed, P < 0.05. Full size image Because alteration of lactate production in HFSCs appeared to regulate their activation, we attempted to identify other small molecules that could take advantage of these findings to induce the hair cycle. Ldha is known to be transcriptionally regulated by Myc, which has been shown to play an important role in HFSC activation and the hair cycle 25 , 26 , 27 . RNA-seq on sorted HFSCs indicated that Myc is induced during the telogen–anagen transition ( Fig. 6a ). Western blotting for both c-Myc and n-Myc in sorted HFSCs versus total epidermis showed a strong increase in Myc protein in the nuclei of HFSCs ( Fig. 6b ). Figure 6: Stimulation of Myc levels promotes HFSC activation. ( a ) RNA-seq data from sorted HFSCs in telogen and telogen–anagen transition 35 . n = 3 mice per time point. Shown as mean ± s.e.m. Paired t -test was performed, P < 0.05. ( b ) Nuclear protein fractions show expression of n-Myc and c-Myc in HFSCs compared with epidermal cells. H3k27ac is a loading control for nuclear proteins. ( c ) Total protein preparations from skin treated with two topical doses of RCGD423 (50 μM) show increased c-Myc, n-Myc and Ldha protein levels compared with animals that received two topical doses of vehicle control. 
β-actin is a loading control. ( d ) Plate-reader assay for Ldh enzyme activity in the epidermis. Each bar represents the average signal for each condition where n = 9 mice pooled from 3 independent experiments. Shown as mean ± s.e.m. Paired t -test was performed, P < 0.05. ( e ) Ldh enzyme activity assay in the epidermis in vehicle control- and RCGD423-treated animals. Scale bars, 50 μm. ( f ) Metabolomic analysis of lactate on HFSCs isolated from RCGD423-treated skin for 48 h. Each bar represents the average signal for each condition where n = 9 mice pooled from 3 independent experiments. Shown as mean ± s.e.m. Paired t -test was performed, P < 0.05. ( g ) Immunohistochemistry staining for Ki-67 and phospho-Stat3, a downstream marker of RCGD423 activity. Scale bars, 20 μm. ( h ) Animals treated with RCGD423 (50 μM) show pigmentation and hair growth, indicative of entry into anagen, after 5 doses. Images shown are representative of at least 14 mice from 7 independent experiments. Scale bars, 100 μm. Quantification of phenotype showing time to observed phenotype in vehicle- versus RCGD423-treated mice. n = 6 mice per condition. Shown as mean ± s.e.m. Unprocessed original scans of blots are shown in Supplementary Fig. 6 . Full size image Taking advantage of a molecule with the robust ability to promote Myc expression through binding of GP130 and activation of Jak/Stat signalling, we topically treated mice for 48 h to determine the effect of RCGD423 on Stat signalling and Myc expression. We found that RCGD423 induced levels of both c-Myc and n-Myc as well as Ldha ( Fig. 6c ), consistent with activation of Stat3 signalling leading to induction of Myc and Ldha protein expression. In vitro measurement of Ldh activity on lysates from total epidermis showed an increase in activity by RCGD423 ( Fig. 6d ). 
In situ staining for Ldh activity showed a strong induction following treatment with RCGD423 in both the epidermis and even in the dermis, as expected with topical treatment ( Fig. 6e ). LC–MS-based metabolomics on epidermis isolated from vehicle or RCGD423 showed a large increase in lactate as well, even after just 48 h ( Fig. 6f ). RCGD423 binds to GP130, a co-receptor for Jak–Stat signalling, and activates Stat3. We found that Stat3 was activated in HFSCs by RCGD423 after topical treatment by immunostaining with phospho-Stat3 antibody ( Fig. 6g ). This also correlated with induction of Ki-67 in HFSCs in the same tissue ( Fig. 6g ). IHC for pStat1 and pStat5 suggested that RCGD423 does not dramatically affect these other Stat family members ( Supplementary Fig. 5 ). Topical treatment of animals in telogen (day 50) with RCGD423 led to a robust acceleration of the hair cycle ( Fig. 6h ), as well as minor hyperproliferation of the interfollicular epidermis. DISCUSSION Together, these data demonstrate that the production of lactate, through Ldha, is important for HFSC activation, and that HFSCs may maintain a high capacity for glycolytic metabolism at least in part through the activity of Myc. Our data also demonstrate that a genetic or pharmacological disruption of lactate production can be exploited to regulate the activity of HFSCs. It is possible that these results have implications for adult stem cells in other tissues. In an accompanying manuscript, the Rutter laboratory describes a role for Mpc1 in adult intestinal stem cells 28 . Consistent with data presented here on HFSCs, deletion of Mpc1 led to an increase in the ability of intestinal stem cells to form organoids. Previous work showed that haematopoietic stem cells (HSCs) show higher glycolytic activity, but disruption of glycolysis in the HSCs led to activation of their cycling 29 , 30 , 31 , 32 , contrary to what we find with HFSCs. 
While the distinction could be biological, there are technical reasons for potential discrepancies as well. First, there are no Cre transgenic lines that can delete genes specifically in HSCs, as opposed to HFSCs (K15 + or Lgr5 + ). Second, to block glycolysis in HSCs, the previous study deleted the PDK enzyme, which would only indirectly regulate glycolysis, whereas here we deleted the Ldh enzyme specifically. In addition, HSCs and HFSCs are functionally distinct in that HFSCs cycle only at well-defined moments (telogen–anagen transition), while the timing of HSC activation is not as well established or synchronized. Instead, we hypothesize that an increased glycolytic rate in HFSCs allows them to respond quickly to the barrage of cues that orchestrate the onset of a new hair cycle. This has also been proposed to be the case for neural stem cells solely on the basis of RNA-seq data 33 , but as of yet no in vivo functional evidence exists to confirm this possibility. The fact that small molecules could be used to promote HFSC activation suggests that they could be useful for regenerative medicine. This is not only the case for hair growth, but potentially for wound healing as well. While HFSCs do not normally contribute to the interfollicular epidermis, in a wound setting, HFSCs migrate towards the wound site and make a contribution, as measured by lineage tracing 34 . Whether activation of Ldh enzyme activity by Mpc1 inhibition (UK-5099) or Myc activation (RCGD423) can promote wound healing will be the subject of intense effort going forward. Methods Mice. Several of the animal strains came from Jackson Labs ( K15-CrePR , Lgr5-CreER and Lgr6-CreER ), while others were generated in the Rutter ( Mpc1 fl / fl ) and Seth laboratories 17 ( Ldha fl / fl ) and maintained under conditions set forth by IACUC and UCLA ARC.
For experiments that include analysis of the telogen stage of the hair cycle, animals were harvested at postnatal day 50; for the telogen–anagen transition, animals were harvested at day 70; and for anagen, animals were harvested at postnatal day 90. For experiments that include analysis of transgenic animals, K15-CrePR animals were shaved and treated by injection of mifepristone, and Lgr5-CreER and Lgr6-CreER animals were shaved and treated with tamoxifen (10 mg ml −1 dissolved in sunflower seed oil, 2 mg per day for 3 days) during telogen (postnatal day 50), and monitored for hair regrowth following shaving. For Figs 5 and 6 , wild-type C57BL/6J animals were shaved at postnatal day 50 and treated topically with Transderma Plo Gel Ultramax Base (TR220) (vehicle), UK-5099 (Sigma PZ0160) (20 μM) or RCGD423 (50 μM) for the indicated periods of time. Both male and female animals were used in this study in approximately equal numbers with no apparent difference in phenotype between sexes. All animal experiments were performed in compliance with ethical guidelines and approved by the UCLA Animal Research Committee (ARC) according to IACUC guidelines in facilities run by the UCLA Department of Laboratory Animal Medicine (DLAM). Histology, immunostaining and immunoblotting. Tissues were isolated from the indicated genotypes and embedded fresh in OCT compound for frozen tissue preparations, or fixed overnight in 4% formalin and embedded in paraffin. For frozen tissue, sectioning was performed on a Leica 3200 Cryostat, and the sections were fixed for 5 min in 4% paraformaldehyde. Paraffin-embedded tissue was sectioned, de-paraffinized and prepared for histology. All sections prepared for staining were blocked in staining buffer containing appropriate control IgG (goat, rabbit and so on).
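The tamoxifen regimen above (2 mg per day from a 10 mg ml−1 stock, for 3 days) implies a simple dose-volume calculation. A minimal sketch; the helper name is ours, for illustration only:

```python
# Sketch of the dose-volume arithmetic implied by the tamoxifen regimen
# above: 2 mg per day from a 10 mg/ml stock, for 3 days. The helper name
# is ours, not from the paper.

def injection_volume_ml(dose_mg, stock_mg_per_ml):
    """Volume of stock needed to deliver the requested dose."""
    return dose_mg / stock_mg_per_ml

daily_ml = injection_volume_ml(2, 10)     # 0.2 ml per day
total_ml = round(daily_ml * 3, 3)         # 0.6 ml over the 3-day course
print(daily_ml, total_ml)                 # -> 0.2 0.6
```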
Immunohistochemistry was performed on formalin-fixed paraffin-embedded tissue with citrate or Tris buffer antigen retrieval with the following antibodies: Ki-67 (Abcam ab16667, 1:50), p-S6 (Cell Signaling CST2215, 1:50), Sox9 (Abcam ab185230, 1:1,000), Ldha (Abcam ab47010, 1:100), Ldh (Abcam ab125683, 1:100), p-Stat3 (Abcam ab68153, 1:200), p-Stat1 (Abcam ab109461, 1:200), p-Stat5 (Abcam ab32364; 1:50), Gli3 (Abcam ab6050; 1:100), β-catenin (Abcam ab32572; 1:500). The DAKO EnVision + HRP Peroxidase System (Dako K400911-2) and Dako AEC Substrate Chromogen (Dako K346430-2) were used for detection. Images were collected on an Olympus BX43 Upright Microscope and Zeiss Model Axio Imager M1 Upright Fluorescence Microscope. Protein samples for western blots and enzymatic assays were extracted from FACS-sorted epidermal populations in RIPA lysis buffer (Pierce) with Halt protease and phosphatase inhibitors (Thermo-Fisher) and precipitated in acetone for concentration. The following antibodies were used: β-actin (Abcam ab8227; 1:1,000), β-actin (Santa Cruz sc-47778; 1:1,000), c-Myc (Abcam ab32072; 1:1,000), n-Myc (Santa Cruz sc-53993; 1:200), H3K27Ac (Abcam ab177178; 1:200), Mpc1 (Sigma HPA045119). Cell isolation and FACS. Whole dorsal and ventral mouse skin was excised and floated on trypsin (0.25%) for 1 h at 37 °C or overnight at 4 °C. The epidermis was separated from the dermis by scraping and epidermal cells were mechanically dissociated using a pipette. Epidermal cells were filtered with a 70 μm cell strainer into 20% BCS, collected at 300 g and washed twice with PBS. The cells were then filtered through a 40 μm cell strainer and stained for FACS processing with CD34 Monoclonal Antibody (RAM34), FITC, eBioscience (catalogue no. 11-0341-82) and CD49d (Integrin alpha 4) Monoclonal Antibody (R1-2), PE, eBioscience (catalogue no. 12-0492-81). The gating strategy is shown in Supplementary Fig. 1b . Cells were sorted using BD FACSAria high-speed cell sorters.
Single-positive and double-positive populations were collected into 20% BCS, RIPA lysis buffer (Thermo Scientific, Pierce), or 80% methanol for enzymatic assays, western blots or mass spectrometry analyses, respectively. Cell lines. No cell lines were used in this study. Plate-reader Ldh assay. Ldh activity was determined in cell lysates by measuring the formation of soluble XTT formazan, in direct relation to the production of NADH over time, at 475 nm at 37 °C using a Synergy-MX plate reader (Biotek Instruments). Lysates were prepared in RIPA Buffer (Thermo Scientific Pierce). Protein content was determined using the BCA Protein Assay Kit (Thermo Scientific Pierce). Ten micrograms of protein was used per well. The staining solution contained 50 mM Tris buffer pH 7.4, 150 μM XTT (Sigma), 750 μM NAD (Sigma), 80 μM phenazine methosulfate (Sigma) and 10 mM of the substrate lactate (Sigma). Alternatively, Ldh activity was determined in cell lysates by measuring the change in absorbance of NADH, the common substrate or product of the reaction, over time at 340 nm at 25 °C using a Synergy-MX plate reader (Biotek Instruments). In situ Ldh assay. Cryostat sections of mouse skin were briefly fixed (4% formalin for 5 min), washed with PBS pH 7.4, and then incubated with the appropriate solution for LDH activity. Staining medium contained 50 mM Tris pH 7.4, 750 μM NAD (Sigma), 80 μM phenazine methosulfate (Sigma), 600 μM nitrotetrazolium blue chloride (Sigma), 10 mM MgCl2 (Sigma) and 10 mM of the substrate lactate (Sigma). Slides were incubated with staining medium at 37 °C until they reached the desired intensity, then counterstained using Nuclear Fast Red (Vector) and mounted using VectaMount (Vector). Control reactions were performed by using incubation medium that lacked the substrate mixture or NAD. Mass spectrometry-based metabolomics analysis. The experiments were performed as described in ref. 17 .
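The kinetic read-out described above (absorbance followed over time in a plate reader) reduces to estimating an initial rate. A minimal sketch with hypothetical readings, not the paper's data; a real analysis would also convert the slope into enzyme-activity units via the chromophore's extinction coefficient, which is omitted here:

```python
# Sketch: estimating Ldh activity as the slope of absorbance versus time,
# as in a kinetic plate-reader assay. The readings below are hypothetical;
# the assay above follows XTT formazan at 475 nm or NADH at 340 nm.

def slope(times, values):
    """Ordinary least-squares slope of values against times."""
    n = len(times)
    mean_t = sum(times) / n
    mean_v = sum(values) / n
    num = sum((t - mean_t) * (v - mean_v) for t, v in zip(times, values))
    den = sum((t - mean_t) ** 2 for t in times)
    return num / den

# Hypothetical readings every 30 s over 3 min (absorbance units)
times = [0, 30, 60, 90, 120, 150, 180]              # seconds
a475 = [0.10, 0.16, 0.22, 0.28, 0.34, 0.40, 0.46]   # absorbance

rate_per_min = slope(times, a475) * 60   # absorbance units per minute
print(round(rate_per_min, 3))            # -> 0.12
```

Rates computed this way can then be normalized to the 10 μg protein loaded per well to compare activity across samples.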
To extract intracellular metabolites, FACS-sorted cells were briefly rinsed with cold 150 mM ammonium acetate (pH 7.3), followed by addition of 1 ml cold 80% methanol on dry ice. Cell suspensions were transferred into Eppendorf tubes and 10 nmol D/L-norvaline was added. After rigorous mixing, the suspension was pelleted by centrifugation (18,000 g , 4 °C). The supernatant was transferred into a glass vial, and metabolites were dried down under vacuum and resuspended in 70% acetonitrile. For the mass spectrometry-based analysis of the sample, 5 μl was injected onto a Luna NH2 (150 mm × 2 mm, Phenomenex) column. The samples were analysed with an UltiMate 3000RSLC (Thermo Scientific) coupled to a Q Exactive mass spectrometer (Thermo Scientific). The Q Exactive was run with polarity switching (+3.50 kV/−3.50 kV) in full scan mode with an m/z range of 65–975. Separation was achieved using mobile phase A (5 mM NH4AcO, pH 9.9) and mobile phase B (ACN). The gradient started at 15% A, increased to 90% A over 18 min, was held isocratic for 9 min, and returned to the initial 15% A over 7 min. Metabolites were quantified with TraceFinder 3.3 using accurate mass measurements (≤3 ppm) and retention times. Normalized metabolite data are available at figshare.com ( ). Statistics and reproducibility. Experiments were performed on male and female animals in approximately equal numbers with no apparent difference in phenotype between sexes. All phenotypes described are representative of a minimum of n = 3 littermate pairs as indicated in the description of each experiment. For analysis of the hair regrowth phenotype no statistical measure was used to determine the sample size beforehand, nor were statistics used to measure effects, as the results were essentially positive or negative as represented in the figures. The results described include data from all treated animals. Investigators were not blinded to allocation during the experimental data collection. Experiments were not randomized.
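Metabolite identification above relies on accurate mass within ≤3 ppm plus retention time. A minimal sketch of the ppm-window half of such matching; the theoretical m/z values in the toy library are illustrative assumptions, not values taken from the paper:

```python
# Sketch: accurate-mass matching within a 3 ppm window, the first half of
# the "accurate mass + retention time" identification described above.
# The theoretical m/z values below are illustrative assumptions.

PPM_TOL = 3.0

def ppm_error(measured, theoretical):
    """Signed mass error in parts per million."""
    return (measured - theoretical) / theoretical * 1e6

def match(measured, library, tol=PPM_TOL):
    """Names of library entries whose theoretical m/z lies within tol ppm."""
    return [name for name, mz in library.items()
            if abs(ppm_error(measured, mz)) <= tol]

library = {
    "lactate [M-H]-": 89.0244,    # illustrative value
    "pyruvate [M-H]-": 87.0088,   # illustrative value
}

print(match(89.0246, library))    # ~2.2 ppm -> ['lactate [M-H]-']
print(match(89.0250, library))    # ~6.7 ppm -> []
```

In practice the retention-time constraint is applied on top of the mass window to separate isobaric metabolites.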
All results shown are representative images from at least three independently treated animals, and genotyping was performed both before and after animal treatment for confirmation. For graphs, all comparisons are by Student's two-tailed unpaired t -test; bars or lines indicate the mean and error bars indicate the standard error of the mean (s.e.m.). Data availability. Previously published transcriptomics data that were reanalysed here are available under accession codes GSE67404 and GSE51635 (refs 35 , 36 ). Normalized metabolite data are available at figshare.com ( ). All other data supporting the findings of this study are available from the corresponding author on reasonable request. Additional Information Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
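The summary statistics reported throughout (mean ± s.e.m., Student's t-tests on littermate pairs) can be sketched in a few lines of pure Python. The measurements below are hypothetical, not data from the study:

```python
import math
from statistics import mean, stdev

# Sketch: mean +/- s.e.m. and a paired t-statistic, as reported in the
# figure legends. The measurements below are hypothetical, not study data.

def sem(xs):
    """Standard error of the mean."""
    return stdev(xs) / math.sqrt(len(xs))

def paired_t(xs, ys):
    """Paired t-statistic for matched samples (e.g. littermate pairs)."""
    d = [x - y for x, y in zip(xs, ys)]
    return mean(d) / sem(d)

wild_type = [1.0, 1.2, 0.9]   # hypothetical Ldh activity, arbitrary units
knockout = [0.4, 0.5, 0.3]    # matched littermates

print(round(sem(wild_type), 3))                  # -> 0.088
print(round(paired_t(wild_type, knockout), 1))   # -> 19.0
```

Converting the t-statistic to a p-value requires the t-distribution CDF (e.g. scipy.stats.ttest_rel), omitted here to keep the sketch stdlib-only.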
UCLA researchers have discovered a new way to activate the stem cells in the hair follicle to make hair grow. The research, led by scientists Heather Christofk and William Lowry, may lead to new drugs that could promote hair growth for people with baldness or alopecia, which is hair loss associated with such factors as hormonal imbalance, stress, aging or chemotherapy treatment. The research was published in the journal Nature Cell Biology. Hair follicle stem cells are long-lived cells in the hair follicle; they are present in the skin and produce hair throughout a person's lifetime. They are "quiescent," meaning they are normally inactive, but they quickly activate during a new hair cycle, which is when new hair growth occurs. The quiescence of hair follicle stem cells is regulated by many factors. In certain cases they fail to activate, which is what causes hair loss. In this study, Christofk and Lowry, of the Eli and Edythe Broad Center of Regenerative Medicine and Stem Cell Research at UCLA, found that hair follicle stem cell metabolism is different from that of other cells of the skin. Cellular metabolism involves the breakdown of the nutrients needed for cells to divide, make energy and respond to their environment. The process of metabolism uses enzymes that alter these nutrients to produce "metabolites." As hair follicle stem cells consume the nutrient glucose—a form of sugar—from the bloodstream, they process the glucose to eventually produce a metabolite called pyruvate. The cells can then either send pyruvate to their mitochondria—the part of the cell that creates energy—or convert pyruvate into another metabolite called lactate. 
"Our observations about hair follicle stem cell metabolism prompted us to examine whether genetically diminishing the entry of pyruvate into the mitochondria would force hair follicle stem cells to make more lactate, and if that would activate the cells and grow hair more quickly," said Christofk, an associate professor of biological chemistry and molecular and medical pharmacology. The research team first blocked the production of lactate genetically in mice and showed that this prevented hair follicle stem cell activation. Conversely, in collaboration with the Rutter lab at the University of Utah, they increased lactate production genetically in the mice, and this accelerated hair follicle stem cell activation and the hair cycle. "Before this, no one knew that increasing or decreasing the lactate would have an effect on hair follicle stem cells," said Lowry, a professor of molecular, cell and developmental biology. "Once we saw how altering lactate production in the mice influenced hair growth, it led us to look for potential drugs that could be applied to the skin and have the same effect." The team identified two drugs that, when applied to the skin of mice, influenced hair follicle stem cells in distinct ways to promote lactate production. The first drug, called RCGD423, activates a cellular signaling pathway called JAK-Stat, which transmits information from outside the cell to the nucleus of the cell. The research showed that JAK-Stat activation leads to the increased production of lactate and this in turn drives hair follicle stem cell activation and quicker hair growth. The other drug, called UK5099, blocks pyruvate from entering the mitochondria, which forces the production of lactate in the hair follicle stem cells and accelerates hair growth in mice. "Through this study, we gained a lot of interesting insight into new ways to activate stem cells," said Aimee Flores, a predoctoral trainee in Lowry's lab and first author of the study. 
"The idea of using drugs to stimulate hair growth through hair follicle stem cells is very promising given how many millions of people, both men and women, deal with hair loss. I think we've only just begun to understand the critical role metabolism plays in hair growth and stem cells in general; I'm looking forward to the potential application of these new findings for hair loss and beyond." The use of RCGD423 to promote hair growth is covered by a provisional patent application filed by the UCLA Technology Development Group on behalf of UC Regents. The use of UK5099 to promote hair growth is covered by a separate provisional patent filed by the UCLA Technology Development Group on behalf of UC Regents, with Lowry and Christofk as inventors. The experimental drugs described above were used in preclinical tests only and have not been tested in humans or approved by the Food and Drug Administration as safe and effective for use in humans.
10.1038/ncb3575
Physics
Scientists count microscopic particles without a microscope
Mikhail V. Rybin et al, Transition from two-dimensional photonic crystals to dielectric metasurfaces in the optical diffraction with a fine structure, Scientific Reports (2016). DOI: 10.1038/srep30773 Journal information: Scientific Reports
http://dx.doi.org/10.1038/srep30773
https://phys.org/news/2016-08-scientists-microscopic-particles-microscope.html
Abstract We study experimentally the fine structure of the optical Laue diffraction from two-dimensional periodic photonic lattices. The periodic photonic lattices with the C 4 v square symmetry, orthogonal C 2 v symmetry and hexagonal C 6 v symmetry are composed of submicron dielectric elements fabricated by the direct laser writing technique. We observe surprisingly strong optical diffraction from a finite number of elements, which provides an excellent tool to determine not only the symmetry but also the exact number of particles in the finite-length structure and the sample shape. Using different samples with orthogonal C 2 v symmetry and varying the lattice spacing, we observe experimentally a transition between the regime of multi-order diffraction, which is typical for photonic crystals, and the regime where only the zero-order diffraction can be observed, which is a clear fingerprint of dielectric metasurfaces characterized by effective parameters. Introduction Diffraction of waves of different nature (e.g., X-rays, electrons, neutrons, photons, etc.) is a common phenomenon underlying many experimental tools employed for the study of crystalline structures and the analysis of physical properties of ordered bulk materials 1 . Nowadays, diffraction of electrons is widely used to detect the number of stacking sheets of planar two-dimensional (2D) materials such as graphene 2 , carbon nanofilms 3 , transition metal dichalcogenides 4 , etc. For three-dimensional (3D) photonic crystals, when the period of the spatial modulation of the dielectric permittivity becomes comparable with the wavelength of light, Bragg diffraction gives rise to the appearance of bandgaps in the energy spectrum 5 , 6 , 7 . An instructive example is the analysis of optical Bragg diffraction from different opal-based photonic structures, including thin opal films 8 , 9 , 10 , bulk samples of synthetic opals 11 , 12 , 13 and opal-based colloidal structures 14 , 15 . 
Opals are built up of quasi-spherical particles of amorphous silica a -SiO 2 , each having a rather hard shell and a porous core 16 . Opals possess bandgaps in the visible range because the constitutive a -SiO 2 particles have typical sizes of some hundreds of nanometers. This provides a unique way for the direct observation of angle-resolved diffraction patterns in the visible spectral range. Thin photonic films composed of several layers have been studied by spectroscopic techniques. In particular, the results of optical studies of opal films composed of 6 layers are given in ref. 10 . Also, diffraction patterns from woodpile films consisting of 20 layers were studied depending on the internal refractive index contrast Δ n in ref. 17 . A detailed picture of the transformation of the optical diffraction patterns during the transition from thin opal films to 3D opal-based photonic crystals was studied in refs 7 and 10 . In spite of a large number of experimental studies of periodic structures 18 , 19 , 20 , 21 , diffraction of light from two-dimensional planar photonic structures composed of just a single layer (the so-called metasurface) and several elements in the plane has not, to the best of our knowledge, been studied experimentally in detail. We emphasize that diffraction is a unique tool for studying the optical properties of true two-dimensional structures because other methods, such as reflection and transmission spectroscopy, produce a very weak response from a single sub-micron layer. In contrast, here we demonstrate that 2D structures composed of a finite number of sub-micron dielectric scatterers give rise to surprisingly strong optical diffraction that is visible to the naked eye on a screen placed just behind the metasurface sample. 
To further deepen our understanding of light scattering in periodic media, a number of challenging problems can be formulated: What are the novel effects in optical diffraction from finite-size 2D photonic structures, beyond the well-known results of X-ray, neutron and photon diffraction from thin films and 3D structures? Can one obtain direct information from the light scattering about the number and spatial distribution of sub-micron particles? And finally, the most intriguing question: is it possible to observe in optical diffraction a transition between 2D photonic films and metasurfaces? The study of metasurfaces has attracted much attention in recent years 22 , 23 due to their many useful functionalities and potentially important applications, ranging from simple elements of flat optics for unusual beam steering 24 , 25 to high-efficiency planar sensing devices 26 and other types of light control 27 , 28 in both linear and nonlinear regimes 29 . We note that a transition in light scattering regimes from photonic crystals to all-dielectric metamaterials and a corresponding phase diagram were studied in ref. 30 . In this study, we use the direct laser writing (DLW) 31 , 32 , 33 technique to fabricate true 2D photonic structures, or metasurfaces, as periodic arrays of submicron dielectric particles or their inverted counterparts with the square C 4 v , orthogonal C 2 v and hexagonal C 6 v lattice symmetry. We study experimentally the optical diffraction from the fabricated direct and inverted finite-size 2D structures and observe directly (on a screen placed after the sample) a variety of diffraction patterns of exceeding beauty. The fine structure of the patterns allows one to detect the exact number of scatterers in any direction. 
Using a set of anisotropic samples with orthogonal C 2 v lattice symmetry, we demonstrate both experimentally and theoretically a transition from the multi-order diffraction regime, which is characteristic of photonic crystals, to the regime where only the zero-order diffraction is observed, a fingerprint of metasurfaces characterized by effective parameters. Results Sample fabrication The problem of fabrication of 2D photonic structures of almost arbitrary shape can be solved with the recently developed DLW method. This technology is based on the nonlinear two-photon polymerization of a photosensitive material in the focus of a femtosecond laser beam. The high resolution of the technique is due to the intensity-threshold character of the polymerization process, which occurs in a region significantly smaller than the size of the focused beam. This method makes it possible to create a dielectric structure with a transverse resolution below 100 nm 34 . Using the DLW technique, we fabricated a variety of high-quality finite-size 2D photonic structures with submicron-scale features. To realize the DLW approach, we use the apparatus fabricated at the Laser Zentrum Hannover (Germany) and a train of femtosecond pulses centered at around 780 nm wavelength and at a repetition frequency of 80 MHz (12.5 ns between adjacent pulses). These pulses are derived from a 50 fs TiF-100F laser (Avesta-Project, Russia). Photonic structures are fabricated using a hybrid organic–inorganic material based on zirconium propoxide with an Irgacure 369 photoinitiator (Ciba Specialty Chemicals Inc., USA). We fabricate both direct and inverted dielectric photonic structures as 2D periodic arrays of scatterers with the square C 4 v , orthogonal C 2 v and hexagonal C 6 v lattice symmetry. The samples with the square and orthogonal lattice symmetry were fabricated with a square or rectangular shape. 
The direct photonic structures are composed of dielectric particles with an ellipsoid-like shape (called ‘voxels’ in what follows) with a typical size of 100–300 nm in the surface plane. By “inverted photonic structure” we mean a structured thin dielectric film with an array of holes. The number of scatterers varied from tens to tens of thousands. The lattice parameters varied between samples in the range of 0.5 μm . Examples of images of both direct and inverted photonic structures with different symmetries, obtained with the help of a scanning electron microscope, are presented in Fig. 1 . Figure 1 Examples of fabricated structures shown with SEM images: ( a ) direct square structure (10 × 10 voxels, a 1 = a 2 = 1 μm), ( b ) inverted rectangular structure with square symmetry (10 × 20 holes, a 1 = a 2 = 1 μm), ( c ) direct hexagonal structure ( ), ( d ) inverted hexagonal structure ( a = 1.5 μm). ( e ) Schematic of the zero-order ( n = 0) and first-order ( n = ±1) Laue diffraction from the horizontally and vertically oriented chains of scatterers and from the structure with square symmetry composed of both types of chains in the case a 1 = a 2 . Diffraction patterns on a flat screen are shown by thick lines. Scattered light is shown in different colors for clarity. ( f ) Experimental pattern for diffraction of monochromatic light (λ = 0.53 μm) from an inverted structure of square symmetry (100 × 100 holes, a 1 = a 2 = 1 μm) observed on a flat screen positioned behind the sample. The main diffraction maxima are marked with pairs of diffraction indices ( n 1 , n 2 ). ( g ) Schematic of the zero- and first-order Laue diffraction from the hexagonal structure. ( h ) Experimental pattern for zero-, first- and second-order Laue diffraction of monochromatic light (λ = 0.53 μm) from an inverted hexagonal structure ( a = 1.5 μm) observed on a flat screen positioned behind the sample.
Fine structure in diffraction patterns To analyze the fine structure of the diffraction patterns, we first consider the scattering from a one-dimensional (1D) linear chain of scatterers lying along a 1 ; for this we set N 2 = 1 in Eq. (4) in Methods. The positions of the strong 1D diffraction maxima in the square of the structure factor modulus | S ( q )| 2 (called ‘the main maxima’ in what follows), corresponding to the condition of constructive interference, are determined in the limit sin( qa 1 /2) → 0, which yields qa 1 = ( k i − k s ) a 1 = 2 πn , where n is an integer that enumerates the diffraction order, θ i is the incidence angle between the wave vector k i and the normal to the chain, and θ s is the scattering angle between the vectors a 1 and k s . Next we analyze Eq. (1) to derive the conditions under which the n -th order diffraction does not exist, that is, when the cosine on the left-hand side does not fit into the (−1, 1) interval. For normal light incidence, k i ⋅ a 1 = 0, and the equation k s ⋅ a 1 = 2 πn describes a family of cone pairs with the axes of symmetry coinciding with a 1 and the apex angle of scattering given by 2θ s = 2 arccos( nλ / a 1 ). In the current study we mainly focus on the most important case of normal incidence. The zero-order cone ( n = 0) degenerates into the plane normal to a 1 , since the angle between k s and a 1 becomes θ s = π/2. A pair of diffraction cones of the n -th order appears when a 1 > nλ , but is prohibited when the argument falls outside the arccosine domain, | nλ / a 1 | > 1. We note that the prohibited orders correspond to evanescent waves that do not affect the far-field pattern. A chain of scatterers with a period of a 1 = 1 μm illuminated by an Nd laser with λ = 0.53 μm scatters light into the zero-order plane and the first-order ( n = ±1) cones with θ s = 58° [ Fig. 1(e) ]. In our experiments, a photonic structure with square lattice symmetry [ Fig.
1(a,b) ] can be considered as a structure composed of two sets of orthogonal chains along the x - and y -axes. In this case one can expect in the diffraction patterns two orthogonal planes and two families of orthogonal pairs of cones, as shown in Fig. 1(e) . The experimentally measured diffraction patterns from such structures with a sufficiently large number of scatterers agree well with this simple model [ Fig. 1(f) ]. We also discuss the case of oblique incidence, when the zero-order diffraction condition takes the form sin(θ i ) + λ/ a 1 > 1 for a specified incident angle. For the case of an arbitrary incident angle, we have the condition λ/ a 1 > 2. Here we note that the current analysis is limited to the case of 2D structures that do not support guided modes. Otherwise, numerical methods must be exploited and the criterion for zero-order diffraction should be corrected by the effective refractive index. Now, we analyze the fine structure of the diffraction planes and cones. The function from Eq. (4) in Methods has N − 1 zeros between any two adjacent main maxima and therefore N − 2 additional (called ‘subsidiary’ in what follows) maxima. Therefore, we can determine the number of scatterers N directly from the experimental diffraction patterns. For conventional 2D photonic films with a large number of scatterers ( ), the intensity of the subsidiary maxima is much weaker than, or even negligible compared with, the intensity of the main maxima [ Fig. 2(a) ]. As a result, the fine structure of the cones is not detected when the diffraction is measured from a sample with a large number of scatterers 2 , 3 , 4 . Additionally, the subsidiary maxima are located very close to each other, so that any small structural disorder or divergence of the light beam leads to degradation of the fine structure, and the subsidiary maxima cannot be resolved in the averaged profile of the diffraction patterns [ Fig. 1(f,h) ].
The entire picture changes dramatically at smaller N , when the intensity of the subsidiary maxima becomes comparable with the intensity of the main maxima ( N = 10, Fig. 2(a) ), and we obtain a unique opportunity to observe the diffraction images by eye, registering the fine structure directly in experiment. Figure 2 ( a ) Square of the structure factor modulus | S ( q )| 2 of a linear 1D chain of scatterers with the number N = 10 and N = 50. ( b ) Calculated 3D image of the diffraction pattern of a 2D photonic structure composed of 10 × 10 elements (for a 1 = a 2 = 1 μm, λ = 0.53 μm). ( c ) Calculated and ( d,e ) experimentally measured diffraction patterns from direct ( d ) and inverted ( e ) 2D square photonic structures with the number of scatterers N 1 × N 2 = 5 × 5 observed on a flat screen positioned behind the sample. Insets show the SEM images of the corresponding structures. a 1 = a 2 = 1 μm, λ = 0.53 μm. The results of our experimental studies of light diffraction vs. the size and shape of direct and inverted photonic structures under normal laser incidence are presented in Figs 2 , 3 , 4 , together with the calculated patterns and SEM images of the samples. In the experiment, a beam expander is used with an objective to ensure that the probing laser spot (λ = 0.53 μm) is always larger than the overall area of the finite-size photonic structure. Figure 3 Numerically calculated and corresponding experimentally measured diffraction patterns from 2D square structures with lattice constants a 1 = a 2 = 1 μm ( a–h ) and hexagonal structures with a = 1.5 μm ( i–p ). 2D square structures with the number of holes N 1 × N 2 = 10 × 10 ( a,f ), 20 × 20 ( b–d,g ), 50 × 50 ( e,h ). 2D hexagonal structures with the number of forming triangles along the side N = 5 ( i,n ), 10 ( j–l,o ), 20 ( m,p ). The patterns are observed on a flat screen positioned behind the sample at a normal incident beam, λ = 0.53 μm. Insets show SEM images of the samples.
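The counting rule discussed above, N − 2 subsidiary maxima between adjacent main maxima, is easy to verify numerically. The sketch below (our own helper names, not from the paper) evaluates |S(q)| 2 for a 1D chain via the closed form sin 2 (Nx/2)/sin 2 (x/2), which is what the Born-approximation lattice sum reduces to for a single chain, and counts the local maxima strictly between the main maxima at x = q·a = 0 and 2π:

```python
import math

def S2_1d(x, N):
    """|S|^2 of an N-element 1D chain; x = q*a is the phase advance per period."""
    if abs(math.sin(x / 2)) < 1e-12:  # main maximum: the limit equals N^2
        return float(N) ** 2
    return (math.sin(N * x / 2) / math.sin(x / 2)) ** 2

N = 10
# sample the interval strictly between the main maxima at x = 0 and x = 2*pi
xs = [0.05 + 1e-4 * k for k in range(int((2 * math.pi - 0.1) / 1e-4) + 1)]
f = [S2_1d(x, N) for x in xs]
subsidiary = sum(1 for i in range(1, len(f) - 1)
                 if f[i] > f[i - 1] and f[i] > f[i + 1])
print(subsidiary)  # N - 2 = 8 subsidiary maxima, as for the N = 10 curve in Fig. 2(a)
```

For N = 10 the count is 8, matching the N − 2 rule that lets the number of scatterers be read off an experimental pattern.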
Figure 4 ( a ) Calculated and ( b ) experimentally measured diffraction patterns from a rectangular 2D photonic structure of N 1 × N 2 = 10 × 20 holes observed on a flat screen positioned behind the sample. Insets show the SEM image of the structure. a 1 = a 2 = 1 μm, λ = 0.53 μm. First, we should explicitly identify the type of scatterer in the inverted photonic structures: does a hole in the structure or some dielectric element of the structure scatter light? Figure 2(d,e) shows the experimental diffraction patterns from a direct 2D structure composed of N 1 × N 2 = 5 × 5 voxels and an inverted structure which can be considered either as a structure of N 1 × N 2 = 5 × 5 holes or as a fishnet-type structure of N 1 × N 2 = 6 × 6 stripes. It is clearly seen that the calculated diffraction pattern for N 1 × N 2 = 5 × 5 scatterers [ Fig. 2(c) ] and both experimental patterns [ Fig. 2(d,e) ] are identical, with 3 subsidiary diffraction reflexes between 2 main maxima, 5 reflexes in total. This means that the holes act as the scatterers in 2D inverted dielectric photonic structures. For all structures with square symmetry, the diffraction patterns on a flat screen placed behind the sample demonstrate C 4 v symmetry at normal incidence, and for all hexagonal structures the diffraction patterns have C 6 v symmetry at normal incidence ( Fig. 3 ). We note a surprisingly strong intensity of diffraction from a rather small number of submicron dielectric scatterers. For square structures ( a = 1 μm, λ = 0.53 μm, λ < a < 2λ), we can distinguish two types of diffraction features: (i) two orthogonal (vertical and horizontal) strips that correspond to the zero-order scattering ( n 1 = 0 or n 2 = 0); (ii) four arcs that correspond to the first-order scattering ( n 1 = ±1 or n 2 = ±1), formed by the intersections of four cones with the flat screen, as shown schematically in Fig. 1(e) .
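The strip-and-arc count follows directly from the arccosine condition discussed above, cos θ s = nλ/a (a short sketch, not from the paper; helper names are ours):

```python
import math

lam = 0.53  # laser wavelength, μm

def cone_half_angle(a, n):
    """Half-apex angle (deg) of the n-th order cone at normal incidence,
    cos(theta_s) = n*lam/a; returns None when the order is evanescent."""
    arg = n * lam / a
    return math.degrees(math.acos(arg)) if abs(arg) <= 1 else None

for a in (1.0, 1.5):  # square and hexagonal lattice constants, μm
    orders = [n for n in range(1, 4) if cone_half_angle(a, n) is not None]
    angles = [f"{cone_half_angle(a, n):.0f} deg" for n in orders]
    print(f"a = {a} um: allowed orders +/-{orders}, theta_s = {angles}")
```

For a = 1 μm only n = ±1 survives (θ s = 58°, giving the four first-order arcs), while for a = 1.5 μm both n = ±1 and n = ±2 are allowed, consistent with the first- and second-order arcs of the hexagonal samples.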
For hexagonal structures ( a = 1.5 μm, λ = 0.53 μm, 2λ < a < 3λ), we can distinguish three types of diffraction features: (i) three strips (directed at an angle of 60 degrees relative to each other) that correspond to the zero-order scattering; (ii) six arcs that correspond to the first-order scattering; and (iii) six further arcs that correspond to the second-order scattering. All arcs are formed by the intersections of 6, 12, etc. cones with the flat screen, as shown schematically in Methods. For photonic structures with a large total number of scatterers ( N 1 ⋅ N 2 ~ 10 4 ), the experimentally observed strips and arcs look like solid curves, with a poorly resolved fine structure only near the main maxima [ Fig. 1(f) ]. However, with decreasing number of scatterers N 2 , the whole diffraction curves split into isolated reflexes, in accordance with the theoretical predictions. As characteristic examples, we present the experimental diffraction patterns obtained from direct and inverted photonic structures with different numbers of scatterers ( Figs 2 , 3 , 4 ). We observe that for the square photonic structure with 10 × 10 scatterers the arc with fine structure between the (0, 1) and (1, 1) main maxima (the notation of the main maxima is shown in Fig. 1(f) ) consists of 10 diffraction reflexes (including the two main maxima); for the structure with 20 × 20 scatterers, the arc consists of 20 reflexes [ Fig. 3(c) – calculations, Fig. 3(d) – experiment], and so on. For hexagonal structures, the experimentally observed fine structure in the arcs between main maxima consists of 2 N diffraction reflexes (including the two main maxima), in agreement with Eq. (9) . Note that for hexagons N defines the number of triangular holes along the side, which is half the maximal number of triangular holes 2 N between two opposite corners of the hexagon, as shown in Methods.
For a rectangular structure with 10 × 20 elements, the arc between (0, 1) and (1, 1) consists of 10 reflexes, while the arc between (1, 0) and (1, 1) has 20 reflexes ( Fig. 4 ). Note that, according to Eq. (3) , the apex angle of scattering 2θ s depends only on the lattice parameter a i in a particular direction (for given λ and order of scattering n ). Therefore, for the square lattice with a 1 = a 2 , four cones produce diffraction patterns with C 4 v symmetry. However, the numbers of scatterers N 1 and N 2 define the fine structure of the cones and planes, which reduces the symmetry of the diffraction patterns to orthogonal C 2 v . We can therefore conclude that the general diffraction rules for both square and hexagonal 2D photonic structures are the same, and the number of diffraction reflexes is defined by the maximal number of scatterers in the corresponding direction. A specific case is observed experimentally for the zero-order diffraction reflex (0, 0), which coincides with the non-diffracted part of the transmitted laser beam. In this direction, the two beams can interfere and, as a result, produce additional diffraction reflexes, so that the total numbers of experimentally observed reflexes between (0, 0) and the neighboring main maxima (0, 1), (0, −1), (−1, 0), (1, 0) are N 1 + 1 and N 2 + 1, as can be seen from Fig. 4(b) for the experimentally observed fine structure in the (−1, 0) − (0, 0) diffraction stripe. This effect is reproduced numerically for the 3D diffraction patterns by using the CST Microwave Studio software [ Fig. 2(b) ], but it is missed in the framework of the Born approximation, when only the effects from a sum of single scatterings are evaluated [ Fig. 4(a) ]. Variation of diffraction patterns with the sample rotation Here, we analyze the angular dependence of the diffraction patterns for samples with square and hexagonal symmetry.
Figure 5 shows experimental diffraction patterns as the samples are rotated around the vertical axis from normal incidence θ i = 0 to an angle of θ i = 80°, at which the sample is nearly parallel to the incident laser beam. In order to explain the observed effects of appearance, displacement and disappearance of different elements of the diffraction patterns with changing rotation angle θ i , we present in Fig. 6 the calculated scattering angles θ s as a function of the incident angle θ i and the order of diffraction n . The calculations were performed for horizontal chains of scatterers rotated about the vertical axis. Figure 5 Experimentally recorded transformation of the diffraction patterns when varying the sample rotation angle about the vertical axis a 2 from normal incidence θ i = 0 to θ i = 80°. Twelve upper panels: square sample with N = 100, a 1 = a = 1 μm; twelve lower panels: hexagonal sample with N = 50, a 1 = a = 1.5 μm. The patterns are observed on a flat screen positioned behind the sample. λ = 0.53 μm. Figure 6 ( a ) Calculated dependencies of the scattering angles θ s as a function of the incident angle θ i for different orders of diffraction n . ( b ) Schematic of the Laue diffraction from a horizontally oriented chain of scatterers at the incident angle θ i = 30°. The sketch shows the vertical profiles of five cones for five orders of diffraction, n = −1, 0, 1, 2, 3. The experimental data show that the transformations of the diffraction patterns from the horizontal chains are identical for the square and hexagonal samples. At normal incidence θ i = 0 the diffraction patterns are symmetrical about the vertical axis ( Fig. 5 ). When the samples start to rotate, the diffraction patterns demonstrate several significant changes. (i) With increasing θ i , the curvature of the left arcs ( n 1 = −1, −2) increases, indicating a decrease of the apex angle of the scattering cones 2θ s , as also seen from Fig. 6 .
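The angles at which the cones degenerate can be sketched from the tilted-chain relation cos θ s = sin θ i − nλ/a (an assumption on our part regarding the sign convention; a = 1.5 μm is taken as for the hexagonal sample of Figs 5 and 6):

```python
import math

lam, a = 0.53, 1.5  # μm; wavelength and chain period of the hexagonal sample

def asin_deg(x):
    return math.degrees(math.asin(x))

# With cos(theta_s) = sin(theta_i) - n*lam/a:
# negative orders collapse (theta_s -> 0) when sin(theta_i) = 1 + n*lam/a,
# positive orders open into planes (theta_s -> 90 deg) when sin(theta_i) = n*lam/a
print(f"n = -2 collapses at theta_i ~ {asin_deg(1 - 2 * lam / a):.0f} deg")
print(f"n = -1 collapses at theta_i ~ {asin_deg(1 - lam / a):.0f} deg")
print(f"n = +1 opens into a plane at theta_i ~ {asin_deg(lam / a):.0f} deg")
print(f"n = +2 opens into a plane at theta_i ~ {asin_deg(2 * lam / a):.0f} deg")
```

This reproduces θ i ≈ 17° and ≈40° for the collapse of the n = −2 and n = −1 cones and ≈45° for the n = +2 plane; for n = +1 the sketch gives ≈21°, close to the ≈20° quoted in the text.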
The calculations show that the cone n 1 = −2 collapses (θ s = 0) at θ i ≈ 17°, while the cone n 1 = −1 collapses at θ i ≈ 40°, in general agreement with the experimental data. (ii) The vertical straight line of the zero-order diffraction pattern ( n 1 = 0) becomes an arc. This means that the plane of scattered light evolves into a cone with the apex angle 2θ s . The experimental data clearly show that with the sample rotation a new circle from the zero-order diffraction appears on the flat screen positioned behind the sample. In accordance with the calculations, the apex angle 2θ s decreases and the cone becomes a horizontal straight line (θ s = 0) when the laser beam and the chain of scatterers become parallel (θ i = 90°). (iii) A nontrivial angular dependence is demonstrated by the right arcs ( n 1 = +1, +2). Figure 5 shows that with increasing θ i the curvature of the right arcs decreases, indicating an increase of the apex angle of the scattering cones 2θ s . The calculations show that the cone n 1 = +1 evolves into a plane (2θ s = 180°) at θ i ≈ 20°, while the cone n 1 = +2 evolves into a plane at θ i ≈ 45° ( Fig. 6 ). With further increase in θ i , both planes evolve back into cones, but with inverted orientation in space, as shown in Fig. 6(b) for θ i = 30°. Indeed, the experimental diffraction patterns for both square and hexagonal structures show that new circles from the first- and second-order diffraction ( n 1 = +1, +2) appear on the flat screen positioned behind the sample [ Fig. 6(a) ]. Transition from photonic crystals to metasurface It is known that metasurface properties are related to the existence of an effective-medium behavior. At the same time, one cannot proceed with a homogenization procedure when Bragg diffraction exists. In this section we analyze the case when the Bragg diffraction associated with photonic-crystal behavior is suppressed and only the zero-order process forms the diffraction pattern.
The possibility of experimental observation of a certain order of diffraction n is determined by the ratio of the lattice constant a to the wavelength λ. For a variable lattice parameter a 1 and the green laser line λ = 0.53 μm, the diffraction condition a 1 > | n 1 λ| allows three pairs of cones ( n 1 = ±1, ±2, ±3) for a 1 > 1.59 μm, two pairs of cones ( n 1 = ±1, ±2) for 1.59 > a 1 > 1.06 μm, and only the first-order diffraction ( n 1 = ±1) with one pair of cones for 1.06 > a 1 > 0.53 μm. Figure 7 demonstrates a train of collapses of diffraction cones experimentally observed from a set of 2D square photonic structures with constant lattice parameter a 2 = 1 μm and variable lattice parameter a 1 . The experimental patterns for the samples with a 1 = 2 and 1.8 μm contain three pairs of cones from the chains oriented along the horizontal axis a 1 ; two pairs of cones are observed in the cases of a 1 = 1.4 and 1.2 μm, and only one pair of cones is observed for a 1 = 1, 0.8 and 0.7 μm ( Fig. 7 ). For the sample with a 1 = 0.5 μm < λ all these cones have collapsed and only the vertical zero-order diffraction plane ( n 1 = 0) is observed. Figure 7 Transition from photonic crystals to metasurfaces. Experimental diffraction patterns obtained from 2D structures with varied lattice parameter a 1 and constant parameter a 2 = 1 μm. Patterns are observed on a flat screen positioned behind the sample. λ = 0.53 μm. In this experiment, we observe a transition between two regimes of light diffraction, i.e. the regime of Laue diffraction characteristic of photonic crystals and the regime where only the zero-order ( n = 0) diffraction can be observed under the condition a < λ, which is a fingerprint of metasurfaces 22 , 23 , 25 , 26 , 28 , 29 , 35 , 36 . As for the diffraction from the chains oriented along the vertical axis a 2 , the patterns remain unchanged for all samples: one pair of first-order cones ( n 2 = ±1) and the horizontal zero-order plane ( n 2 = 0) are observed.
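The same train of collapses can be tabulated from the condition a 1 > | n 1 |λ alone (a sketch, with the sample values listed above; helper names are ours):

```python
import math

lam = 0.53  # μm
samples = [2.0, 1.8, 1.4, 1.2, 1.0, 0.8, 0.7, 0.5]  # a1 values, μm (a2 = 1 μm fixed)

for a1 in samples:
    pairs = math.floor(a1 / lam)  # number of allowed cone pairs: a1 > |n1|*lam
    regime = "photonic crystal" if pairs else "metasurface (zero order only)"
    print(f"a1 = {a1} um: {pairs} cone pair(s) -> {regime}")
```

The counts come out 3, 3, 2, 2, 1, 1, 1 and 0, matching the experimentally observed sequence, with the a 1 = 0.5 μm < λ sample showing only the zero-order plane.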
As clearly seen from Figs 2 , 3 , 4 , 5 , 6 , 7 , the diffraction processes along the a 1 and a 2 directions are completely independent for both square and rectangular samples, which is quite natural for low-contrast photonic structures 7 . Note that in these structures one can identify other families of chains of scatterers, including, with the highest priority, the diagonal chains a 1 ± a 2 . However, we cannot observe, either experimentally or in simulations, any traces of the characteristic diffraction patterns along the a 1 ± a 2 directions, even for the photonic structures with a high number of scatterers [ Fig. 1(f) ]. This is despite the fact that the zero-order diffraction planes ( n = 0) oriented at the diagonal angles of ±45° to the a 1,2 directions should be observed at any ratio between λ and a 1,2 . This effect differs fundamentally from 3D light diffraction observed, for example, in synthetic opals, where different { hkl } crystal planes determine the 3D Bragg diffraction patterns 37 . Discussion Our experimental and theoretical studies have shown that 2D photonic structures reveal many remarkable optical effects. By suitably choosing the lattice parameters and laser wavelength, we have visualized the diffraction features for both direct and inverted 2D structures on a flat screen placed behind the sample. We have observed experimentally a fine structure of the diffraction from finite-size 2D dielectric structures that provides not only information about the structure symmetry but also allows characterizing the shape and determining the exact number of scatterers. When N 1 or N 2 increases, the isolated reflexes start to overlap and finally merge into continuous diffraction patterns, similar to the merging of isolated energy levels into continuous bands in crystalline structures.
The symmetry of the continuous diffraction patterns is defined by the symmetry of the 2D lattice, but the number of isolated reflexes is defined by the maximal number of scatterers ( N 1 , N 2 ) in the particular direction. Therefore, the exact symmetry of the fine structure in the diffraction patterns is defined by the shape of the 2D sample. Taking advantage of the independence of the Laue diffraction in different directions, we present an elegant way to demonstrate the transition in the light-scattering regimes between a 2D photonic structure and a metasurface. Our theory is in very good agreement with experiment. It was found both theoretically and experimentally that the diffraction patterns in different high-symmetry directions are independent. As a result, a set of anisotropic samples with orthogonal C 2 v lattice symmetry, a variable lattice parameter along a 1 and a fixed lattice parameter along a 2 , demonstrates unvarying diffraction patterns in one direction and a transition from Laue diffraction, typical for photonic crystals, to a non-diffraction regime, characteristic of metamaterials, in the other direction. The 3D diffraction pattern obtained for the photonic structure with a small number of scatterers can be considered as the radiation pattern of an optical antenna with spatially resolved lobes [ Fig. 2(b) ]. The photonic structure converts a laser beam into several sets of highly directive lobes and moulds the wavefront in exact correspondence with the number and arrangement of the dielectric scatterers. This property holds promise for applications of 2D finite-size photonic structures as superdirective all-dielectric metaantennas. Each antenna lobe corresponds to a maximum of the structure factor | S ( q )| 2 . We note that a similar scattering problem is known as the N -element RF-antenna array 38 .
Methods Diffraction from two-dimensional periodic structures Square and rectangular structures For the 2D photonic structures with the square and orthogonal lattice symmetry, the position of each scatterer (voxel or hole) is determined by the 2D vector r i = a 1 n 1 + a 2 n 2 , where a 1 and a 2 stand for the mutually orthogonal basis vectors ( a 1 ⋅ a 2 = 0) of the square ( a 1 = a 2 ) or rectangular ( a 1 ≠ a 2 ) lattice, 0 ≤ n j ≤ N j , and the N j are integers. For the analysis of diffraction patterns from low-contrast periodic structures (which include the photonic structures fabricated by the DLW technique), it is usually sufficient to use the Born approximation, in which the interaction between the scatterers is neglected 39 . In the Born approximation, the diffraction intensity is determined by the product of the squares of the structure factor S ( q ), which is associated with the lattice periodicity, the scattering form factor F ( q ), which takes into account the contribution from a unit cell, and a polarization factor. A comparison of the theoretical and experimental data shows that in our case of quasi-point scatterers it is sufficient to consider only the structure factor S ( q ). Under these conditions, the diffraction angles and peak intensities become simple functions of the crystallographic 2D structure 40 . Here q = k i − k s is the scattering vector, whereas k i and k s are the wave vectors of the incident and scattered waves. Thus, the diffraction patterns depend either on the size of the sample, if the whole sample is illuminated, or on the number of illuminated scatterers N j along the directions of the vectors a 1 and a 2 , respectively. Equation (4) is valid for a structure with a parallelogram shape, with the square and rectangle as special cases of the parallelogram 40 .
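A short sketch (our own helper names, not from the paper) makes the key property of the Born-approximation lattice sum explicit: |S( q )| 2 over a rectangular array factorizes into independent chain factors along a 1 and a 2 , which is why the diffraction conditions in the two directions decouple:

```python
import cmath

def S_2d(qx, qy, N1, N2, a1=1.0, a2=1.0):
    """Direct lattice sum S(q) over an N1 x N2 rectangular array."""
    return sum(cmath.exp(1j * (qx * a1 * n1 + qy * a2 * n2))
               for n1 in range(N1) for n2 in range(N2))

def S_1d(x, N):
    """Chain sum with phase advance x per period."""
    return sum(cmath.exp(1j * x * n) for n in range(N))

# |S(q)|^2 splits into a product of independent 1D factors
qx, qy, N1, N2 = 0.8, 2.1, 10, 20
lhs = abs(S_2d(qx, qy, N1, N2)) ** 2
rhs = abs(S_1d(qx, N1)) ** 2 * abs(S_1d(qy, N2)) ** 2
assert abs(lhs - rhs) < 1e-6 * rhs
```

Because each factor depends only on its own lattice parameter and scatterer count, the maxima along a 1 and a 2 can be analyzed separately, consistent with the observed independence of the two directions.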
Hexagonal structures To analyze the diffraction from the 2D photonic structures with the hexagonal symmetry C 6 v it is convenient to consider three basis vectors a 1 , a 2 and a 3 instead of two, because all three directions in the lattice are equivalent ( Fig. 8 ). To calculate the structure factor S ( q ), we subdivide the hexagon into three parallelograms formed by the three pairs of vectors ( a 1 , a 2 ), ( a 2 , a 3 ) and ( a 3 , a 1 ), as shown in Fig. 8 . The structure factor S ij ( q ) for each parallelogram is calculated from Eq. (4) . Figure 8 Schematic of the hexagon’s subdivision into three parallelograms. The length of the hexagon side equals Na . The maximal size of the hexagon (its diagonal) equals 2 Na . a j are the basis vectors of the hexagonal lattice. When dealing with the whole hexagon, one should combine the three Eqs (5 , 6 , 7), taking into account the origins of coordinates of the three parallelograms. It is straightforward to demonstrate that the square of the structure factor modulus can be rewritten as Eq. (9) , where φ is the angle between k s and k i . The x -component of k s is assumed to be zero. In our numerical calculations a hexagon is specified by the lattice constant a and by the number of forming triangles N ( Fig. 8 ). Additional Information How to cite this article : Rybin, M. V. et al. Transition from two-dimensional photonic crystals to dielectric metasurfaces in the optical diffraction with a fine structure. Sci. Rep. 6 , 30773; doi: 10.1038/srep30773 (2016).
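As a numerical cross-check of the hexagonal geometry (a sketch with our own names: it sums S( q ) directly over a hexagonal patch of side N instead of using the three-parallelogram decomposition of Eqs (5 , 6 , 7)), the C 6 symmetry of |S( q )| 2 follows from the rotational invariance of the patch:

```python
import cmath, math

N = 5  # triangles along the hexagon side, as in Fig. 8
a1 = (1.0, 0.0)
a2 = (0.5, math.sqrt(3) / 2)  # hexagonal basis vectors at 60 degrees

# hexagonal patch of side N centered at the origin: |m|, |n|, |m + n| <= N
sites = [(m * a1[0] + n * a2[0], m * a1[1] + n * a2[1])
         for m in range(-N, N + 1) for n in range(-N, N + 1)
         if abs(m + n) <= N]

def S(qx, qy):
    return sum(cmath.exp(1j * (qx * x + qy * y)) for x, y in sites)

# the patch maps onto itself under a 60-degree rotation,
# so |S(q)|^2 inherits the C6 symmetry of the lattice
qx, qy = 2.3, 0.7
c, s = math.cos(math.pi / 3), math.sin(math.pi / 3)
qrx, qry = c * qx - s * qy, s * qx + c * qy
assert abs(abs(S(qx, qy)) ** 2 - abs(S(qrx, qry)) ** 2) < 1e-6
```

The patch contains 3N 2 + 3N + 1 sites (91 for N = 5), and the rotation test holds for any q , reflecting the C 6 v -symmetric patterns observed at normal incidence.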
Scientists from Russia and Australia have proposed a simple new way of counting microscopic particles in optical materials by means of a laser. A light beam passing through such a material splits and forms a characteristic pattern consisting of numerous bright spots on a projection screen. The researchers found that the number of these spots corresponds exactly to the number of scattering microscopic particles in the optical material. Therefore, the structure and shape of any optical material can be determined without resorting to the use of expensive electron or atomic-force microscopy. According to the researchers, the new method will help design optical devices much faster. The work was published in Scientific Reports. The production of optical circuits requires devices that can amplify optical signals, bring them into focus, rotate and change their type of motion. Ordinary lenses cannot cope with these tasks at nanoscale, so scientists are working with artificial optical materials—photonic crystals and metamaterials, which can control the propagation of light in extraordinary ways. However, fabricating optical materials with desired properties is a laborious process that needs improvement. The scientists from ITMO University, Ioffe Institute, and Australian National University have suggested analyzing the structure of photonic crystals using optical diffraction—that is, by looking at the light pattern generated when the sample is exposed to a laser beam. The study has shown that the number of spots in the pattern is equal to the number of scattering microscopic particles in the sample structure. Previously, such small particles could only be seen and counted with powerful and expensive electron or atomic-force microscopes. "The light senses heterogeneity," says Mikhail Rybin, first author of the paper, senior researcher at the Department of Nanophotonics and Metamaterials at ITMO University. 
"Depending on the shape and relative position of the scatterers, the light wave propagates differently behind the sample. In other words, the structure of the sample affects the diffraction pattern, which will be projected on the screen. We found out that it is possible to determine the precise number of scatterers in the material. This helps us not only understand the type of the sample lattice (square, triangular), but also establish its structure (20 by 20 particles, or 30 by 15) just by counting light spots on the screen." Experimentally obtained and simulated diffraction patterns for a sample. Credit: ITMO University The new method is a much more affordable alternative to expensive electron or atomic-force microscopy and, in this case, does not spoil the sample. "Even a schoolboy can buy a laser pointer, adapt a small lens to focus the light better, fix the sample and shine a laser beam on it," notes Mikhail Rybin. "In addition, our method makes it possible to study optical materials without changing their structure, in contrast to electron microscopy, where the sample surface has to be covered by a conductive metal layer, which impairs the optical properties of the sample." The new method has already enabled the scientists to investigate the transition between two main classes of optical materials: photonic crystals and metasurfaces. In the study, they have determined the lattice parameters which define whether the light perceives the material as a two-dimensional photonic crystal or as a metasurface. In both classes, the scattering particles (rings, balls, cylinders of 200 to 300 nanometers) are arranged in a flat lattice. However, with a two-dimensional photonic crystal, the light perceives the sample as a set of separate particles and generates a complex pattern on the screen behind the sample.
Metasurfaces appear homogeneous to this technique: the screen shows a single bright spot, indicating that the scattering particles are located close enough to each other that the light does not register separate particles and passes through the sample without splitting. In order for the light beam to pass through a metasurface, the distance between the particles has to be smaller than the wavelength of light. For some structures, it is necessary to produce a lattice in which the distance between particles is two to three times smaller than the light wavelength. Often, however, meta-properties can manifest themselves at larger distances between the particles. It is important to find the maximum allowable distance, since reducing the structure by even a single nanometer makes the technology more expensive. For light with a wavelength of 530 nanometers (green), a distance of 500 nanometers between the scattering particles is already sufficient. "A green light beam perceives a structure with a period of 500 nanometers as a homogeneous material. Therefore, it is not always necessary to fabricate a lattice with a period smaller than the wavelength, because producing larger structures is much easier from a technological standpoint. For one wavelength the material will act as a photonic crystal, and for another as a metasurface. That is why, when designing such structures, we can evaluate the maximum lattice period with a laser," concludes Mikhail Rybin.
10.1038/srep30773
Biology
Study shows that mothers prefer daughters and fathers prefer sons
Robert Lynch et al. Sexual conflict and the Trivers-Willard hypothesis: Females prefer daughters and males prefer sons, Scientific Reports (2018). DOI: 10.1038/s41598-018-33650-1 Journal information: Scientific Reports
http://dx.doi.org/10.1038/s41598-018-33650-1
https://phys.org/news/2018-11-mothers-daughters-fathers-sons.html
Abstract Because parental care is expected to depend on the fitness returns generated by each unit of investment, it should be sensitive to both offspring condition and parental ability to invest. The Trivers-Willard Hypothesis (TWH) predicts that parents who are in good condition will bias investment towards sons, while parents who are in poor condition will bias investment towards daughters, because high-quality sons are expected to out-reproduce high-quality daughters, while low-quality daughters are expected to out-reproduce low-quality sons. We report results from an online experiment testing the Trivers-Willard effect by measuring implicit and explicit psychological preferences and behaviorally implied preferences for sons or daughters, both as a function of their social and economic status and in the aftermath of a priming task designed to make participants feel wealthy or poor. We find only limited support for predictions derived from the TWH and instead find that women have strong preferences for girls and men have preferences for boys. Introduction Our current understanding of how resources are allocated to maximize the reproductive success of sons and daughters is based on Fisher’s principle of equal investment in the sexes 1 . Carl Duesing was the first to demonstrate that many sexually reproducing species produce an equal number of males and females because the total reproductive value of males and females is necessarily equal 2 . However, because sex allocation depends on the fitness returns to parental effort, selection will favor equal investment in sons and daughters only when the cost of producing each sex is identical. This argument is one of the best understood and most celebrated theories in evolutionary biology 3 because it provides a framework for understanding the observed variance in sex ratios across species.
In humans, although parental expenditure in each sex is expected to be equal, excess male mortality throughout the period of parental investment decreases the average costs of producing sons 1 . This is why most human populations are male-biased at birth 4 , 5 but nearly equal among individuals who are sexually mature. Although population-wide sex ratios are expected to be highly constrained due to frequency-dependent selection, evolution can favor deviations from Fisherian sex ratios if producing one sex has a greater payoff than the other in terms of the production of grand-offspring. Trivers and Willard, for example, hypothesized that natural selection should favor a parent’s ability to adjust offspring sex ratios according to their ability to invest 6 . The Trivers-Willard Hypothesis (TWH) relies on 3 critical assumptions: (1) that parental condition is correlated with offspring condition, (2) that offspring condition is correlated with condition in adulthood and (3) that condition differentially affects the mating success of each sex (e.g., males in good condition out-reproducing females in good condition and females in poor condition out-reproducing males in poor condition). Because these conditions are seen to hold for many species of mammals, Trivers and Willard argued that mothers in good condition should invest more in producing males while mothers who are in poor condition should invest more in producing females. In the years since it was proposed, the TWH has been subject to hundreds of tests, with mixed results and ongoing debates about such issues as the appropriateness of the species or population being studied, the timing of when such biases are expected to occur, and the validity of various measures of parental investment, parental condition, and offspring condition 7 . In its original formulation, the TWH focused exclusively on maternal condition and sex ratios at birth in mammals.
However, because most investment in human offspring occurs postnatally and comes from both parents, many of the most convincing studies of TW effects in human populations have also analyzed care by fathers and have included measures of sex-biased investment after birth 8 . For example, Cronk collected data on sex biases in parental investment among the Mukogodo of Kenya, who are at the bottom of a regional hierarchy whereby men often either do not marry or have delayed marriage, and, as a result, have lower mean reproductive success than the women. He found that, on average, Mukogodo parents wean daughters later, spend more time nursing and holding daughters, remain physically closer to daughters, and are more likely to take their daughters than their sons for medical care, all of which results in better growth performance for Mukogodo girls than boys, and a female bias in the sex ratio of children aged 0 through 4 years 9 , 10 , 11 . Meanwhile, a study of parents in the United States, using self-reports and diary data, found no evidence that parents’ socioeconomic status is associated with biased investment in either sex 12 . These contradictory results point to another potential pitfall that many studies purporting to test the TWH encounter by failing to sufficiently distinguish between physiological, psychological and behavioral outcomes. There is an abundance of evidence that there are often large discrepancies between stated offspring sex preferences and parents’ actual behavior toward offspring 3 , 4 , 5 , 6 , 7 , 8 , 9 , 10 , 11 , 12 , 13 , 14 , 15 and it is often the case that studies using behavioral measures are more likely to provide support 16 , 17 , 18 .
Therefore it is crucial that researchers distinguish between physiological outcomes (e.g., factors that hinder the implantation of embryos), preferences implied by behavior (e.g., time spent nursing) and those assessed psychologically (e.g., self-reports or implicit preferences) that depend on whether the offspring is male or female. Other studies have further complicated attempts to reach a scientific consensus by showing that offspring condition, regardless of sex, and overall family income, can affect investment decisions. Increasing a family’s resources, for example, has been shown to result in a shift from concerns with efficiency to concerns with equity 19 and some studies have shown that poor households reduce investment in children who are at high risk but then increase investment in those children once the families obtain more resources 20 , 21 . Much of the confusion and inconsistent results over TWH research has also resulted from a failure to distinguish between when it is optimal for parents, especially mothers, to bias offspring sex ratios vs. when it is optimal for them to bias investment by sex. Although many researchers have considered sex ratio biases and post-conception investment to be on the same continuum, seeing them as similar ways of optimizing the allocation of parental investment, this is not always the case. The distinction relies on the third assumption of the TWH — that condition differentially impacts the reproductive success of each sex 6 . Veller et al . 22 , for instance, have shown that it makes sense to produce male-biased sex ratios when the mother is in good condition because the absolute returns on producing sons with high fitness are higher than they are for producing daughters with high fitness. This is because the fitness value of males increases more in response to improved maternal condition. However, this is not necessarily true of post-conception investment biases because they depend on marginal returns on investment.
Here investment should be biased towards whichever offspring improves parental fitness more per unit invested and this is not necessarily linked to the overall fitness value of the offspring 12 . If mothers who are in poor condition receive higher fitness returns per unit invested than mothers who are in good condition they may be expected to bias investment towards sons. In other words, whenever male fitness increases faster with condition than does female fitness (i.e. the male fitness function has a steeper slope), the fitness returns on sons will always be greater and parents will have greater marginal gains (per unit of investment) by investing in males regardless of their own condition. Several studies provide support for making a distinction between when parents are expected to bias sex ratios vs. when they are expected to bias investment. A meta-analysis of mammalian sex ratios, for example, showed that studies analyzing sex ratios around conception showed nearly unanimous support for the hypothesis that mothers in good condition bias litters towards sons 23 . Meanwhile a broad review of the literature surveying Trivers-Willard effects on postnatal parental investment in humans yields somewhat less consistent results, with studies that operationalized key variables in more appropriate ways and those which were conducted on populations that better conformed to the assumptions of the hypothesis tending to show more support for it 7. To date, most evolutionary hypotheses on sex-biased parental investment have assumed that resource constraints affect both parents in exactly the same way, and that under certain conditions, mothers and fathers will converge on the same investment biases and preferences. This is despite considerable evidence showing that fathers prefer sons and mothers prefer daughters 24 . 
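The distinction drawn above between absolute and marginal fitness returns can be illustrated with a pair of hypothetical linear fitness functions (the intercepts and slopes are invented, not estimates from the paper): sons’ fitness rises more steeply with condition, while daughters do better at low condition.

```python
# Hypothetical fitness functions illustrating absolute vs. marginal returns.
# Sons' fitness rises steeply with condition; daughters' rises slowly but
# starts higher, so daughters do better when condition is poor.

def son_fitness(c):      return 0.2 + 1.5 * c   # steep slope: condition matters a lot
def daughter_fitness(c): return 0.8 + 0.5 * c   # shallow slope: condition matters less

# Sex-ratio decisions track ABSOLUTE fitness at the mother's condition:
# low-condition mothers gain more from daughters, high-condition from sons.
print(daughter_fitness(0.2) > son_fitness(0.2))   # True: bias toward daughters
print(son_fitness(0.9) > daughter_fitness(0.9))   # True: bias toward sons

# Post-conception investment tracks MARGINAL returns (the slope): here the
# son curve is steeper everywhere, so each extra unit invested in a son
# pays more regardless of maternal condition.
slope_son = son_fitness(1.0) - son_fitness(0.0)
slope_daughter = daughter_fitness(1.0) - daughter_fitness(0.0)
print(slope_son > slope_daughter)  # True at every condition level
```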
Which parent controls and distributes resources has also been shown to influence outcomes for boys and girls: in a small-scale horticulturalist society where food is not always abundant, maternal control of resources was positively associated with increased BMI of daughters (necessary for gestation and lactation) relative to sons 25 . In the United States, fathers are also more likely to be present in the home if their child is male 26 , and male offspring reduce the risk that fathers’ will initiate divorce by approximately 9% 27 . Another study found that American men work more and harder following the birth of sons but not of daughters 28 . Meanwhile, American mothers who head the household after a divorce pay more attention to their daughters than to their sons 26 . Some researchers have argued that these biases are adaptive because children are more likely to benefit from investment from their same-sex parent who can better help them by providing information about their future sex roles 24 , 25 . Therefore, these sex biases may be expected to work in both directions, such that parents are not only primed to transmit sex-specific information to their same-sex offspring, but that offspring are also predisposed to learn from their same-sex parents 29 . The sex of the parent has even been shown to affect the heights of same- and opposite-sex offspring, which may indicate biases in PI. A study conducted in Brazil, the United States and Ghana showed that mothers’ level of education was positively correlated with the height and health of their daughters, but not their sons, while father’s educational attainment was positively correlated with the height of their sons 29 . Evolution can select for parents who favor same-sex offspring when the evolutionary interests of males and females diverge. On a genetic level, whenever males and females have different optimal outcomes for traits that are expressed in both sexes, intralocus sexual conflict is expected 30 . 
Intralocus sexual conflict occurs when genes that benefit one sex are detrimental to the other 31 , which can affect the transmission of genetic fitness to same- and opposite-sex offspring. In other words, fathers with ‘good genes for males’ may produce sons with high fitness but will produce daughters with low fitness. At the same time, mothers with ‘good genes for females’ may produce daughters with high fitness but sons with low fitness. This disruption of the transmission of genetic quality to same and opposite sex offspring 30 violates one of the crucial assumptions of the TWH — that parental condition is positively correlated with offspring condition. The uneven transfer of fitness to same- and opposite-sex offspring might also be expected to affect selection on parental investment and even sex ratios. In a species of flour beetles, for example, low-fitness females produced more sons while high-fitness females produced more daughters 32 . Sexual conflict can therefore alter optimal investment strategies such that sex-biased PI may depend not only on the condition of the parent but also on their sex. The evolutionary importance of intralocus sexual conflict was not understood when Trivers and Willard wrote their paper in 1973, and understanding the interactions between the condition of mothers and fathers and the condition of sons and daughters may help to shed light on some of the inconsistent and contradictory findings of TWH research over the years. 
Frequency dependent constraints on local and population-wide sex ratios (Fisher’s principle), the disparate benefits of transmitting culturally learned traits to same- and opposite-sex offspring, the uneven benefits of sexually antagonistic genes transferred from mothers and fathers to sons and daughters, the influence of cultural and societal norms regarding which sex is favored and the varied impact of resources on the fitness of sons and daughters all conspire to complicate ‘optimal’ PI decisions for mothers and fathers. Detecting these effects, or even knowing what to expect, can be difficult when the expected outcome of one strategy (e.g. wealthy mothers should favor sons) conflicts with or masks another (e.g. mothers with good genes should favor daughters). The Current Study One of the most important questions that remains in the literature on the Trivers-Willard Hypothesis is the nature of the proximate mechanism that allows parents to bias their investment. Although there are several good experimental studies showing physiological triggers, including increasing the fat content in diet 33 , inducing diabetes 34 and decreasing the circulating levels of glucose 35 in mice, we are aware of only two studies that include experimental manipulations of potential proximate psychological mechanisms. Mathews 36 attempted to prime childless participants to feel “in poor condition” by having them think about their own mortality, and found no impact of this prime on the desired sex ratio of participants’ future children. However, it is unclear why thoughts about one’s own mortality would lead to a sense that one is in poor condition as a parent. A more promising approach was taken by Durante et al . 37 who used slides depicting either an economic upswing or an economic recession to prime participants on Amazon Mechanical Turk (MTurk). 
Participants who saw the slide depicting the effects of a recession reported preferences favoring investments in daughters including a stronger desire to give a hypothetical US Treasury bond to a daughter than to a son and a willingness to bequeath more assets to a daughter than a son in their will. Like Durante et al . 37 , we attempted to trigger a Trivers-Willard effect by priming MTurk participants with IP addresses in the United States to feel either poor or rich. We also collected survey and behavioral data on participants’ preferences and backgrounds, offered them the choice of donating to a charity that benefits either boys or girls, whom participants were asked to think of as their own, and had them take an Implicit Attitude Test (IAT) regarding their feelings about boys vs. girls. Previous research has suggested that parental investment decisions may be influenced by conditions faced by the parent 6 as well as those experienced in childhood 38 . We predicted that individuals who were primed to be poor would P1a) prefer to adopt daughters, P1b) donate more money to a charity that helps baby girls than to one that helps baby boys, P1c) show implicit preferences for girls and P1d) express explicit preferences for daughters. We also analyzed the effect of both childhood and adult socio-economic condition on offspring sex preferences. We predicted that individuals who had low social status either as children or as adults would P2a) prefer to adopt daughters, P2b) donate more money to a charity that helps baby girls than to one that helps baby boys, P2c) show implicit preferences for girls and P2d) express explicit preferences for daughters. Results Descriptive statistics Overall, 347 females (coded as 0) and 423 males (coded as 1) with a mean age of 36.3 years old, 316 (40%) of whom were either currently married or had been divorced and 502 (65%) of whom had already had at least one child completed the survey and were paid on Amazon Turk.
The survey was generated through Qualtrics software, which requires subjects to complete each section before they are able to continue, and all 770 subjects who completed the survey were used in these analyses. Participants reported their incomes as follows: (1) less than $20,000 (N = 140), (2) $20,000–$45,000 (N = 263), (3) $45,001–$70,000 (N = 185), (4) $70,001–$100,000 (N = 116) and (5) greater than $100,000 (N = 66). The median response to the question about income was the same as the mode, i.e., $20,000–$45,000. Most participants reported that they either were currently enrolled in school (N = 76), had graduated from high school (N = 86), had attended some college (N = 216), or had graduated from either a two-year (N = 70) or a four-year college (N = 311). Smaller numbers reported that they had less than a ninth grade education (N = 1), some high school (N = 2), a master’s degree (N = 62), a professional degree (N = 15), or a doctoral degree (N = 7). Most participants reported that their parents had graduated from high school (N = 189), a two-year college (N = 75), or a four-year college (N = 210). Smaller numbers reported that their parents had less than a ninth grade education (N = 12), some high school (N = 24), a master’s degree (N = 88), a professional degree (N = 14), or a doctoral degree (N = 19). The mean response to the question about perceived relative status was 4.96, which is slightly below the center of the ladder (5.5). Most rated their health as good (N = 239), very good (N = 319) or excellent (N = 120), with relatively few rating it as either poor (N = 11) or fair (N = 81). Most participants were single (N = 427), some were married (N = 276), a few were divorced (N = 53), and very few were either separated (N = 6) or widowed (N = 8). In answer to the question about adoption, more participants elected to adopt ‘the girl’ (N = 442) rather than ‘the boy’ (N = 328). Implicit Association Tests were completed by 307 female and 375 male participants. 
Because the IAT test required subjects to access a separate website and Qualtrics software was unable to verify whether or not they had successfully completed it in order to permit them to proceed with the survey, 88 participants did not complete this task. There also may have been some confusion about how to access the website. The mean score for all participants on the Implicit Attitude Test was −0.137 indicating, on average, an overall preference for girls. The effect of priming individuals to feel rich or poor on offspring sex biases (Predictions 1a–1d) Males who were primed to feel wealthy donated significantly more to charities supporting girls. Overall, however, the experimental prime had very little impact on any of these results and statistical significance did not survive post-hoc tests for multiple comparisons (e.g. Bonferroni correction) (see Table 1 ). Table 1 Parameter estimates for predictors in top-ranked models (lowest WAIC score) for males, females and full sample. Full size table The effect of the participant’s condition on offspring sex biases (Predictions 2a–2d) Among male participants, lower childhood poverty predicted a preference to adopt males (Fig. 1 ), higher perceived status predicted more donations to charities supporting boys (Fig. 2 ), and younger males donated significantly more to charities supporting girls (Table 1 ). The participants’ own education and the participants’ parents’ education had opposite effects on preferred sex ratios among males: higher education of the participant predicts a preference for a female-biased sex ratio and higher education of the participant’s parents predicts a preference for a male-biased sex ratio. Again, however, none of these results survive post-hoc tests for multiple comparisons. Figure 1 The preference for adopting a boy is lower for females than for males across all conditions. Males who experience more poverty in childhood are more likely to express a preference to adopt females.
The plot holds all the other variables (see Table 1 ) in the top models constant (i.e. holds them at their mean values) across each level of poverty in childhood (Shading around line is 95% CI). Full size image Figure 2 Both sexes, but especially females, donated more to charities supporting girls. Males who reported higher perceptions of their own status donated more to charities supporting boys. Full size image Model selection We used candidate sets of all the combinations of the predictor variables to model each of the four dependent variables described above (see Methods) in a Generalized Linear Model regression in R Studio 3.4.1. We fitted models with the package ‘lme4’ and used the ‘MuMIn’ package to fit all combinations of the predictor variables. Because the strongest correlations between any of the candidate predictor variables were between ‘Perceived relative status’ and ‘Income’ (r = 0.48) and between ‘Parent’s education’ and ‘Education’ (r = 0.29), we do not think that multicollinearity would adversely affect our models. Therefore we allowed all predictors to be entered into each model and ranked them by Akaike’s Information Criterion score, ultimately using all variables within 2 AICc units of the top-ranked model and averaging across them by their weight (see Supplementary Materials Tables S1 : S12 ). The final predictor variables used were considered to be ‘informative’ and were seen as being the most useful in striking a balance between model complexity and overfitting 39 . See Supplementary materials Tables S1 – S12 for all the top models, their AIC scores and rankings for males, females and the whole sample for each of the 4 dependent variables used. We evaluated model performance by calculating the area under the curve (AUC) of the receiver operating characteristic (ROC) for each of the top models 40 . The AUC evaluates a model’s performance by indicating how well the model predicts a participant’s response to the dependent variable.
An AUC value of 1.0 indicates perfect predictability, and a value of 0.5 indicates the model’s predictability is equal to random. We considered values with 95% Confidence Intervals (CI’s) that did not overlap with 0.5 to be reasonable models 41 . Models Adoption Overall, females strongly preferred to adopt girls (65.7% preferred to adopt a girl, S.E. = 2.5%) and males showed no preference for either boys or girls (51.6% preferred to adopt a girl. S.E. = 2.4%) (see Table 2 ). A preference for adopting boys among men was significantly predicted by lower childhood poverty (Table 1 and Fig. 1 ). Among women, lower adult poverty predicted adoption preferences for boys and education level predicted a preference for adopting girls. Neither result, however, was statistically significant at the p < 0.05 level. Table 2 Percentage of males and females who prefer to adopt boys, percent donated to a charity benefiting boys vs. girls, implicit association test score (positive = boy preference) and preferences for a male biased sex ratio. Full size table Donations Females and males both gave more money to charities supporting baby girls than to charities supporting baby boys, but females gave substantially more (mean = 61.1 cents, S.E. = 0.7) to girls than did males, who gave 53.8 cents (S.E. = 0.8) to girls (Table 2 ). The prime seemed to affect only males, such that males who were primed to feel rich were significantly more likely to donate to the girl’s charity. For males who were primed to feel poor, the mean donation to charities supporting girls was 52.2 cents (S.E. = 1.5); males in the control group donated 50.8 cents (S.E. = 1.4), and males primed to be rich donated 58.5 cents (S.E. = 1.4). No differences were seen between treatments on the donations to females (Poor: 59 cents, Control 60.5 cents, Rich: 61.1 cents). Males who had higher perceived status gave significantly more to charities favoring boys (Fig. 2 ). 
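The model-ranking and evaluation procedure described under “Model selection” above (which the authors carried out in R with ‘lme4’ and ‘MuMIn’) can be sketched in Python; the log-likelihoods, parameter counts, and predicted probabilities below are invented purely for illustration, not values from the study.

```python
import math

def aicc(log_lik, k, n):
    """Small-sample corrected AIC for a model with k parameters and n cases."""
    aic = -2.0 * log_lik + 2.0 * k
    return aic + (2.0 * k * (k + 1)) / (n - k - 1)

def akaike_weights(scores):
    """Relative model likelihoods normalized to sum to 1, used when
    weight-averaging estimates across a confidence set of models."""
    best = min(scores)
    rel = [math.exp(-0.5 * (s - best)) for s in scores]
    return [r / sum(rel) for r in rel]

def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney formulation: the
    probability that a randomly chosen positive case scores higher than
    a randomly chosen negative case (ties count half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > q else 0.5 if p == q else 0.0
               for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical candidate models: (log-likelihood, parameter count), n = 770.
models = [(-512.3, 4), (-511.9, 5), (-515.0, 3)]
scores = [aicc(ll, k, 770) for ll, k in models]
deltas = [s - min(scores) for s in scores]
confidence_set = [i for i, d in enumerate(deltas) if d <= 2.0]  # within 2 AICc units
weights = akaike_weights(scores)

# Hypothetical predicted probabilities for 'prefers to adopt a boy' (label 1).
preds = [0.9, 0.8, 0.35, 0.6, 0.2, 0.4]
truth = [1, 1, 0, 1, 0, 0]
print(confidence_set)        # models retained for averaging
print(auc(preds, truth))     # 1.0 here: the toy model separates cases perfectly
```

An AUC whose 95% confidence interval overlaps 0.5 would, as in the paper, indicate chance-level predictability.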
Lower childhood poverty was a suggestive but non-significant predictor of donations to boys among males. Among females, education and parents’ education were non-significant predictors of donations to charities supporting girls (Table 1 ). Implicit Association Test Overall, participants showed an implicit preference for girls (mean = −0.137, S.E. = 0.018), but females had a stronger implicit preference for girls (mean = −0.438, S.E. = 0.002) while males showed a slight preference for boys (mean = 0.11, S.E. = 0.022) (Fig. 3 and Table 1 ). The order of the IAT test also affected the results among females such that women who took the IAT test before they took the survey were more likely to show an implicit preference for boys (Table 1 ). None of the other independent variables appeared to have an impact on these results. Figure 3 Both sexes showed implicit preferences for same sex children but females showed a stronger preference than males. Full size image Preferred sex ratios Overall, individuals expressed no preference for having daughters or sons (mean preferred sex ratio = 0.51, S.E. = 0.008) but within sexes, females expressed a slight preference for daughters (mean preferred sex ratio = 0.48, S.E. = 0.01) and males expressed a slight preference for sons (mean preferred sex ratio = 0.53, S.E. = 0.01) (Fig. 4 ). For males, education of the participants and their parents affected preferences oppositely such that the education of a male’s parents was a barely significant predictor of a son preference while their own education was a barely significant predictor of a daughter preference. No notable effects were observed for female explicit offspring sex ratio preferences (Table 2 ). The dependent variables were all positively correlated (Table 3 ). Figure 4 Each sex showed a weak, but statistically significant explicit preference for same sex offspring. Full size image Table 3 Covariance amongst the dependent variables. 
Full size table Discussion There was no convincing evidence for any of our a priori predictions. The only results that provide any support for the TWH at all are that males who grew up in poverty and males with lower perceived SES prior to any priming condition were more likely to choose to adopt girls. However, neither of these results survive statistical tests for multiple comparisons (e.g. a Bonferroni correction compensating for the fact that more than one hypothesis was tested by redefining a ‘significant’ p-value as one that is less than 0.05/the number of hypotheses tested). Although these results provide only suggestive support for TWH, it is worth noting that adoption preference falls under the more general conditions under which TW effects are expected (i.e. conditions that depend on the fitness value of offspring), rather than the more limited conditions expected to trigger sex biased investment (i.e. conditions that depend on the marginal fitness returns per unit of parental investment) 22 . However, our experiment did uncover an unpredicted and interesting association between participants’ own sex and their preferences for girls and boys, with females exhibiting a strong preference for girls and males exhibiting a weaker preference for boys. Female participants showed a strong preference for adopting girls, donated far more to charities supporting girls rather than boys, scored much lower on the Implicit Association Test (i.e. implicit preference for girls), and preferred female-biased offspring sex ratios. Males, meanwhile, showed no significant preference for adopting daughters vs. sons, a modest preference for donating to charities supporting girls, a slight implicit preference for boys and a slight explicit preference for a male-biased offspring sex ratio (see Table 1 ). We discuss these results within the theoretical context of sex-biased PI. 
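The Bonferroni adjustment described above is mechanically simple; as a hedged sketch (the p-values below are invented, not the study’s actual results), it amounts to shrinking the significance threshold by the number of hypotheses tested:

```python
def bonferroni_significant(p_values, alpha=0.05):
    """Flag which p-values survive a Bonferroni correction: each is
    compared against alpha divided by the number of hypotheses tested."""
    threshold = alpha / len(p_values)
    return [p < threshold for p in p_values]

# Hypothetical p-values from eight tests: with a corrected threshold of
# 0.05 / 8 = 0.00625, a nominally 'significant' p = 0.03 no longer passes.
flags = bonferroni_significant([0.03, 0.21, 0.004, 0.47, 0.09, 0.31, 0.66, 0.12])
print(flags)  # only the p = 0.004 result survives
```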
Constraints on resources Why should females across all experimental conditions and low-status males prefer daughters? Focusing on constraints on resources rather than on the sex or the condition of the parent offers one way to understand these results and put them in a larger theoretical context. Economists studying sex biased investment in offspring often focus on maximizing the socioeconomic benefits to the household 19 , 42 and some have argued that constraints on resources produce an unequal optimal allocation of goods and services within the household 43 , 44 . Evolutionary theorists, meanwhile, have focused on fitness benefits 6 , 45 , 46 , 47 , 48 and have primarily tried to explain how parental condition can affect optimal investment in sons and daughters. The TWH, however, was originally about parental ability to invest in offspring rather than parental condition 6 , and parental condition was seen as a proxy of parental ability to invest. If males and females who share a common household have differential access to ‘shared’ resources, and if increasing this access induces a parent to bias investment towards sons, while decreasing this access induces a parent to bias investments towards daughters, then by focusing on sex differences in access to household resources we may gain insight into why mothers and fathers in the same household might differ in their investment in daughters and sons. Godoy et al . 25 , for example, argued that in some cultural and social contexts there are systematic sex differences in access to household resources between men and women and hypothesized that the sex facing more resource constraints will exhibit a stronger preference for girls while the sex facing fewer constraints will show a preference for boys.
In other words, when resources are pooled and one sex has more access to them than the other, we may expect that offspring sex preferences will be driven by both the sex of the parent (owing to differential access to ‘shared’ resources) and their condition (total shared resources). If this interpretation is correct, then our finding that females exhibit a preference for daughters may be the consequence of females having lower access to shared resources than males. Similarly, our finding that lower-status males also exhibit a preference for daughters may be the consequence of lower-status males facing higher constraints on their ability to invest. Sexually antagonistic genes Intralocus sexual conflict, which has now been confirmed in humans 49 , 50 , may also help to explain these results. If male condition is positively correlated with male genetic quality, and if some proportion of the genes that affect male condition are sexually antagonistic (i.e., have opposing fitness effects in males and females), then fathers with poor genes will produce low-quality sons and high-quality daughters. In this situation, the predictions made by sexual conflict theory and the TWH are the same — males with poor genes and males who are in poor condition will invest more in daughters. In contrast, when males either have good sexually antagonistic genes (i.e. good genes for sons) or when they are in good condition, sexual conflict theory and the Trivers-Willard hypothesis respectively predict that they will invest more in sons. For females, however, the predictions made by the two theories conflict. For females with poor genes, sexual conflict predicts that they would be better off investing in sons. If, however, these poor genes result in a mother being in poor condition, the TWH predicts they should invest in daughters. This situation is exactly reversed for females who are in good condition due to their having good genes.
Here sexual conflict theory predicts greater investment in daughters for females (good SA genes) while TWH predicts greater investment in sons (good condition). Whenever genetic quality and condition are positively correlated this can produce opposing selection pressures on females (see Fig. 5 for the theoretical predictions made by sexual conflict and the TWH). Figure 5 Theoretical predictions made by sexual conflict and TWH for the reproductive success of sons and daughters as a function of mothers’ and fathers’ sexually antagonistic genes ( a ) and ( b ) [adapted from 71 ] and the condition of both parents ( c ) [adapted from 9 ]. Full size image Although this study does not directly measure the genetic quality of participants, we do find limited confirmation for preferences emerging from sexual conflict. Overall, our finding of a preference for girls on all four dependent variables could be attributed to the low status of MTurk workers relative to the population as a whole. MTurk workers tend to be educated but have low incomes (median income between $20,000 and $30,000) 51 . Our participants reported incomes in that same range ($20,000–$45,000), which puts them in the bottom 35% of the United States population. If true, then these daughter preferences for both males and females are consistent with the TWH. However, if sexual conflict affects these preferences and if most of our participants have found themselves in poor condition (low SES), but many of them have good genes, then we might expect that the males who have good genes but who are in poor condition will face a conflict. These males will be pulled by TW effects to favor daughters, but by good genes to favor sons. This conflict may help to explain the more moderate preferences for daughters exhibited by males in our study. On the other hand, the poor socio-economic condition of the females in our sample will push them towards favoring daughters (TWH) while those with good genes will also favor daughters.
In this case all that we need to assume to explain these results is that all (or most) of our sample was in poor condition but only half of them had poor genes. If this assumption is true, then we would predict that daughter preferences will be stronger amongst females. This potential for conflict between the TWH and sexual conflict theory may also help to explain some of our more peculiar results. For example, if more educated women who come from more educated families have better genes but have found themselves in relatively poor condition, we might expect these results, i.e., that they should prefer both to adopt girls and to donate more to charities supporting girls (see Table 1 ). Cultural explanations Parental investment patterns have been changing rapidly in developed countries like the United States over the past few decades and some evidence indicates that, overall, parents now invest more in daughters than they do in sons 52 and prospective couples are 45% more likely to express an interest in adopting daughters over sons 53 . Another study showed that since 2008 there has been a sharp decrease in the likelihood of native-born Americans having another child after the birth of a daughter 54 , suggesting either an increase in preferences for daughters or a decrease in preferences for sons. Therefore, these results showing overall preferences for daughters may reflect the cultural impact of parental sensitivity to increasing economic prospects for females in Western, industrial societies. Hazan and Zoabi have suggested that, if parents are attempting to maximize returns on human capital (e.g., household income), then, as the returns on human capital increase, the relative advantage of females in education also increases, which in turn triggers more investment in daughters 55 .
Because in the United States girls outperform boys in school and are far more likely to attend college, the expected return on investment for daughters is rapidly increasing, which may account for the overall girl preference in our sample. In Iceland, which is widely considered one of the most gender neutral countries on Earth, girl preferences are strong 56 , which suggests that, as opportunities increase for girls and decrease for boys in the United States, offspring sex preferences may follow suit. There is also some evidence that, although overall parents tend to express preferences for their same sex offspring, fathers are increasingly likely to prefer daughters as gender roles have changed (e.g., girls are increasingly more likely to play sports) 57 . Caveats Our failure to find stronger support for the TWH may be due to a disconnect between our study design and the likely nature of the Trivers-Willard psychological mechanism. Given that the selection pressures that would have favored the evolution of a Trivers-Willard mechanism have existed for far longer than our species and given that conscious, deliberative thought is, in evolutionary terms, a new aspect of our psychology, it is likely that any Trivers-Willard psychological mechanism is ancient, deeply rooted, and largely unconscious 7 . The idea that many parenting decisions may not involve conscious thought is supported by the frequency with which researchers have found mismatches between actual parental behavior and parents' stated offspring sex preferences (reviewed in: 13–15). Using the IAT test, which is often viewed as a way to circumvent introspection, decrease the mental resources available to produce a deliberate response, and reduce the role of conscious intention 58 , was our attempt to reduce the effects of conscious deliberation on our results.
For the same reasons, we also deliberately avoided asking subjects whether they preferred sons or daughters after the prime and instead simply asked them to donate to a charity and to allocate their donation between boys and girls. Importantly, this is not simply a measure of stated preference but is a measure of actual behavior. It is also worth noting that latency times on implicit association tests have also been positively correlated with actual behavior 59 , 60 . Nevertheless, we acknowledge that it is far from clear how any of these processes enter conscious awareness, and we realize that alternative approaches may be better designed to avoid triggering conscious deliberation about which sex to favor. Another important limitation of our study is the generalizability of these results. As we mentioned previously, MTurk workers are not a nationally representative probability sample of the United States. Therefore, in the strictest sense, these results are representative only of Amazon Turk workers. Nevertheless, analyses of the characteristics of MTurk workers show that they meet or exceed psychometric standards of published research (e.g. completion rates or test-retest reliabilities) and are significantly more diverse and more representative than those of college populations, internet-based samples 61 , or in-person convenience samples (the modal sample in published experimental political science journals), but less representative than subjects from national probability samples 62 . Another issue concerns the fact that participants were self-selected in the sense that they chose whether or not to participate in the study. However, we do not feel that this limits our ability to interpret our results. Although it is true that MTurk workers decide for themselves whether to participate, they do so without any foreknowledge of what the project is about.
Furthermore, because this study employed an experimental design in which participants were randomly assigned to groups, random sampling is not necessary to obtain meaningful and interpretable results. This is because convenience sampling of participants such as MTurk workers does not threaten the internal validity of experiments in which there is random allocation of the sample members. Our random assignment of participants to one of three groups (control, rich prime and poor prime) suggests that any systematic differences in outcomes between treatment groups were due to differences in treatment and not to differences in some other unknown characteristic resulting from self selection or biased sampling. Implications These results may also have implications for rising income inequality and intergenerational social mobility. A recent study using the tax records of 40 million Americans between 1996 and 2012 showed that the single best predictor of lower intergenerational social mobility was having a single or divorced parent 63 . Because most of these single parents are females 64 , and females prefer daughters, we might expect even further reduced intergenerational mobility for the sons of these single mothers. Conclusion Strong frequency-dependent selection on optimal investment and allocation in the sexes driven by Fisher's principle of equal investment in the sexes 1 means that deviations from these ratios are expected to be subtle and extremely difficult to detect. Therefore, we should not expect, and should actually be suspicious of, strong effects on sex-biased investment for both statistical 65 and theoretical 3 reasons. The Trivers-Willard hypothesis 6 , sexual conflict 31 , economic models of sex biased PI 25 , and cultural practices 66 can all generate different predictions for optimal parental investment strategies. One effect (e.g., sexual conflict) can also often mask the effects of another (e.g., TWH).
For example, a male who has low socioeconomic status and good genes is expected to produce sons with better genes than his daughters; but, owing to his low socioeconomic status, he may be better off investing in his daughters, even though they are predicted to have worse genes than his sons. Results of this study contribute to the complex, and often contradictory, literature on sex-biased investment and preferences in humans. Methods Participants were recruited on Amazon Turk and were asked to complete a Qualtrics survey and an implicit association test [IAT] 67 online. Although subjects chose whether or not to participate, they did not know what the task would involve and only saw that it was an 'online survey' and saw that they would be paid $2.00 for completing the task. The Qualtrics survey design template and Amazon Mechanical Turk oversight allowed for sufficient control over who actually completed the survey and tests so that every worker who completed the survey and was paid was used in these analyses. Thirty individuals started the survey but did not complete it and their responses were discarded. At the beginning of the survey, participants were randomly assigned to one of three groups: (1) viewed an experimental prime that was designed to make them feel wealthy, (2) viewed an experimental prime that was designed to make them feel poor (see Experimental prime below) or (3) were not primed and were assigned to a control group. The tasks were counterbalanced such that half the participants took the IAT test before the prime and half took the IAT test after the prime. All participants made donations to charities supporting girls or boys (see below) immediately after the experimental prime. After these tasks all participants were asked to fill out an online survey (see below). This study was approved by the Office of Research and Regulatory Affairs, Rutgers University.
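The random assignment and counterbalancing described above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: the group labels, the seeding, and the use of a simple random coin for IAT order (rather than exact half/half blocking) are assumptions.

```python
import random

def assign_conditions(participant_ids, seed=0):
    """Assign each participant to control / rich-prime / poor-prime and
    randomise whether the IAT comes before or after the prime.
    Illustrative sketch only; labels and seeding are assumptions."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    assignments = {}
    for pid in participant_ids:
        assignments[pid] = {
            "group": rng.choice(["control", "rich_prime", "poor_prime"]),
            "iat_first": rng.random() < 0.5,  # roughly half take the IAT first
        }
    return assignments

assignments = assign_conditions(range(100))
```

Note that an exact counterbalance would shuffle a list containing equal numbers of each task order; the coin flip here only approximates the half/half split described in the text.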
Informed consent was received from all participants and all experiments were performed in accordance with relevant guidelines and regulations provided by the Rutgers Institutional Review Board. Experimental Prime Two randomly assigned groups of participants were primed to feel either poor (the words top, best and most were used in the script below) or wealthy (the words bottom, worst and least were used in the script below) by asking them to read the following script that we copied from an experiment priming people on social class 68 . "Think of the ladder below as representing where people stand in the United States. At the top/bottom are the people who are the best/worst off— those who have the most/least money, most/least education, and the most/least respected jobs. In particular, we'd like you to think about how YOU ARE DIFFERENT FROM THESE PEOPLE in terms of your own income, educational history, influence and job status." Then each of these groups (primed to feel poor or primed to feel rich) was asked to write about how they felt: "Now imagine yourself in a getting acquainted interaction with one of the people you just thought about from the ladder above. Think about how the DIFFERENCES BETWEEN YOU might impact what you would talk about, how the interaction is likely to go, and what you and the other person might say to each other. Please write 5 complete sentences about how you think this interaction would go". As a control, a third group was not primed with any script and was simply asked to "Please write 5 complete sentences about today's weather where you live". Predictors The independent variables used were designed to assess the socio-economic condition of the participants in childhood and adulthood. For clarity and simplicity, all scores were coded such that higher scores indicate higher status or condition and lower values indicate lower status or condition.
Sex For the statistical analyses, we coded female participants as 0 and male participants as 1. Childhood poverty was assessed by consolidating responses to survey questions on whether participants had received Medicaid, benefits for low income families (AFDC, TANF, "welfare"), SNAP (food stamps), free school lunches, lived in public housing or experienced homelessness, eviction or hunger when they were growing up. Each yes response provided a participant with one negative point such that lower scores indicated more poverty in childhood. Current poverty was assessed by consolidating responses to survey questions on whether participants currently receive Medicaid, benefits for low income families (AFDC, TANF, welfare), SNAP (food stamps), or live in public housing. Each yes response provided a participant with one negative point such that lower scores indicated more poverty currently. Income was assessed as the approximate annual income of the participant's household in the following ranges: (1) less than $20,000, (2) $20,000–45,000, (3) $45,001–$70,000, (4) $70,001–$100,000, and (5) greater than $100,000. Education and Parents' education were assessed as the highest level of education received in the following categories: (1) less than 9th grade, (2) some high school, (3) graduated from high school, (4) some college, (5) graduated from a 2-year college, (6) graduated from a 4-year college, (7) master's degree, (8) professional degree (e.g., law) and (9) doctoral degree. Perceived relative status was assessed by asking participants to "Think of the ladder below as representing where people stand in the United States. The top rung represents people who are best off and the bottom rung represents those who are worst off." Participants were then asked to "Please click on ONLY ONE area between the rungs that you think best represents where you stand in relation to other people." The rungs were ranked 1–10 with 1 at the bottom and 10 at the top.
We coded participants' clicks between the rungs as 0.5 at the low end and 9.5 at the high end. Health was assessed by asking participants "How would you rate your own health?" and then providing them with five options: poor, fair, good, very good, and excellent. Marital status Single, married, divorced, separated, or widowed. Children Dummy coded as 0 = did not have any children, 1 = had at least one child. Dependent variables Outcome variables were designed to assess preferred sex ratios and sex biased investment preferences of the participants. Outcome variables expressing a preference for boys have higher values and those expressing a preference for girls have lower values. Donations to charities supporting girls or boys was expected to assess investment preferences and was obtained by asking participants the following: "In addition to the $1.50 that you will earn from participating in this experiment, you also have the opportunity to donate an additional $1 to a charity that benefits needy infants. How would you like to allocate this donation? Try to imagine that these children are your own." The choices (their order was randomly generated) were as follows: (a) 0 cents to a boy and $1.00 to a girl, (b) 20 cents to a boy and 80 cents to a girl, (c) 40 cents to a boy and 60 cents to a girl, (d) 60 cents to a boy and 40 cents to a girl, (e) 80 cents to a boy and 20 cents to a girl or (f) $1.00 to a boy and 0 cents to a girl. The outcome variable indicates the percentage donated to boys (i.e., higher numbers equal preference for boy charities). When data collection was complete, we did donate the allocated funds to Save the Children. However, contrary to what we told our participants, those funds were not directed preferentially at either boys or girls. On a debriefing screen at the end of the study, we made participants aware of this difference between what they were told regarding the donations and the reality of the situation.
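The coding conventions above (one negative point per hardship item, and the six forced-choice donation options mapped to a percentage given to boys) can be sketched as follows. This is an illustrative reconstruction, not the authors' code, and the item names are hypothetical.

```python
def hardship_score(yes_no_items):
    """One negative point per 'yes' answer, so lower scores indicate more
    poverty, matching the coding described in the text. The dictionary keys
    (item names) here are hypothetical stand-ins for the survey items."""
    return -sum(1 for answered_yes in yes_no_items.values() if answered_yes)

# The six forced-choice donation options, mapped to the percentage donated
# to boys: option (a) gives everything to a girl, (f) everything to a boy.
DONATION_PCT_TO_BOYS = {"a": 0, "b": 20, "c": 40, "d": 60, "e": 80, "f": 100}

childhood = hardship_score({"medicaid": True, "food_stamps": True, "homeless": False})
```

Higher values of `DONATION_PCT_TO_BOYS` thus correspond to a stronger boy preference, consistent with the stated convention that boy-preferring outcomes take higher values.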
Adoption preference (0 = girl, 1 = boy) was expected to assess sex ratio preferences and was assessed with a forced choice question "Imagine that you and your partner want to adopt a child. You visit an orphanage and pay the adoption fee. The orphanage only allows couples to adopt one child. You are given the choice of fraternal (non-identical) twins, one boy and one girl. Both are 12 months old. Whom do you choose to adopt?" Implicit Association Tests were created to analyze the timed associations of positive and negative words with boy and girl words. These tests are designed to assess less deliberative and more automatic processing than self-reports 69 and are seen to be less influenced by the desire for enhancement or social desirability 70 . It is unclear whether these tests assess sex ratio preferences, investment preferences or both. Boy words were "Masculine", "Son", "Male", "Brother", "He" and "Father" and girl words were "Feminine", "Daughter", "Female", "Sister", "She" and "Mother". Positive words were "Healthy", "Alive", "Good", "Attractive", "Superior" and "Fertile" and negative words were "Sick", "Dead", "Bad", "Ugly", "Inferior" and "Childless". Positive values indicated faster association times between boy words and positive words and/or girl words and negative words, suggesting an implicit preference for boys. Meanwhile, negative values indicated faster association times between boy words and negative words and/or girl words and positive words, suggesting an implicit preference for girls. The IAT test we developed and used can be found at: . Preferred sex ratio was expected to determine the explicit sex ratio preferences of participants and was assessed by asking: "If you could choose the number and sex of all of the children you will have in your lifetime: How many boys would you want?— and How many girls would you want?— The survey can be found by clicking the following link: [Qualtrics Survey].
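One common way to reduce IAT response latencies to a signed score with the sign convention described above (positive = implicit boy preference) is a simplified version of the Greenwald et al. D score: the latency difference between the two block pairings, scaled by the pooled standard deviation. The paper does not specify its exact scoring algorithm, so the sketch below is only illustrative.

```python
from statistics import mean, pstdev

def iat_effect(boy_positive_block_ms, boy_negative_block_ms):
    """Simplified IAT effect in the spirit of the Greenwald et al. D score.
    Takes response latencies (ms) from the boy+positive / girl+negative
    block and the boy+negative / girl+positive block. A positive output
    means faster boy+positive responses, i.e. an implicit boy preference,
    matching the sign convention in the text. Illustrative sketch only."""
    pooled_sd = pstdev(list(boy_positive_block_ms) + list(boy_negative_block_ms))
    return (mean(boy_negative_block_ms) - mean(boy_positive_block_ms)) / pooled_sd

boy_pref = iat_effect([500, 520, 510], [700, 720, 710])   # positive: boy preference
girl_pref = iat_effect([700, 720, 710], [500, 520, 510])  # negative: girl preference
```

The full published D-score procedure also trims extreme latencies and applies error penalties; those refinements are omitted here for clarity.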
Finnish and American researchers in evolutionary biology conducted an online experiment and survey revealing that women prefer and are more likely to invest in their daughters and men in their sons. The study was designed to test the impact of parental resources on offspring sex preferences. Specifically, the authors sought to test the Trivers–Willard hypothesis that predicts that parents in good conditions will bias investment toward sons, while parents in poor conditions will bias investment toward daughters. "However, our study failed to show that the parents' preferences for the offspring's gender are affected by their status, wealth, education or childhood environment. Instead, parental preferences were best predicted by their sex. Women from all socioeconomic backgrounds expressed implicit and explicit preferences for daughters: they chose to donate more to charities supporting girls and preferred to adopt girls. In contrast, men expressed consistent, albeit weaker, preferences for sons," explains lead author, Postdoctoral Researcher Robert Lynch from the University of Turku, Finland. The researchers tested the Trivers–Willard effect with an online experiment by measuring implicit and explicit psychological preferences and behaviourally implied preferences for sons or daughters both as a function of their social and economic status and in the aftermath of a priming task designed to make participants feel wealthy or poor. The results of the research help to make sense of the often contradictory findings on offspring sex preferences. The effects of parental condition and status, competing genetic interests between males and females, economic constraints on families, and the effects of cultural practices all conspire to complicate the evolutionary outcomes of parental investment strategies. 
"Frequently, the impact of one factor, for example, the genetic sexual conflict between males and females, can mask the impact of another, such as the Trivers–Willard Hypothesis. This can make it difficult to parse their effects and make clear predictions about 'optimal' parental investment strategies from an evolutionary perspective. We hope that our study can shed new light on these strategies and provide a better understanding of evolutionary biology in humans," states Lynch. In addition to Postdoctoral Researcher Robert Lynch from the University of Turku, the study included researchers from Rutgers University and Arizona State University in the United States. The article was published in the journal Scientific Reports.
10.1038/s41598-018-33650-1
Medicine
Alcohol, tobacco and time spent outdoors linked to brain connections
Multimodal population brain imaging in the UK Biobank prospective epidemiological study, Nature Neuroscience, DOI: 10.1038/nn.4393 Journal information: Nature Neuroscience
http://dx.doi.org/10.1038/nn.4393
https://medicalxpress.com/news/2016-09-alcohol-tobacco-spent-outdoors-linked.html
Abstract Medical imaging has enormous potential for early disease prediction, but is impeded by the difficulty and expense of acquiring data sets before symptom onset. UK Biobank aims to address this problem directly by acquiring high-quality, consistently acquired imaging data from 100,000 predominantly healthy participants, with health outcomes being tracked over the coming decades. The brain imaging includes structural, diffusion and functional modalities. Along with body and cardiac imaging, genetics, lifestyle measures, biological phenotyping and health records, this imaging is expected to enable discovery of imaging markers of a broad range of diseases at their earliest stages, as well as provide unique insight into disease mechanisms. We describe UK Biobank brain imaging and present results derived from the first 5,000 participants' data release. Although this covers just 5% of the ultimate cohort, it has already yielded a rich range of associations between brain imaging and other measures collected by UK Biobank. Main The primary clinical role of brain imaging to date has been in diagnosis and monitoring of disease progression, rather than providing predictive markers for preventative stratification or early therapeutic intervention. The predominant strategy for finding image-based markers of neurological and psychiatric disease has been to identify patients early in the diagnostic process to maximize statistical power in a small cohort (tens to hundreds of subjects). A key factor motivating the use of small, clinically defined cohorts is the expense, time and specialized hardware associated with imaging. This approach has been effective in providing markers of disease progression, but identifying imaging markers of early disease requires measurements at the pre-symptomatic stage. Image-based measures of brain structure and function may evolve in a complex way throughout aging and the progression of neuropathology. 
Thus, markers with utility in monitoring disease progression post-diagnostically may not manifest pre-symptomatically, and conversely the most sensitive early predictors of disease may have plateaued by the time existing diagnoses become accurate. Nevertheless, when known risk factors have enabled risk-stratified cohorts, imaging has been able to predict disease before symptom presentation. For example, magnetic resonance imaging (MRI) has demonstrated altered brain activity that is associated with the APOE genotype decades in advance of symptoms associated with Alzheimer's disease 1 , and conversion from mild cognitive impairment to Alzheimer's has been predicted 2 . These studies suggest that the primary obstacle to identifying early imaging markers is obtaining data in pre-symptomatic cohorts drawn from the general population. Alternatively, pre-symptomatic cohorts can be assembled using a prospective approach, in which a large number of healthy participants are intensively phenotyped (including imaging) and subsequently monitored for long-term health outcomes. Although this approach is expensive, it is also efficient, as it captures early biomarkers and risk factors for a broad range of diseases. It further becomes possible to discover unexpected interactions between risk factors (such as lifestyle and genetics). To date, the largest brain imaging studies have gathered data on a few thousand subjects. Although this approach has identified associations between imaging and highly prevalent diseases, existing cohorts are still too small to produce sufficient incidence of many diseases if participants are recruited without identifying risk factors. UK Biobank is a prospective epidemiological resource gathering extensive questionnaires, physical and cognitive measures, and biological samples (including genotyping) in a cohort of 500,000 participants 3 . 
Participants consent to allow access to their full health records from the UK National Health Service, enabling researchers to relate phenotypic measures to long-term health outcomes. This is particularly powerful as a result of the combination of the number of subjects and the breadth of linked data. Participants were 40–69 years of age at baseline recruitment; this aims to balance the goals of characterizing subjects before disease onset against the delay before health outcomes accumulate. The cohort is particularly appropriate for the study of age-associated pathology. All data from UK Biobank are available to researchers world-wide on application, with no preferential access for the scientists leading the study. An imaging extension to the existing UK Biobank study was funded in 2016 to scan 100,000 subjects from the existing cohort, aiming to complete by 2022. Imaging includes MRI of the brain, heart and body, low-dose X-ray bone and joint scans, and ultrasound of the carotid arteries. Identification of disease risk factors should increase over time with emerging clinical outcomes. For example, in the imaged cohort, 1,800 participants are expected to develop Alzheimer's disease by 2022, rising to 6,000 by 2027 (diabetes: 8,000 rising to 14,000; stroke: 1,800 to 4,000; Parkinson's: 1,200 to 2,800) 4 . Here we present example analytic approaches and studies that will be enabled by UK Biobank. The identification of new imaging biomarkers of disease risk could support diagnosis, development of therapeutics and assessment of interventions. The multi-modal, multi-organ imaging enables the study of interactions between organ systems, for example, between cardiovascular health and dementia. The breadth of imaging makes this data set valuable for multi-systemic syndromes such as frailty, accelerated aging characterized by general loss of reserves and poor tolerance to stressors, which indicates increased risk for a range of conditions including dementia 5 . 
This kind of resource can also evince hypotheses regarding causal mechanisms of disease that could be tested in follow-up interventional studies. Examples include modifiable risk factors, such as the association of obesity with later life cognitive dysfunction 6 , and the ability to study complex interactions of risk factors with lifestyle, environment and genetics. Finally, UK Biobank will enable validation and extension of associations identified by smaller-scale studies, including the testing of hypotheses that combine results from multiple previous studies. Results Design rationale and initial imaging phase The imaging study was designed to achieve the target of 100,000 subjects, each scanned once, over 5–6 years at three dedicated, identical centers operating 7 days per week, each scanning 18 subjects per day (ref. 7 ). This requirement places tight timing constraints, corresponding to one subject imaged every 36 min (Online Methods ). The first imaging center was built to establish feasibility and scanned 10,000 subjects over a 2-year ramp-up period. Two further identical centers are being commissioned, with the three centers being strategically positioned at population hubs: Manchester, Reading and Newcastle. To capture imaging phenotypes relevant to the widest possible range of diseases and hypotheses, our protocol must deliver data with the broadest predictive power for neuropathology and mental health. We therefore included modalities that drive estimates of anatomical and neuropathological structure (structural MRI), brain activity (functional MRI, or fMRI), and local tissue microstructure (diffusion MRI, dMRI). The resulting imaging protocol ( Supplementary Table 1 ) included three structural modalities, T1-weighted, T2-weighted and susceptibility-weighted MRI (referred to here as T1, T2 and swMRI); dMRI; and both task and resting-state fMRI (tfMRI and rfMRI). 
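The scheduling constraint quoted above (one subject imaged every 36 min) follows from simple arithmetic, sketched here as a sanity check. The 365-day scanning year is an idealisation that ignores downtime, so this is a lower bound on the calendar time needed.

```python
# Back-of-the-envelope check of the stated scanning schedule:
# 100,000 subjects, 3 identical centres, 18 subjects/centre/day, 7 days/week.
TARGET_SUBJECTS = 100_000
CENTRES = 3
SUBJECTS_PER_CENTRE_PER_DAY = 18
MINUTES_PER_SUBJECT = 36

subjects_per_day = CENTRES * SUBJECTS_PER_CENTRE_PER_DAY       # 54 subjects/day
years_needed = TARGET_SUBJECTS / (subjects_per_day * 365)      # ~5.1 years
scanner_hours_per_day = SUBJECTS_PER_CENTRE_PER_DAY * MINUTES_PER_SUBJECT / 60
```

Roughly 5.1 years of uninterrupted scanning is consistent with the stated 5–6 year target, and 18 subjects at 36 min each implies a 10.8-hour scanning day per centre.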
Recent advances in MRI acquisition technology 8 enabled high spatial resolution dMRI and fMRI with high angular and temporal resolution, respectively, despite strict time constraints. For example, the protocol acquires dMRI data with 100 diffusion-encoding directions over two shells in just 7 min, enabling advanced model fitting of microstructural parameters that would not have been possible under these time constraints with previous generation technology. Following optimization of acquisition protocols, streamlining of participant preparation and minimization of scanner dead time (Online Methods ), UK Biobank was able to incorporate six neuroimaging modalities in just 36 min. Unlike most of the measurements included in the original UK Biobank resource (for example, alcohol consumption and cognitive test scores), raw imaging data is not a directly useful source of information. In addition to requiring image processing to remove artifacts and align images across modalities and individuals, most useful image phenotypes are derived through complex calculations that combine many voxels and/or images. A fully automated processing pipeline was developed that produces both processed images as well as image-derived phenotypes (IDPs); there are currently 2,501 distinct individual measures of brain structure and function. Example IDPs include the volume of specific brain structures, the strength of connectivity between pairs of brain regions and the estimated dispersion of fibers in a given white-matter tract. IDPs are intended to be useful for non-imaging experts; however, understanding of the confounds and pitfalls of imaging is required to draw appropriate conclusions. Here we present results from the first data release ( ), which includes outputs from the processing pipeline for 5,285 subjects scanned in 2014–2015. 
As determined by the processing pipeline, 98% of participants' data sets resulted in a usable T1, which is crucial for deriving usable information from the other modalities. Of these, data for the other brain imaging modalities were suitable for processing in the following percentages of subjects: T2 = 97%, swMRI = 93%, dMRI = 95%, tfMRI = 92% and rfMRI = 95%. All modalities were acquired and were usable in 89% of subjects. Results from this data release are illustrated in Figures 1, 2, 3 and 4, including a multimodal atlas (separate population-average images for each of the modalities, all aligned to each other), available for download and online browsing at . Figure 1: Data from the three structural imaging modalities in UK Biobank brain imaging. (a) Single-subject T1-weighted structural image with minimal pre-processing: removal of intensity inhomogeneity, lower neck areas cropped and the face blanked to protect anonymity. Color overlays show automated modeling of several subcortical structures (above) and segmentation of gray matter (below). (b) Single-subject T2-weighted FLAIR image with the same minimal pre-processing showing hyperintense lesions in the white matter (arrows). (c) Group-average (n ≈ 4,500) T1 atlas; all subjects' data were aligned together (see Online Methods for processing details) and averaged, achieving high-quality alignment, with clear delineation of deep gray structures and good agreement of major sulcal folding patterns despite wide variation in these features across subjects. (d) Group-average T2 FLAIR atlas. (e) Group-average atlas derived from SWI processing of swMRI phase and magnitude images. (f) Group-average T2* atlas, also derived from the swMRI data.
(g) Manhattan plot (a layout common in genetic studies) relating all 25 IDPs from the T1 data to 1,100 non-brain-imaging variables extracted from the UK Biobank database, with the latter arranged into major variable groups along the x axis (with these groups separated by vertical dotted lines). For each of these 1,100 variables, the significance of the cross-subject univariate correlation with each of the IDPs is plotted vertically, in units of −log10(P uncorrected). The dotted horizontal lines indicate thresholds corresponding to multiple comparison correction using FDR (lower line, corresponding to P uncorrected = 3.8 × 10^−5) and Bonferroni correction (upper line, P uncorrected = 1.8 × 10^−8) across the 2.8 million tests involving correlations of all modalities' IDPs against all 1,100 non-imaging measures. Effects such as age, sex and head size are regressed out of all data before computing the correlations. As an indication of the corresponding range of effect sizes, the maximum r² (fractional variance of either variable explained by the other) is calculated, as well as the minimum r² across all tests passing the Bonferroni correction. Here, the maximum r² = 0.045 and the minimum r² = 0.0058. (h) Plot relating all 14 T2* IDPs to 1,100 non-imaging variables. Maximum r² = 0.034, minimum r² = 0.0063. Marked Bonferroni and FDR multiple comparison threshold levels are presented as in g. Figure 2: The diffusion MRI data in UK Biobank. (a) Group-average (n ≈ 4,500) atlases from six distinct dMRI modeling outputs, each sensitive to different aspects of the white matter microarchitecture. The atlases shown are: FA, MD (mean diffusivity) and MO (tensor mode); and ICVF (intra-cellular volume fraction), ISOVF (isotropic or free water volume fraction) and OD (orientation dispersion index) from the NODDI microstructural modeling.
Also shown are several group-average white matter masks used to generate IDPs (for example, pink (r) are retrolenticular tracts in the internal capsules; upper green (s) are the superior longitudinal fasciculi). ( b ) Tensor ellipsoids depicting the group-averaged tensor fit at each voxel for the region shown in the inset in c . The shapes of the ellipsoids indicate the strength of water diffusion along three principal directions; long thin tensors indicate single dominant fiber bundles, whereas more spherical tensors (within white matter) generally imply regions of crossing fibers (seen more explicitly modeled in corresponding parts of c ). ( c ) Group-averaged multiple fiber orientation atlases, showing up to three fiber bundles per voxel. Red shows the strongest fiber direction, green the second and blue the third. Each fiber bundle is only shown where the modeling estimates that population to have greater than 5% voxel occupancy. Inset shows the thresholded mean FA image (copper) overlaid on the T1, with the region shown in detail in b and c . ( d ) Four example group-average white matter tract atlases estimated by probabilistic tractography fed from the within-voxel fiber modeling: corpus callosum (genu), superior longitudinal fasciculus, corticospinal tract and inferior fronto-occipital fasciculus. ( e ) Plot relating all 675 dMRI IDPs (nine distinct dMRI modeling outputs from tensor and NODDI models × 75 tract masks) to 1,100 non-imaging variables (see Fig. 1g for details). Maximum r 2 = 0.057, minimum r 2 (passing Bonferroni) = 0.0065. Dotted horizontal lines (multiple comparison thresholds) are described in Figure 1g .

Figure 3: The task fMRI data in UK Biobank. ( a ) The task paradigm temporal model (time running vertically) depicting the periods of the two task types (shapes and faces); for more information on this paradigm view, see .
( b ) Example fitted activation regression model versus time-series data (time running horizontally) for the voxel most strongly responding to the 'faces > shapes' contrast in a single subject (Z = 12.3). ( c ) Percentage of subjects passing simple voxel-wise activation thresholding (Z > 1.96) for the same contrast. Note the reliable focal activation in left and right amygdala. The underlying image is the group-averaged raw fMRI image. ( d ) Group-averaged activation for the three contrasts of most interest, overlaid on the group-average T1 atlas (fixed-effects group average, Z > 100, voxelwise P corrected < 10 −30 ). ( e ) Plot relating the 16 tfMRI IDPs to 1,100 non-imaging variables (see Fig. 1g for details). Maximum r 2 = 0.018, minimum r 2 (passing Bonferroni) = 0.0062. Dotted horizontal lines (multiple comparison thresholds) are described in Figure 1g .

Figure 4: The resting-state fMRI data in UK Biobank. ( a ) Example group-average resting-state network (RSN) atlases from the low-dimensional group-average decomposition showing 4 of 21 estimated functional brain networks, including the default mode network (red-yellow), dorsal attention network (green), primary visual (copper) and higher level visual (dorsal and ventral streams, blue). The three slices shown are (top to bottom) sagittal, coronal and axial. ( b ) The 55 non-artifact components from a higher dimensional parcellation of the brain (axial views). These are shown as displayed by the connectome browser, which allows interactive investigation of individual connections in the group-averaged functional network modeling. The 55 brain regions (network nodes) are clustered into groups according to their average population connectivity, and the strongest individual connections are shown (positive in red, anticorrelations in blue). ( c ) Plot relating the 76 rfMRI 'node amplitude' IDPs to 1,100 non-imaging variables (see Fig. 1g for details).
Maximum r 2 = 0.065, minimum r 2 (passing Bonferroni) = 0.0059. ( d ) Plot relating the 1,695 rfMRI 'functional connectivity' IDPs to 1,100 non-imaging variables. Maximum r 2 = 0.032, minimum r 2 = 0.0059. Dotted horizontal lines (multiple comparison thresholds) in c and d are described in Figure 1g .

Imaging data, atlases and imaging-derived phenotypes

The three structural modalities ( Fig. 1 ) provide information about different aspects of the brain's tissues, structures and neuropathologies. Data quality at the single-subject level is illustrated in Figure 1a,b . The group-averaged images produced for each modality are included in the initial data release as high-quality atlases ( Fig. 1c–f ), depicting strong tissue contrast and excellent fidelity of alignment across subjects. The T1 modality ( Fig. 1a,c ) is the most informative about the basic structure of the brain, including the depiction of the main tissue types (gray and white matter) and gross structure of the brain (main anatomical landmarks). From the T1 data, we derived 25 volumetric IDPs: total tissue volumes (gray, white and ventricular cerebrospinal fluid) and the volumes of subcortical gray matter structures such as thalamus, caudate, putamen, pallidum, hippocampus and amygdala. The T1 data and T1-derived IDPs provide sensitive markers of atrophy (tissue loss), which can be both global (for example, thinning of the cortex in aging) 9 and local (for example, reduction of hippocampal volume in Alzheimer's disease) 10 . The T2 data ( Fig. 1b,d ) is a fluid-attenuated inversion recovery (FLAIR) acquisition that also depicts basic anatomy, but is valuable primarily for detection of focal 'hyperintensities' (that is, high-signal regions) in white matter.
T2 hyperintensities represent white matter lesions that have been associated with a broad range of neuropathological conditions 11 (for example, small vessel ischemic disease), and occur with increasing incidence in aging populations without (or potentially before) manifestation of neurological symptoms. IDPs relating to the volume of these white matter lesions will be included in future data releases. swMRI is a flexible modality that can be processed in multiple ways, each sensitive to different clinically relevant properties. The first data release includes T2* signal decay times and enhancement of venous vasculature using susceptibility-weighted image (SWI) filtering 12 ( Fig. 1e,f ). swMRI IDPs in the current data release are the median T2* in each of 14 major subcortical gray matter structures, for example, reflecting increased iron deposition associated with neurodegeneration 13 . dMRI ( Fig. 2 ) reflects the random diffusion of water molecules, which is affected by the microscopic structure of tissue 14 , enabling us to infer the local density of cellular compartments in tissue (for example, neurites). In addition, axon bundles in white matter create an orientation dependence of water movement as a result of hindrance of diffusion perpendicular to the long axis of white matter tracts, an effect that can be tracked from voxel to voxel (tractography) to derive long-range white matter pathways. Three complementary diffusion models were fit to the signal in each voxel: (i) the diffusion tensor model 15 , describing the signal phenomenologically as resulting from a three-dimensional ellipsoid profile of water displacement; (ii) the neurite orientation dispersion and density imaging (NODDI) model 16 , estimating microstructural properties (for example, neurites versus extracellular space); and (iii) the ball and sticks model 17 , estimating the orientation of multiple fiber populations in a voxel for tractography. 
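The tensor model's summary measures have simple closed forms once the tensor has been fit: mean diffusivity (MD) is the average of the three eigenvalues, and fractional anisotropy (FA) measures how unequal they are. A minimal sketch (eigenvalue inputs here are hypothetical, not from the UK Biobank data) illustrates the two standard definitions:

```python
import numpy as np

def tensor_metrics(eigenvalues):
    """MD and FA from the three eigenvalues of a fitted diffusion
    tensor, using the standard closed-form definitions."""
    lam = np.asarray(eigenvalues, dtype=float)
    md = lam.mean()  # mean diffusivity: average eigenvalue
    # FA: normalized standard deviation of the eigenvalues, in [0, 1]
    fa = np.sqrt(1.5 * np.sum((lam - md) ** 2) / np.sum(lam ** 2))
    return md, fa

# Hypothetical eigenvalues in mm^2/s: isotropic diffusion (e.g. free
# water) gives FA near 0; a single coherent fiber bundle gives FA near 1.
md_iso, fa_iso = tensor_metrics([3.0e-3, 3.0e-3, 3.0e-3])
md_wm, fa_wm = tensor_metrics([1.7e-3, 0.2e-3, 0.2e-3])
```

Low FA in white matter can reflect either degraded fibers or crossing-fiber geometry, which is why the text treats FA as sensitive but nonspecific.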
We extracted 675 IDPs by averaging parameters estimated by the first two models over 75 different white matter tract regions on the basis of both subject-specific tractography 18 and population-average white matter masks 19 . fMRI reflects neural activity indirectly, measuring dynamic changes in blood oxygenation and flow resulting from changes in neural metabolic demand 20 . The task deployed in tfMRI ( Fig. 3 ) involved matching shapes and emotionally negative faces 21 and was chosen to engage a range of neural systems, from low-level sensory and motor to perceptual (for example, fusiform) and emotional (for example, amygdala) areas. The 16 tfMRI IDPs quantitate the strength of brain activity changes for specific aspects of the task in regions defined using the group-averaged activation maps shown across three task conditions. Resting-state fMRI ( Fig. 4 ) identifies connected brain regions on the basis of common fluctuations in activity over time in the absence of an explicit task 22 . Sets of voxels that cofluctuate most strongly correspond to brain regions, referred to as network 'nodes'; different nodes may have weaker cofluctuations, indicating a connection between them, or a network 'edge'. The group analysis of the rfMRI data generated two atlases of these functional networks: a low-dimensional decomposition of the brain into 21 functional subdivisions and a higher dimensional parcellation into 55 subdivisions. IDPs represent edge connectivity strengths and node fluctuation amplitudes ( Fig. 4 ).

Voxel-wise associations with aging

IDPs reduce raw data into a compact set of biologically meaningful measures, with current measures condensing ∼ 2GB of raw data per subject into 2,501 IDPs, but such summary measures can lose valuable information. For example, once aligned to common coordinate systems, images can be analyzed for cross-subject variation at the voxel level to provide a more spatially detailed exploration than can be achieved via IDPs.
However, this requires greater imaging expertise and computational resources, as well as often leading to lower statistical power (as a result of the greatly increased number of multiple comparisons and the higher noise in voxel-wise measures compared with regional averages). Figure 5 presents voxel-wise correlations of age with several parameters modeled from the dMRI data (along the centers of the main white matter tracts), as well as normalized T2 FLAIR intensity in the white matter. Fractional anisotropy (FA), a sensitive, but nonspecific, marker of white matter integrity, predominantly demonstrated the established reduction of FA with aging ( Fig. 5a,g ). However, some voxels exhibited the opposite, with FA increasing with aging, which may reflect degradation of secondary fibers or reduced fiber dispersion 23 ; notably, none of the FA-based IDPs exhibited this significant positive correlation, demonstrating that averaging across tracts can sacrifice richness of information. The tensor mode 24 ( Fig. 5b ), which primarily describes whether a voxel contains one versus multiple tracts, was even more sensitive, with highly significant positive correlations in certain association fiber areas and posterior corpus callosum, which is likely the same effect seen as FA increases 23 . We further observed an increase in free water with aging ( Fig. 5d ); the strongest increase, in the fornix, was likely a result of an increase of the fraction of cerebrospinal fluid in voxels spanning this thin tract as it atrophies. Finally, we calculated voxel-wise cross-subject correlation of age with T2 images. This analysis identified peri-ventricular areas, which are most susceptible to white matter hyperintensities known to be associated with aging ( Fig. 5e ). Figure 5: Voxel-wise correlations of participants' age against several white matter measures from the dMRI and T2 FLAIR data. ( a ) Voxel-wise (cross-subject) correlation of FA versus age. 
Group-average FA in white matter is shown in green, overlaid onto the group-average T1. ( b ) Correlation of MO versus age, using the same color scheme. Nearby areas of MO increase are shown in greater detail in f , which also shows the distinct primary fiber directions. ( c ) Correlation of OD versus age, including a reduction in dispersion in posterior corpus callosum. ( d ) Correlation of ISOVF versus age, showing increases in freely diffusing water with age in a broad range of tracts. ( e ) Voxel-wise correlation of T2 FLAIR intensity showing increased intensity with aging in white matter. For a – e , blue and red-yellow show negative and positive Pearson correlation with age, respectively ( P corrected < 0.05, with Bonferroni correction across voxels resulting in significance at r = 0.1; dMRI n = 3,722; T2 FLAIR n = 3,781). ( g ) Histograms (across voxels) of the voxel-wise age correlation of the correlation maps shown above, with correlation value on the x axis. FA and MO largely decreased with age, whereas OD and ISOVF largely increased.

A further example voxelwise analysis is shown in Supplementary Figure 1 , in which we used the rfMRI data to investigate aging effects in the default-mode resting-state network 25 . This also provides a demonstration that group analyses do not degrade with increasingly large subject numbers (for example, as a result of alignment issues), as we used group sizes from 15 to 5,000. With increasing subject numbers, background noise was suppressed without increase in spatial blurring, and localized estimates of age-dependence stabilized, with statistical significance rising indefinitely.

Pairwise associations between brain IDPs and other measures

We conducted simple univariate association analyses to illustrate the richness of relationships between IDPs and other available variables, as well as the statistical power afforded by ∼ 5,000 subjects.
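The mass-univariate screen with Bonferroni and Benjamini-Hochberg FDR control described here can be sketched as follows (toy array sizes and a planted effect, not the UK Biobank data; the paper's actual screen spans ~5,000 subjects, 2,501 IDPs and 1,100 variables):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subj, n_idps, n_vars = 500, 10, 20     # toy dimensions
idps = rng.standard_normal((n_subj, n_idps))
nonimg = rng.standard_normal((n_subj, n_vars))
nonimg[:, 0] += 0.5 * idps[:, 0]         # plant one true association

# All pairwise Pearson correlations and their p-values
pvals = np.array([[stats.pearsonr(idps[:, i], nonimg[:, j])[1]
                   for j in range(n_vars)] for i in range(n_idps)])

m = pvals.size
bonferroni_hits = pvals < 0.05 / m       # family-wise error control

# Benjamini-Hochberg FDR: largest k with p_(k) <= (k/m) * q
q = 0.05
p_sorted = np.sort(pvals.ravel())
passing = p_sorted <= (np.arange(1, m + 1) / m) * q
fdr_thresh = p_sorted[passing].max() if passing.any() else 0.0
fdr_hits = pvals <= fdr_thresh
```

Because the FDR threshold can only sit at or above the Bonferroni cutoff for the smallest p-value, FDR always admits at least as many associations, which is why the lower dotted line in the Manhattan plots is the FDR one.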
We individually correlated all 2,501 brain IDPs with 1,100 other Biobank variables; the latter were broadly grouped into 11 categories ( Figs. 1 , 2 , 3 , 4 , 6 , and Supplementary Fig. 2 ). Even after false discovery rate (FDR) multiple comparison correction for these 2.8 million correlations, 57 of the 66 combinations of brain modalities and non-brain-imaging categories showed significant associations. Some variable categories exhibited large numbers of associations with IDPs (for example, height and weight), whereas others (for example, cognitive measures and alcohol and tobacco intake) had more focused associations. Figure 6: Visualization of 2.8 million univariate cross-subject association tests between 2,501 IDPs and 1,100 other variables in the UK Biobank database. ( a ) Manhattan plot showing, for each of the 1,100 non-brain-imaging variables, the statistically strongest association of that variable with each distinct imaging sub-modality's IDPs (that is, six results plotted for each x axis position, each with a color indicating a brain imaging modality; this plot differs from the other Manhattan plots, which show correlations with all IDPs). Whereas the Manhattan plots in Figures 1 , 2 , 3 , 4 showed associations for each brain imaging modality separately, all associations are depicted in a single plot. ( b ) List of all IDP-cognitive score associations passing Bonferroni correction for multiple comparisons ( P corrected < 0.05; P uncorrected < 1.8 × 10 −8 ). The first column lists the age-adjusted correlation coefficient, and the second shows the unadjusted correlation, both being correlations between a specific brain IDP (fifth column) and a cognitive test score (sixth column). The UK Biobank cognitive tests carried out included fluid intelligence, prospective memory, reaction time (shape pairs matching), memorized pairs matching, trail making (symbol ordering), symbol digit substitution, and numeric memory. 
( c ) IDP associations with the cognitive phenotype variables (the full set of 174 cognitive variables, repeated for each brain imaging modality). Shown behind, in gray, are the same associations without adjustment for age, with a large number of stronger associations. Dotted horizontal lines (multiple comparison thresholds) in a and c are described in Figure 1g . ( d ) Scatterplot showing the relationship between adjusted correlations and those obtained without first regressing out the confound variables (each point is a pairing of one IDP with one non-brain-imaging variable, 2.8 million points). The grid lines indicate Bonferroni-corrected significance level (as described in Fig. 1 ). ( e ) Example association between unadjusted white matter volume and fat-free body mass is high ( r = 0.56) when pooling across the sexes. After adjusting for several variables (including sex), the correlation falls almost to zero.

The above associations were estimated after adjusting all variables for age, sex, age-sex interaction, head motion and head size (de-confounding). Some factors can unambiguously be considered a confound to be removed (for example, head motion, which can corrupt IDPs, but also correlates with disease and aging 26 ). For other factors (for example, age), the appropriateness of de-confounding depends on the question being asked and needs to be taken into consideration when interpreting associations (see Discussion). The relationship between the correlations estimated with versus without de-confounding ( Fig. 6d ) revealed that, in almost all cases, the strength of association was reduced by de-confounding, and in some cases was almost entirely removed (horizontal cloud around y = 0). We considered associations between cognitive tests and brain IDPs, including potential age interactions, in greater detail. Sex, head motion and head size were regressed out of all data before computing correlations (Online Methods ).
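The de-confounding strategy of regressing confounds out of both variables before correlating them can be sketched in a few lines (all variable names and effect sizes below are hypothetical, for illustration only):

```python
import numpy as np

def deconfound(y, confounds):
    """Residualize y against a confound matrix (plus an intercept)
    by ordinary least squares."""
    X = np.column_stack([np.ones(len(y)), confounds])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

# Toy data: an IDP and a cognitive score both decline with age, with
# no direct link between them, so their raw correlation is spurious.
rng = np.random.default_rng(1)
n = 4000
age = rng.uniform(45, 80, n)
idp = -0.08 * age + rng.standard_normal(n)
cognition = -0.06 * age + rng.standard_normal(n)

raw_r = np.corrcoef(idp, cognition)[0, 1]  # inflated by shared aging
adj_r = np.corrcoef(deconfound(idp, age[:, None]),
                    deconfound(cognition, age[:, None]))[0, 1]
```

The adjusted correlation collapses toward zero, mirroring the horizontal cloud around y = 0 in Figure 6d.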
Figure 6b shows Bonferroni-significant ( P uncorrected < 1.8 × 10 −8 ) associations with brain IDPs, both with and without adjusting for age. The task-fMRI versus fluid intelligence associations were unchanged by adjusting for age, whereas all other cognition-IDP correlations were approximately doubled, being significantly stronger ( P corrected < 0.005) without age adjustment. In the symbol digit substitution test, participants replaced symbols with numbers using a substitution key. Strong IDP associations were found with two scores: the number of symbol digit matches made correctly and the number of symbol digit matches attempted in the time allowed (because subjects rarely made mistakes, these two scores are highly correlated, r = 0.97). These scores correlated negatively with measures of water diffusivity in the corona radiata and superior thalamic radiation, and with FA in the posterior fornix (consistent with previous studies 27 , which may reflect variations in tract thickness 28 ). Finally, there was a significant association with thalamus volume (right thalamic volume significant, left thalamic volume close to significance with r = 0.10), consistent with previous findings 29 . These negative associations likely reflect lower cognitive performance with aging and pathology (increased diffusivity and atrophy). In the reaction time test, subjects confirmed whether two abstract symbols matched as quickly as possible. The mean time to correctly identify matches was found to correlate inversely with left putamen volume (right putamen had similar correlation, r = −0.06, but was below significance). These negative associations are consistent with previous findings 30 and indicate that increased volume correlated positively with cognitive speed (and negatively with reaction time). The fluid intelligence score reports how many numerical, logic and syntactic questions subjects were able to answer in 2 min.
This was negatively correlated with the strength of gray matter activation in the simple shapes matching task in tfMRI, with no age interaction. The shapes matching task incurs low cognitive demand, and it is plausible that higher intelligence requires less neural activity for this task, a mechanism that has previously been ascribed to minimization of cognitive workload 31 . All cognitive scores reported above involve processing speed as a significant factor, consistent with previous studies 27 . However, the observation that different test scores do not all correlate identically with each other or with the same brain IDPs suggests that there is not a single (speed-related) cognitive factor involved. The increases in association strengths when not controlling for age suggest that age-related cognitive decline is a major source of cross-subject variability for these IDP-cognition associations 28 . Plotting all IDP-cognitive associations ( Fig. 6c ) revealed that a large number of non-age-adjusted associations were stronger than the results after age adjustment; below, we show how interpretation of such results can be aided further through multivariate analyses. These age interactions provide an early indication that UK Biobank should provide cognitive biomarkers of clinical relevance as health outcomes accumulate.

Multivariate associations: modes of population variation

We conducted multivariate analyses using canonical correlation analysis (CCA) 32 combined with independent component analysis (ICA 33 ; Figs. 7 and 8 , Supplementary Figs. 3 , 4 , 5 , 6 , 7 , and Online Methods ). This analysis identifies 'modes' of population covariation linking IDPs to non-imaging measures. Each mode consists of one linear combination of IDPs and a separate combination of non-imaging measures that have a highly similar variation across subjects. The strength of involvement of a variable in a given mode is dictated by the variable weight ( Fig. 7 ).
Multiple population modes may be identified, provided that they describe different (independent) cross-subject variation, meaning that the implied association between a given pair of variables can vary from mode to mode.

Figure 7: Details of three modes from the doubly-multivariate CCA-ICA analyses across all IDPs and non-brain-imaging variables. IDPs are listed in orange and non-brain-imaging variables in black. The lists show the variables most strongly associated with each mode; where multiple very similar (and highly correlated) non-imaging variables are found, only the most significant is listed here for brevity. The first column shows the weight (strength and sign) of a given variable in the ICA mode, the second shows the (cross-subject) percentage variance of the data explained by this mode, and the third column shows the percentage variance explained in the data without the confounds first regressed out. ( a ) Mode 7 links measures of bone density, brain structure/tissue volumes and cognitive tests. ( b ) Mode 8 links measures of blood pressure and alcohol intake to IDPs from the diffusion and functional connectivity data; two functional network connections strongly involved are displayed, with the population mean connection indicated by the bar connecting the two nodes forming the connection (red indicates positive mean correlation, blue negative, and the width of the bar indicates the connection strength). The group-ICA maps are thresholded at Z > 5, and the colored text is the ICA weight shown in the table list. ( c ) Mode 9 includes a wide range of imaging and non-imaging variables; as well as showing three strong functional network connections, we also show two functional nodes whose resting fluctuation amplitude is associated with this mode.

Figure 8: Hypothesis-driven study of age, BMI and smoking associations with subcortical T2*.
( a ) UK Biobank population-average map of T2*, overlaid with the main subcortical structures being investigated. The T2* IDPs reflect individuals' median T2* values in these regions. The relatively low T2* in putamen and pallidum likely reflects greater iron content. ( b ) BMI regression betas from multiple regressions of R2* (from the ASPS study) and T2* (from UK Biobank) against relevant covariates (see c ). All variables are standardized so that beta values can be interpreted as (partial) correlation coefficients. R2* significance is reported as FDR-corrected P . T2* significance is reported as –log 10 P uncorrected with the more conservative Bonferroni correction (for P corrected = 0.05) resulting in a threshold here of 3.6. ( c ) Full set of univariate and multiple regression betas and significance values for all brain regions tested and all model covariates. The regression results are much sparser, reflecting the higher associational specificity obtained by reporting unique variance explained.

From the current UK Biobank release, we identified nine modes that were highly significant ( P corrected < 0.002, no further modes significant at P corrected < 0.05). Similar methodology using Human Connectome Project (HCP) data previously identified a single statistically significant mode of population covariation in 461 young healthy adults 8 , 34 . Our ability to identify more modes than in the HCP data set could be a result of the tenfold increase in the number of subjects, the larger range of imaging modalities and non-imaging variables, and the older ages of subjects. Although these modes are not guaranteed to reflect biological processes, in practice ICA often produces such interpretability 35 . Of the nine modes, some reflected dominant physical factors (for example, body size or heart rate), whereas others linked rich subsets of non-imaging measures to IDPs.
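The CCA step of this kind of analysis can be sketched with plain linear algebra: whiten each data block, then take the SVD of the cross-product of the whitened blocks, whose singular values are the canonical correlations. This is only the CCA half of the paper's CCA-ICA pipeline (the subsequent ICA rotation is omitted), and the block sizes and planted latent mode below are illustrative, not the UK Biobank data:

```python
import numpy as np

def cca(X, Y, n_modes):
    """Minimal canonical correlation analysis: whiten each centered
    block via SVD, then SVD the cross-product of the whitened blocks.
    Returns canonical variates for each block and the canonical
    correlations (the singular values)."""
    X = X - X.mean(0)
    Y = Y - Y.mean(0)
    Ux = np.linalg.svd(X, full_matrices=False)[0]  # orthonormal basis of X
    Uy = np.linalg.svd(Y, full_matrices=False)[0]  # orthonormal basis of Y
    U, S, Vt = np.linalg.svd(Ux.T @ Uy)
    return (Ux @ U)[:, :n_modes], (Uy @ Vt.T)[:, :n_modes], S[:n_modes]

# Toy population: one latent "mode" drives both an imaging block and a
# non-imaging block (dimensions and effect sizes are hypothetical).
rng = np.random.default_rng(2)
n = 800
latent = rng.standard_normal(n)
idp_block = np.outer(latent, rng.standard_normal(5)) + rng.standard_normal((n, 5))
nonimg_block = np.outer(latent, rng.standard_normal(4)) + rng.standard_normal((n, 4))

x_scores, y_scores, can_r = cca(idp_block, nonimg_block, n_modes=2)
```

The first canonical correlation recovers the shared latent mode; in practice, significance of such modes would be assessed by permutation testing, and ICA is then applied to improve the interpretability of the recovered modes.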
Modes 7–9 are displayed in Figure 7 , and modes 1–6 are overviewed in Supplementary Figures 3 , 4 , 5 , 6 , 7 . The relationships of these multivariate associations to potential confounds and variables of interest, including some clinical outcomes, are shown in Supplementary Figure 8 . Modes 1, 2, 4, 5, 7 and 8 were strongly associated with aging, whereas 3, 6 and 9 were not. Mode 7 primarily linked bone density measures and cognitive scores to brain structure and dMRI measures. There is extensive literature linking volume and diffusivity measures to cognition, but a relationship between these measures and bone density has not, to our knowledge, been reported. This link could reflect variations in physical properties of noninterest that are not fully accounted for by de-confounding. However, correlations between low bone density and accelerated cognitive decline have been reported 36 , including association of bone density with Alzheimer's disease 37 . Mode 9 exhibited the most complex population pattern ( Fig. 7c ). The most strongly involved non-imaging measures were intelligence, education levels and occupational factors; in addition, some physical and dietary measures were involved that may reflect socio-economic status as a latent factor (for example, cheese intake or time spent outdoors in winter). Associated brain IDPs included task fMRI (with a negative weight, consistent with the sign of univariate associations), followed by a range of functional and structural IDPs. There was some overlap between modes 7 and 9 in terms of cognitive tasks (for example, symbol digit matches), bone density and T1-based brain volumes. However, the fact that CCA-ICA separated modes 7 and 9 indicates that they constitute distinct biophysical patterns of variation across subjects; for example, mode 7 correlated with age, whereas mode 9 did not. 
The broader range of non-imaging measures involved in mode 9, and the ability to interpret many of them in terms of positive or negative life factors, is reminiscent of the single mode previously reported from HCP data 8 , 34 . That mode resembled the well-established observation of strong correlations in subject performance across a broad range of cognitive and behavioral tests (the general intelligence g-factor), but also included demographic and life factors. However, the correspondence between the mode 9 reported here and the previous HCP mode is not perfect. This may be a result of key differences in the HCP and Biobank data sets, including different non-imaging measures, the use of only rfMRI in the HCP analysis, the different cohort profiles (for example, age range) and the ability to separate more modes from the larger Biobank cohort.

Illustrative hypothesis-driven study

The Austrian Stroke Prevention Study (ASPS) recently reported associations between aging, smoking and body mass index (BMI) with gray matter T2* in 314 participants (38–82 years) 38 , likely reflecting iron accumulation in local tissue 13 . We sought to replicate several of their key findings as a demonstration of a hypothesis-led investigation. The ASPS reported R2*, which is the reciprocal of the T2* value that we estimated in UK Biobank; thus, we expected T2* associations with opposite signs to those reported by ASPS. The main results from ASPS in deep gray matter structures were that BMI was generally the strongest determinant of R2* and was significantly related to R2* in amygdala (beta = 0.23, P FDR = 0.009) and hippocampus (beta = 0.14, P FDR < 0.0001). Further associations with R2* (averaged across subcortical structures) were found for age (beta = 0.03, P FDR = 0.027) and recent smoking level (beta = 0.02, P FDR = 0.001). No equivalent associations were found for sex or hypertension.
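The expected sign flip between the two studies follows directly from the reciprocal relationship: if R2* rises with a covariate, T2* = 1/R2* must fall with it. A toy simulation makes this concrete (all numbers are illustrative, not fitted to either study):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2000
bmi = rng.normal(27, 4, n)
# Hypothetical positive BMI effect on R2* (units 1/s), as ASPS reported
r2star = 20 + 0.3 * bmi + rng.normal(0, 2, n)
t2star = 1.0 / r2star  # the reciprocal quantity estimated in UK Biobank

r_with_r2 = np.corrcoef(bmi, r2star)[0, 1]  # positive
r_with_t2 = np.corrcoef(bmi, t2star)[0, 1]  # negative, similar magnitude
```

Because the reciprocal is a monotone decreasing transform over the physiological range, the correlation reverses sign while remaining comparable in strength.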
The ASPS conducted univariate correlations and multiple regressions to identify both shared and unique variance in the associations, using FDR correction. On the basis of these results, we hypothesized a negative association of T2* in subcortical structures with BMI, age and smoking. We conducted similar analyses, applying univariate correlations and multiple regressions against a similar set of covariates to ASPS ( Fig. 8 ). The regressions used the n = 4,891 subjects with complete data in all IDPs and covariates. To maximize the complementarity of information content between the univariate correlations and multiple regressions, we applied no adjustments for factors such as age and sex in the correlations, whereas we included these factors as confound covariates in the multivariate regressions. We applied Bonferroni multiple comparisons correction across covariates and brain regions, resulting (for P corrected < 0.05) in a –log 10 P uncorrected threshold of 3.6. Our results were highly concordant with ASPS. BMI was significantly associated with T2* in amygdala (averaged across left and right: beta = –0.07, –log 10 P uncorrected = 3.9) and hippocampus (beta = −0.15, −log 10 P uncorrected = 17.0; for comparison, FDR correction would result in P FDR < 10 −10 ). Individual subcortical BMI associations are shown in Figure 8b . In accordance with our hypothesis, the signs of the regression betas for T2* from UK Biobank data are universally negative. Associations with T2* were found for age in thalamus, caudate and putamen ( Fig. 8c ) and for smoking status in caudate, putamen and right pallidum (beta ranging from –0.03 to –0.1). Association of T2* with sex was only found in right amygdala, and no association was found for hypertension. The increased specificity of multiple regression is notable for many of the tests, for example, a significant univariate association of T2* with cholesterol disappeared after controlling for the other covariates.
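The standardized multiple-regression approach, and the way a univariate association can vanish under adjustment, can be sketched as follows (a toy illustration with hypothetical effect sizes: "cholesterol" tracks age, while T2* is driven by age alone):

```python
import numpy as np

def standardized_betas(y, X):
    """OLS after z-scoring the outcome and every covariate, so each
    beta can be read like a (partial) correlation coefficient."""
    z = lambda a: (a - a.mean(0)) / a.std(0)
    design = np.column_stack([np.ones(len(y)), z(X)])
    beta = np.linalg.lstsq(design, z(y), rcond=None)[0]
    return beta[1:]  # drop the intercept

rng = np.random.default_rng(4)
n = 3000
age = rng.standard_normal(n)
chol = 0.6 * age + 0.8 * rng.standard_normal(n)  # cholesterol tracks age
t2 = -0.3 * age + rng.standard_normal(n)         # T2* driven by age only

univariate_r = np.corrcoef(chol, t2)[0, 1]       # nonzero via shared age
betas = standardized_betas(t2, np.column_stack([age, chol]))
# betas[0] (age) stays strongly negative; betas[1] (chol) collapses to ~0
```

The multiple-regression beta reports only the unique variance a covariate explains, which is why the adjusted results in Figure 8c are much sparser than the univariate ones.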
Similarly, for T2* in hippocampus and amygdala, many of the associations with age, sex, BMI and other factors became much weaker after controlling for all variables, particularly the amount of head motion. Despite the fact that this motion was recorded from the functional data (not the T2* data), it is likely a general indicator of head motion, and these results illustrate why interpretation of imaging associations requires care. For example, BMI could be predictive of head motion (for example, comfort in the scanner) while also potentially relating to biophysical parameters of deeper interest. The BMI and smoking associations with T2* are found in distinct subcortical structures. Notably, this distinction is reflected in the CCA-ICA results, where these associations appear in separate population modes. The association of T2* in caudate and putamen with smoking (and more weakly with alcohol; Fig. 8c ) was highly concordant with CCA-ICA mode 5 ( Supplementary Fig. 4b ), and was associated with aging ( Fig. 8c and Supplementary Fig. 8 ). The association of T2* in hippocampus and amygdala with BMI was highly concordant with CCA-ICA mode 3 ( Supplementary Fig. 3c ), a distinct mode of population covariation that was not associated with aging (in either analysis). Neither mode includes cognitive test scores, suggesting that, although these associations clearly relate to biological processes, they may be only indirectly linked to cognitive health.

Discussion

Challenges of population imaging

UK Biobank data is openly available to researchers, including non-imaging experts. However, imaging data is considerably more complex than most of the existing UK Biobank measures. Extensive post-processing is required to align images across subjects and remove artifacts. Moreover, information is usually encoded across multiple voxels, requiring further processing to extract relevant features.
Even with carefully prepared IDPs, meaningful interpretation requires care because MRI is generally an indirect measure of the biology of interest. Apparent structural atrophy can be susceptible to misinterpretation 39 , fMRI signals can reflect vascular properties rather than neural activity 40 , and dMRI is sensitive to many aspects of tissue microstructure 14 . A final challenge is that data sizes have become extremely large, requiring 'big data' techniques; the brain imaging data in UK Biobank will ultimately surpass 0.2 PB even without data inflation during post-processing. Large cohorts face the further challenge that statistically significant associations are identified even when their explanatory power is small. In our data set, significance was reached at a correlation of just r ≈ 0.1, that is, 1% of population variance explained 41 , even with multiple comparison correction. Large genome-wide association studies (GWAS) face this challenge, where it is accepted that small effect sizes can be meaningful, particularly when multiple factors combine to create a large effect. However, in GWAS, genetic variants can be interpreted as causal factors (whether direct or indirect 42 ), whereas apparent associations across IDPs and non-imaging phenotypes could result from a shared latent (non-measured) cause. For example, education level could result in a dietary factor associating with a brain IDP, despite no direct causal connection between diet and IDP. This danger is inflated with larger subject numbers, but may be mitigated by the rich set of life-factor and biological variables that can be controlled for or used to match subgroups. Population variances explained in the pairwise associations reached maxima of around 5% ( Supplementary Fig.
2 ), but these were higher with the multivariate analyses (up to 20–50% variance explained in the most highly involved variables in population modes), partly reflecting increased sensitivity gained when appropriately combining across related variables. The importance of accounting for relevant confounds is exemplified in Figure 6e , which displays a strong apparent association between total white matter volume and fat-free body mass (one scatter point per subject) without de-confounding. In fact, this association is largely driven by the average differences in body mass and head size between sexes and disappears after adjusting for sex, age and head size. This is an example of Simpson's paradox 43 , in which suboptimal pooling across variables (here, sex) results in a misleading association. Other pitfalls include failing to consider study population selection bias 44 and inappropriate de-confounding of variables that are caused by (and not feeding into) the variables of interest 45 . Although there is no guarantee that UK Biobank is an unbiased sample of the full population, that does not imply that studies using subsets of the data have to retain any biases (although it is again still possible for bias to arise 44 ); one important aspect of study design will be the method of subselection of Biobank subjects to feed into an analysis. In the case of focused hypothesis testing, it is likely that carefully selected subgroups of subjects should be used. For example, once a group of subjects is identified with a clinical diagnosis, it is likely that optimal sensitivity and interpretability will require a control subgroup that is matched over many relevant factors (for example, sex, age and relevant life factors not appearing in the predictive model). Future studies might seek to find causalities between variables, for example, using structural equation modeling, Bayes Nets or nonlinear/non-Gaussian methods 46 . 
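The de-confounding example of Figure 6e can be reproduced in miniature: simulate two sexes whose group means differ in both brain volume and body mass, with no relationship within either group; pooling the groups creates a strong apparent correlation that vanishes after regressing out sex. All numbers below are synthetic and illustrative, not UK Biobank values:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
sex = rng.integers(0, 2, n)   # 0 = female, 1 = male (synthetic)
# Group-mean offsets mimic average sex differences; no within-group link
brain_vol = 700 + 80 * sex + rng.normal(0, 30, n)  # ml, illustrative
body_mass = 45 + 15 * sex + rng.normal(0, 5, n)    # kg fat-free, illustrative

raw_r = np.corrcoef(brain_vol, body_mass)[0, 1]

# De-confound: regress sex (plus intercept) out of both variables
X = np.column_stack([np.ones(n), sex])
resid = lambda y: y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
deconf_r = np.corrcoef(resid(brain_vol), resid(body_mass))[0, 1]

print(raw_r > 0.5, abs(deconf_r) < 0.1)  # strong pooled r vanishes after de-confounding
```

This is Simpson's paradox in its simplest form: the pooled correlation is driven entirely by the group-mean differences.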
The dangers of inferring causalities from observational data sets such as UK Biobank are well known; the inclusion of genotype and other 'instrumental' measures enables analyses such as Mendelian randomization, although important caveats must be considered 42 . The safest way to confirm causal results discovered from such observational data sets is to use such results to form hypotheses for new focused interventional studies. The mapping of disease associations and population patterns (for example, learned from UK Biobank data) onto individuals will be an important long-term goal. For example, population distributions of imaging measures and health outcomes can be learned and used to form patient-specific prior distributions to combine with measures from a new patient. Although this might not provide statistical certainty for a diagnosis or interventional recommendation, it should allow single-patient imaging to be used in a similar way to current state-of-the-art patient-tuned genetic testing. Data analysis in population imaging Our analyses demonstrate some of the possibilities offered by the UK Biobank resource. Focused association studies may select just two variables to investigate, such as one IDP correlated against one life factor, genetic marker, physical assay or health outcome. More complex analyses could model a larger number of variables simultaneously, for example, looking to predict health outcome from multiple linear regression against several predictor variables. Nonlinear methods (for example, penalized regression or data-driven feature selection) 22 could enable use of a much larger number of predictor variables. A further extension could identify nonlinear interactions between predictor variables, for example, considering an imaging measure, a life factor and an interaction term between the two as three distinct predictors.
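The patient-specific prior idea mentioned above can be written, in its simplest Gaussian form, as a conjugate update in which posterior precision is the sum of the prior and measurement precisions. The numbers are hypothetical and the single-measurement Gaussian model is an assumed simplification, not the study's method:

```python
def gaussian_posterior(prior_mean, prior_sd, obs, obs_sd):
    """Combine a population prior N(prior_mean, prior_sd^2) with one noisy
    patient measurement obs ~ N(truth, obs_sd^2); returns (mean, sd)."""
    w_prior = 1.0 / prior_sd**2
    w_obs = 1.0 / obs_sd**2
    mean = (w_prior * prior_mean + w_obs * obs) / (w_prior + w_obs)
    sd = (w_prior + w_obs) ** -0.5
    return mean, sd

# Hypothetical: a learned population prior vs one patient's noisy measure
m, s = gaussian_posterior(prior_mean=3.9, prior_sd=0.4, obs=3.2, obs_sd=0.4)
print(round(m, 2), round(s, 2))  # measurement shrunk toward the population prior
```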
An even more complex analysis might predict multiple outcome variables, looking for 'doubly multivariate' associations between two or more sets of variables; the CCA-ICA analyses presented above are an example of this. Finally, imaging measures may in some cases be more sensitive or specific than clinical symptoms 47 , thereby providing proxies for healthcare outcomes and/or enabling clustering of patients that is more predictive of prognosis or therapeutic response 48 . Pairwise correlation analyses result in simple outcomes that require an understanding of the caveats in imaging-derived measures. Data-driven multivariate analyses identifying associations between sets of variables have complementary benefits, including improved sensitivity to biological processes and a streamlined set of results compared with millions of univariate associations. Furthermore, multivariate analyses can separate distinct biological processes with opposing relationships between variables. For example, our CCA-ICA analysis revealed one aging-related process that involved changes in heart rate and fMRI measures (mode 4), whereas another aging-related process related blood pressure and white matter microstructure (mode 8). A simple correlational analysis would show associations between all of these factors, including even those that appeared in separate modes (for example, fMRI and white matter changes). In addition, as with multiple regression, simultaneous identification of multiple modes of association reduces the unexplained residual variance (effectively data 'de-noising'). Multivariate analyses of multi-modal data such as UK Biobank enable discovery of (potentially complex) clinical phenotypes. This is a powerful alternative to diagnostic categories that rely on clinical symptoms that do not map cleanly onto underlying disease mechanisms. 
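The 'doubly multivariate' step underlying the CCA-ICA analyses can be sketched with numpy: QR-decompose each demeaned variable set and take the singular values of Qx^T Qy as the canonical correlations. The synthetic data below, sharing a single latent factor, is purely illustrative:

```python
import numpy as np

def canonical_correlations(X, Y):
    """Canonical correlations between two column-demeaned data sets."""
    Qx, _ = np.linalg.qr(X - X.mean(0))
    Qy, _ = np.linalg.qr(Y - Y.mean(0))
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False)  # descending, in [0, 1]

rng = np.random.default_rng(1)
n = 500
latent = rng.normal(size=n)  # one shared 'mode' across both variable sets
X = np.outer(latent, np.arange(1.0, 6.0)) + rng.normal(size=(n, 5))
Y = np.outer(latent, np.arange(1.0, 5.0)) + rng.normal(size=(n, 4))

r = canonical_correlations(X, Y)
print(r[0] > 0.9, r[1] < 0.5)  # one strong shared mode, the rest near chance
```

In the full analysis, ICA is then applied to the CCA outputs to unmix them into interpretable population modes.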
For many complex diseases, the discovery of distinct mechanisms and/or subdiseases that are currently conflated may be unlikely to occur solely through symptom-based investigations. Discovering relevant population axes and subgroups on the basis of imaging, genetics and other objective markers may therefore be expected to increase our understanding of the etiology and pathogenesis of a wide variety of diseases. For example, this concept is at the heart of the recently proposed Research Domain Criteria (RDoC) in psychiatry 48 . Population imaging landscape In the early 2000s, several ambitious studies built cohorts consisting of thousands of subjects. Several recent brain imaging studies are aiming to image tens of thousands of subjects, including the Maastricht Study ( n = 10,000) 49 , the German National Cohort ( n = 30,000) 50 and the Rhineland Study ( n = 30,000). In addition to having even larger numbers, UK Biobank will benefit from the breadth of organ systems imaged, the highly multi-modal brain protocol and the existing rich phenotyping. A longitudinal component is planned for a subset of the UK Biobank imaging participants ( n = 10,000), as in the Rhineland and German National Cohort studies. Most of these studies use identical MRI scanners at a small number of dedicated sites, with the goal of maximizing data homogeneity within the study. A future challenge to further leveraging these large data sets is to develop analysis tools that can harmonize data across these studies for combined analyses, where there could be considerable benefit in focusing on harmonization of a few very large cohorts. Even with just 5% of the eventual cohort size, our results demonstrate the statistical benefits that are conferred by large numbers. 
However, the primary rationale for the size of the study is not to boost statistical power across 100,000 subjects, but rather to provide prospective imaging data that are suitable for discovering early markers and risk factors for as broad a set of diseases as possible. For some rare diseases with few established risk factors, this approach is uniquely suited to discovery of pre-symptomatic markers; for example, 50–100 imaging participants are expected to develop sporadic amyotrophic lateral sclerosis (ALS) by 2027. This rich imaging addition to the ongoing UK Biobank study will provide scientists with insights into the causes of brain disease, provide markers with predictive power for therapeutic interventions and advance noninvasive imaging-based screening for preventative healthcare. Methods Protocol overview. Imaging protocols were designed by the UK Biobank Imaging Working Group, in consultation with a large number of brain imaging experts (listed in the acknowledgments). MRI provides many imaging modalities offering complementary information. As part of this consultancy, a number of modalities were determined to be infeasible or lower priority for a range of reasons. Considerations included time constraints, generalizability, feasibility of automated analysis, and existence of robust, well-tested acquisition methods. The advisory network therefore decided not to include quantitative relaxometry, MR spectroscopy or angiography. Arterial spin labeling is currently being piloted, as described below. To maximize data compatibility, three dedicated imaging centers will have identical scanners with fixed platforms (that is, no major software or hardware updates throughout the study). Each center is equipped with a 3T Siemens Skyra (software platform VD13), 1.5T Siemens Aera (VD13), carotid ultrasound and dual energy X-ray absorptiometry (DEXA). Brain imaging is being conducted on the 3T system using a 32-channel receive head coil.
Key acquisition parameters for each modality are summarized in Supplementary Table 1 , grouped according to primary modality categories (structural MRI, dMRI and fMRI). Order of acquisition was optimized in consideration of subject compliance, assuming subject motion might increase over the scan (favoring early acquisition of the T1 due to its central importance; for example, the processing pipeline cannot run without the T1) and subject wakefulness might decrease (favoring early acquisition of fMRI). The order is: (1) T1, (2) resting fMRI, (3) task fMRI, (4) T2 FLAIR, (5) dMRI, (6) swMRI. Further protocol details, together with further description of the post-processing pipelines and the data outputs included in the first data release, are available online. All software used in these pipelines is freely available 51 , 52 and full pipeline processing scripts will shortly be publicly available. The processing pipeline used for the initial data release was primarily based on tools from FSL (the FMRIB Software Library 51 ), but it will be gradually expanded to utilize a broader range of methods and software, where this will increase the quality, robustness and scope of IDPs generated. For example, one high priority is to adapt the Human Connectome Project pipelines 53 to provide cortical surface modeling. The intention is that non-imaging experts will be able to use the IDPs directly without having to become expert in the complexities of data processing, although we encourage engagement with imaging experts in light of the numerous and subtle caveats and confounds associated with interpreting these data. Data access requests from all academic or commercial researchers (with no exclusive or preferential access) are processed by the UK Biobank's Research Access Administration Team and approved relatively rapidly provided that they fulfill UK Biobank's aims of supporting health research in the public interest.
Researchers' institutions then sign a Material Transfer Agreement agreeing not to attempt to identify any participant, and to return any derived data (for example, new IDPs) to UK Biobank, to be made available to other approved researchers after an agreed 'embargo' period (to allow findings to be published or IP protected by the researchers). Thus, while the first set of IDPs described here from internally commissioned research is being made available immediately, the range of IDPs is expected to grow rapidly as additional contributions from the wider user community are added. Protocol considerations. Design of the brain imaging protocol was conducted through broad consultation with neuroimaging experts and required careful balance of a range of considerations, often specifically relating to the high throughput nature of UK Biobank. In setting up the pilot protocol, the primary challenge was to achieve the target of one participant scanned every 36 min without serious compromise to data quality compared to research protocols that might conventionally require up to an hour of scan time. Despite these tight time constraints, we aimed to include as many MRI modalities as possible, to take advantage of the full richness of information that can be provided by MRI. Here, we highlight the primary considerations that required a different approach from more conventional imaging studies. With each additional minute of scanning per subject effectively costing an additional ∼ £1million, there is enormous value associated with seemingly small efficiency savings. We recovered several minutes of scan time by systematically minimizing the overheads associated with subject placement, scan prescription, and calibration measurements. 
For example, corrections to the static magnetic field homogeneity (shimming) and strict enforcement of a single shim calibration harvested 2 min (changing system defaults to improve and accelerate shimming), which is equivalent to the scan time associated with some of the included modalities. Tight imaging FOVs (fields of view, the physical size of the imaged volume) are in general favorable to reduce scan time; however, these restrictions exclude subjects with larger heads or brains. For UK Biobank, even a 'conservative' FOV that includes 99% of the population will exclude 1,000 participants. As detailed statistics on brain size (as distinct from head size) were not available in the literature, we conducted a study of population brain size 54 that (in conjunction with optimal slice angling) enables our FOVs to target 99.9%. It is critically important that all analyses are automated. This translates to an additional role for certain imaging modalities beyond their intrinsic information content. Thus, although we considered methods for reducing scan time for T1-weighted structural scans while retaining coverage and resolution (for example, elliptical sampling with consequent image blurring), this was deemed an unacceptable risk given the central role of the T1 to cross-subject and cross-modal alignment for most processing pipelines, including that implemented here for the initial data release. The EPI (echo-planar imaging) acquisitions for fMRI and dMRI result in significant image distortion that creates local misalignment in certain brain regions. Correction of this requires measurement of the magnetic field inhomogeneities that cause distortion. Two types of measurements are possible: a non-EPI gradient-echo acquisition with two echoes (conventional field map) or two EPI-based spin echo acquisitions with opposite phase encode direction 55 . 
We chose the latter, which can be incorporated into the dMRI protocol as additional b = 0 scans to reduce acquisition time (total acquisition time ∼ 30 s). To provide data with as rich and broad a range of applications as possible, we include imaging modalities that are not yet widely used in clinical practice, such as fMRI and dMRI. These modalities have demonstrated mechanistic and biological insights, and will hopefully see greater clinical take-up in the future, in part because of projects such as UK Biobank. We took advantage of recent advances in acquisition, largely developed as part of the Human Connectome Project, to obtain research quality data in limited time. Specifically, we use simultaneous multi-slice (or multiband, MB) acquisitions 56 , 57 , 58 , 59 that enable rapid fMRI and dMRI without sacrificing statistical robustness or directions/b-values (ref. 60 ), respectively. Without these accelerations, a seven-minute dMRI scan of the same spatial resolution would have been limited to ∼ 32 directions and a single shell, precluding NODDI 16 and other more advanced biological modeling. After early piloting, a clinical T2/PD-weighted acquisition was removed from the protocol. This decision reflected the limited relevance to UK Biobank goals (given the inclusion of the higher-quality and more biologically informative T2 FLAIR) and the value in recovering this scan time (just over 1 min). One shortcoming of the current protocol is the lack of a direct measure of neurovascular health. We are piloting a protocol change to include a 2-min perfusion scan (using arterial spin labeling). This would require reducing task fMRI to 2 min; while this is an extremely short task, early analyses (using truncated copies of existing initial tfMRI data sets) predict that it will be sufficiently robust. A major ethical question in studies of this nature relates to identification and handling of incidental findings of previously unknown pathology.
The procedure to be followed in UK Biobank has been considered in great depth with major external ethical, legal and clinical radiology bodies, and with the funders and their external review group. An assessment of different approaches to the identification of incidental findings and the impact of their feedback on participants and the health service has been conducted as part of the pilot phase of UK Biobank's imaging project, and will be published separately. Based on its results and the deliberative process undertaken with external experts, the UK Biobank protocol for dealing with incidental findings does not involve the routine review of all scans for potential pathology by radiologists. Instead, if a radiographer incidentally identifies evidence of potentially serious pathology (that is, likely to threaten life span, quality of life or major body functions) during the imaging process then a formal radiologist review is undertaken and, if it is confirmed as potentially serious, feedback is given to the participant and their doctor. Informed consent is obtained from all UK Biobank participants; ethical procedures are controlled by a dedicated Ethics and Guidance Council, which has developed with UK Biobank an Ethics and Governance Framework, with IRB approval also obtained from the North West Multi-center Research Ethics Committee. Subjects are excluded from scanning according to fairly standard MRI safety/quality criteria, such as exclusions for metal implants, recent surgery, or health conditions directly problematic for MRI scanning, such as problems hearing, breathing or extreme claustrophobia. Once the second and third imaging centers are complete and running, UK Biobank will use phantom objects and traveling volunteers to confirm quality and consistency across sites. Structural imaging.
The T1 structural protocol is acquired at 1mm isotropic resolution using a three-dimensional (3D) MPRAGE acquisition, with inversion and repetition times optimized for maximal contrast. The superior-inferior field-of-view is large (256 mm), at little cost, in order to include reasonable amounts of neck/mouth, as those areas will be of interest to some researchers (for example, in the study of sleep apnea). Pre-processing of this modality included removal of the face (which was deemed important to subject anonymization for the standard data dissemination), brain extraction (removal of non-brain tissues from the image), linear alignment to the standard MNI152 brain template 61 and nonlinear warping to this template 62 to maximize correspondence across individuals in light of significant cross-subject variation in brain structure. These alignments are used throughout the majority of the processing pipeline for other modalities. T1 images are further analyzed to estimate volumes of a range of tissues and structures in each subject, which may reflect atrophy due to age and disease, as well as normal variation due to (for example) use-dependent plasticity. Images are segmented into tissue types (gray matter, white matter and cerebrospinal fluid) 63 . Cortical gray matter volume is estimated, comparing the segmented gray matter to an atlas reference (where the external skull surface is used to normalize for head size) 64 . Sub-cortical volumes are estimated 65 , using population priors on shape and intensity variation across subjects. T1-based IDPs are generated for the volumes of major tissue types of the whole brain and for specific structures (primarily sub-cortically). Too much reliance on spatial registration could limit the usefulness or accuracy of some IDPs. 
This is in part why many of the IDPs are in fact generated from within-subject analyses that do not depend on exact voxelwise spatial alignment to standard space (or between subjects): for example, 283 of the 715 structural and diffusion IDPs do not rely on exact spatial alignment and are carried out in the original space of each subject's data. The T2 protocol uses a fluid-attenuated inversion recovery (FLAIR) contrast with the 3D SPACE optimized readout 66 . This shows strong contrast for white matter hyperintensities. An automated pipeline for delineating these hyperintensities is currently being developed and future data releases will include IDPs reflecting the lesion 'load'. The swMRI scan uses a 3D gradient echo acquisition at 0.8 × 0.8 × 3 mm resolution, acquiring two echo times (TE = 9.4 and 20 ms). Anisotropic voxels can enhance certain contrast mechanisms, particularly for vascular conspicuity due to through-plane dephasing effects, but are less ideal for other susceptibility-based processing. Ultimately, however, this choice was motivated by the desire for whole brain coverage in the face of very limited scan time (2.5 min). Signal decay times (T2*) are estimated from the magnitude images at the two TEs, and the generated IDPs are the median T2* estimated within the various subcortical regions delineated from the T1 processing. Venograms are generated through nonlinear filtering of the magnitude and phase images 12 , which produces enhanced conspicuity of medium and large veins. Automated segmentation of microbleeds and venograms would provide significant value, but to our knowledge robust tools for this are not yet available; future pipeline versions can hope to include such analyses. Future work will also consider whether this data will support quantitative susceptibility mapping, which would provide further information on tissue constituents as discussed in the main text. Diffusion imaging.
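The swMRI T2* estimate described above follows from mono-exponential decay, S(TE) = S0·exp(−TE/T2*): with magnitude images at the two echo times, T2* = (TE2 − TE1)/ln(S1/S2). A voxel-wise sketch using the protocol's TEs (the decay model is the standard assumption; exact implementation details are not specified in the text):

```python
import numpy as np

TE1, TE2 = 9.4, 20.0  # ms, as in the swMRI protocol

def t2star_from_two_echoes(s1, s2):
    """Voxel-wise T2* (ms) from two magnitude images, assuming
    mono-exponential decay S(TE) = S0 * exp(-TE / T2*)."""
    return (TE2 - TE1) / np.log(np.asarray(s1, float) / np.asarray(s2, float))

# Synthetic voxel with true T2* = 50 ms
s0, t2s = 100.0, 50.0
s1 = s0 * np.exp(-TE1 / t2s)
s2 = s0 * np.exp(-TE2 / t2s)
print(round(float(t2star_from_two_echoes(s1, s2)), 1))  # -> 50.0
```

The IDP is then the median of such voxel-wise estimates within each T1-derived subcortical region.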
Diffusion data is acquired with two b-values (b = 1,000 and 2,000 s/mm²) at 2-mm spatial resolution, with multiband acceleration factor of 3 (three slices are acquired simultaneously instead of just one). For each diffusion-weighted shell, 50 distinct diffusion-encoding directions were acquired (covering 100 distinct directions over the two b-values). The diffusion preparation is a standard (monopolar) Stejskal-Tanner pulse sequence. This enables higher SNR due to a shorter echo time (TE = 92 ms) than a twice-refocused (bipolar) sequence at the expense of stronger eddy current distortions, which are removed using the Eddy tool 67 (which also corrects for static field distortion and motion 68 ). Both diffusion tensor and NODDI models are fit voxel-wise, and IDPs of the various model outputs are extracted from a set of white matter tracts. Tensor fits utilize the b = 1,000 s/mm² data, producing maps including fractional anisotropy, tensor mode and mean diffusivity. The NODDI 16 model is fit using the AMICO (Accelerated Microstructure Imaging via Convex Optimization) tool 52 , with outputs including intra-cellular volume fraction (which is often interpreted to reflect neurite density) and orientation dispersion (a measure of within-voxel disorganization). For tractography, a parametric approach is first used to estimate fiber orientations. The generalized ball & stick model is fit to the multi-shell data, estimating up to three crossing fiber orientations per voxel 17 , 69 . Tractography is then performed in a probabilistic manner to estimate white matter pathways using the voxel-wise orientations. Cross-subject alignment of white matter pathways is critical for extracting meaningful IDPs; here, two complementary approaches are used.
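The voxel-wise tensor fit mentioned above solves ln S = ln S0 − b gᵀDg, a linear system in the six unique tensor elements, with FA and MD then derived from the tensor eigenvalues. A minimal least-squares sketch on synthetic signals from a known tensor (directions, b-value and the simple log-linear fit are illustrative; production tools use more robust fitting):

```python
import numpy as np

def fit_tensor(bvals, bvecs, signals, s0):
    """Log-linear least-squares diffusion tensor fit; returns (MD, FA)."""
    g = bvecs
    # Design row: -b * [gx^2, gy^2, gz^2, 2gxgy, 2gxgz, 2gygz]
    A = -bvals[:, None] * np.column_stack([
        g[:, 0]**2, g[:, 1]**2, g[:, 2]**2,
        2 * g[:, 0] * g[:, 1], 2 * g[:, 0] * g[:, 2], 2 * g[:, 1] * g[:, 2]])
    d, *_ = np.linalg.lstsq(A, np.log(signals / s0), rcond=None)
    D = np.array([[d[0], d[3], d[4]], [d[3], d[1], d[5]], [d[4], d[5], d[2]]])
    ev = np.linalg.eigvalsh(D)
    md = ev.mean()
    fa = np.sqrt(1.5 * np.sum((ev - md)**2) / np.sum(ev**2))
    return md, fa

# Synthetic single-fiber voxel: prolate tensor along x (units mm^2/s)
D_true = np.diag([1.7e-3, 0.3e-3, 0.3e-3])
rng = np.random.default_rng(2)
bvecs = rng.normal(size=(50, 3))
bvecs /= np.linalg.norm(bvecs, axis=1, keepdims=True)
bvals = np.full(50, 1000.0)
signals = np.exp(-bvals * np.einsum('ij,jk,ik->i', bvecs, D_true, bvecs))
md, fa = fit_tensor(bvals, bvecs, signals, s0=1.0)
print(round(md * 1e3, 3), round(fa, 2))  # -> 0.767 0.8
```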
The first used tract-based spatial statistics (TBSS) 18 , 70 , in which a standard-space white matter skeleton is mapped to each subject using a high-dimensional warp, after which ROIs are defined as the intersection of the skeleton with standard-space masks for 48 tracts 71 (see the JHU ICBM-DTI-81 white-matter labels atlas for definitions of the tract regions and names). The second approach utilizes subject-specific probabilistic diffusion tractography run using standard-space protocols to identify 27 tracts 18 ; in this case, the output IDPs are weighted by the tractography output to emphasize values in regions that can most confidently be attributed to the tract of interest. Currently, no structural connectivity estimates from the diffusion tractography are provided as IDPs, but the probabilistic maps are available and future work will generate measures similar to those provided for resting-state fMRI. Functional MRI. Task and resting-state fMRI use the same acquisition parameters, with 2.4-mm spatial resolution and TR = 0.735 s, with multiband acceleration factor of 8. A 'single-band' reference image (without the multiband excitation, exciting each slice independently) is acquired that has higher tissue-type image contrast; this is used as the target for motion correction and alignment. For both data sets, the raw data are corrected for motion 72 and distortion 55 and high-pass filtered to remove temporal drift. The task scan used the Hariri faces/shapes 'emotion' task 21 , 73 , as implemented in the HCP 22 , but with shorter overall duration and hence fewer total stimulus block repeats. The participants are presented with blocks of face or shape trials and asked to decide which of two faces (or shapes) presented on the bottom of the screen matches the face (or shape) at the top of the screen. The faces have either angry or fearful expressions. The ePrime stimulus script is available for download.
Task-induced activation is modeled with FEAT, including auto-correlation correction 74 , using five activation contrasts. Of these, the three activation contrasts of most interest (shapes, faces and faces>shapes) are used to generate output measures, including two IDPs for the faces-shapes task (one including all voxels above a group-level fixed-effects Z > 120, and one including only the amygdala regions above threshold). IDPs corresponding to both percent signal change and statistical significance (Z statistics) are generated. During resting-state scans, subjects are instructed to keep their eyes fixated on a crosshair, relax and 'think of nothing in particular'. Resting-state networks are identified using ICA (independent component analysis 33 , 75 ), which identifies components within the data that are spatially independent (where a component comprises a spatial map and a single associated time course). Following the pre-processing described above, resting-state fMRI data for each subject is further 'cleaned' using an ICA-based algorithm for automatically identifying and removing structured artifacts 76 . This data is fed into group-level ICA (including an initial group-level dimensionality reduction 77 ), which is used to parcellate the data set into sets of 25 and (separately) 100 spatially independent components. Where a small (<30) number of components is estimated 78 , it is common to consider each component as a separate 'network' in its own right; each component will often include several non-contiguous regions, all having the same time course (according to the model). If a higher number of components is estimated 79 , these are more likely to be smaller regions (parcels), which can then be considered as nodes for use in network analysis 80 , where the spatial maps are used to define subject-specific time courses (the first stage of dual regression 1 ). 
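Stage 1 of dual regression, and the node-by-node network matrices computed from the resulting time courses (described next), can be sketched with numpy. The synthetic three-node example and the regularization strength are illustrative choices, not the study's parameters:

```python
import numpy as np

def dual_regression_stage1(data, group_maps):
    """Stage 1: regress group spatial maps (nodes x voxels) onto a
    subject's data (time x voxels) to get time x nodes time courses."""
    return data @ np.linalg.pinv(group_maps)

def partial_netmat(timecourses, rho=0.1):
    """Partial correlations between nodes from an L2-regularized
    inverse covariance (rho is an illustrative regularization strength)."""
    cov = np.cov(timecourses, rowvar=False)
    prec = np.linalg.inv(cov + rho * np.eye(cov.shape[0]))
    d = np.sqrt(np.diag(prec))
    pcorr = -prec / np.outer(d, d)
    np.fill_diagonal(pcorr, 1.0)
    return pcorr

# Synthetic: node 0 drives nodes 1 and 2, so 1 and 2 correlate strongly
# in full correlation but only weakly once node 0 is accounted for
rng = np.random.default_rng(3)
t0 = rng.normal(size=1000)
tc = np.column_stack([t0,
                      t0 + 0.5 * rng.normal(size=1000),
                      t0 + 0.5 * rng.normal(size=1000)])
maps = rng.normal(size=(3, 200))               # 3 'nodes', 200 'voxels'
data = tc @ maps + 0.1 * rng.normal(size=(1000, 200))
tc_hat = dual_regression_stage1(data, maps)

full = np.corrcoef(tc_hat, rowvar=False)
partial = partial_netmat(tc_hat)
print(full[1, 2] > 0.6, abs(partial[1, 2]) < abs(full[1, 2]))
```

This illustrates why partial correlation is preferred for netmats: it suppresses indirect connections mediated by a third node.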
These time courses are used to estimate the size of signal fluctuation in each node, as well as to estimate connectivity between pairs of nodes using L2-regularized partial correlation 81 . The connectivity estimates are provided as IDPs at both parcellation dimensionalities (25 and 100 nodes); after removal of group-ICA components considered to be artifactual (that is, relating either to scanning artifacts, or to non-neuronal biophysical processes such as cardiac fluctuations and head motion), this results in 21 and (respectively) 55 nodes left for forming the IDPs such as network matrices (functional connectivities between pairs of nodes). Quality control. To date, raw data and pipeline outputs have been manually checked for gross problems of quality and robustness, with problematic data tagged and removed from pipeline outputs; see main text for results on proportions of usable data in the different modalities. However, several quality-related IDPs are automatically generated by the pipeline (for example, number of outlier slices in the dMRI data, and measures of signal-to-noise ratio in the various modalities), and these can be used to help automatically identify problematic data. An expanded set of such quality measures is being produced, in addition to an automated machine learning system for flagging problematic data on the basis of the many IDPs and quality measures; future versions of the pipeline and data releases will benefit from the results of these ongoing developments. Statistics. The two sections below describe the statistical analysis carried out using IDPs and non-brain-imaging measures.
As described below, univariate statistics were primarily carried out using Pearson correlation (see details below regarding Gaussian-distribution normalization and linear removal of confound effects) and multivariate statistics were carried out using a combination of canonical correlation analysis and independent component analysis (with permutation testing used to identify the significant number of components estimable). As discussed in the main text, the primary rationale for the size of the study is not to boost statistical power across 100,000 subjects, but rather to provide prospective imaging data suitable for discovering early markers and risk factors for as broad a set of diseases as possible, both rare and highly prevalent. Hence while calculations have been made to estimate the expected numbers of subjects developing different diseases over coming years (see introductory section of main text), no statistical methods were used to pre-determine sample sizes for any one specific disease, given that individual disease sample sizes are not prospectively controlled, and given the very broad expected set of future tests between different imaging measures and different diseases that will be ultimately applied from this prospective long-term resource. Details on significance testing and multiple comparison corrections are included in the two sections below. A Supplementary Methods Checklist is available. Simple associations between brain IDPs and other measures. We report simple correlation analyses between each of the 2,501 brain IDPs and each of 1,100 other variables extracted from the UK Biobank database (these other variables are mostly not derived from imaging, though some do come from the non-brain imaging modalities); for the list of general classes of these variables, see Figure 6a , and for many examples of individual variables, see the lists associated with the CCA-ICA modes presented in Figure 7 and Supplementary Figures . 
The initial set of variables extracted from the UK Biobank database was automatically reduced to those (1,100 variables) containing sufficient numbers of valid (non-missing) data entries, using very similar selection rules to those applied in the recent CCA-based analysis of Human Connectome Project data 34 . Some variables are defined (in the UK Biobank database) such that the numerical encoding is the inverse of what one might naturally assume - for example in the variable 'Qualifications', higher numbers refer to lower levels of educational qualifications. In such cases we have inverted the sign of the ICA weightings printed in the figures, for ease of interpretation. Further, some variables are categorical, with no clear quantitative meaning to the values (for example, 'Transport type to work'); where we find an apparent association, this can be considered to be indicative of a real association (one might think of the analysis therefore as an over-conservative poor implementation of an ANOVA), but interpretation of the sign of the association clearly needs care. The analysis used data from the first 5,430 subjects scanned and having usable imaging data: age range 44–78 years (IQR 56–68 years); 53% of subjects were female. Eight confound variables are generated: age, age 2 , sex, age × sex, age 2 × sex, average head motion during tfMRI, average head motion during rfMRI and head size. To enforce Gaussianity, all confound variables, IDPs and non-IDP variables are first passed through a rank-based inverse Gaussian transformation; this improves the robustness of correlations (for example, to avoid undue influence of potential outlier values). The confounds are then regressed out of all IDPs and non-IDP variables to reduce the risk of finding nonmeaningful associations. 
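The Gaussianisation and de-confounding steps just described can be sketched as below. The helper names are illustrative (the study's own analysis code is in Matlab), but the operations follow the text: rank-based inverse Gaussian transformation, then linear regression of the confounds out of every variable:

```python
import numpy as np
from scipy.stats import norm, rankdata

def rank_inverse_gaussian(x):
    """Rank-based inverse Gaussian transform: ranks -> Gaussian quantiles."""
    ranks = rankdata(x)                        # 1..n, ties share average rank
    return norm.ppf(ranks / (len(x) + 1.0))    # quantiles strictly inside (0, 1)

def deconfound(y, confounds):
    """Regress confound columns (plus an intercept) out of y."""
    X = np.column_stack([np.ones(len(y)), confounds])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta                        # residuals: y with confounds removed
```

In the analysis described above, both steps are applied to every IDP and every non-IDP variable (and the confounds themselves are Gaussianised) before any correlations are computed.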
For example, head motion corrupts imaging data in complex ways 26 , and also correlates with some diseases and with aging ( r = 0.15 in this data); hence, if not adjusted for, uninteresting associations would likely arise. However, some measures may have both biologically interesting associations with IDPs, and also act as imaging confounds. For example, abnormal heart rate or blood pressure could alter the fMRI signal through disrupted cerebral auto-regulation (independent of any changes to neural activity) 40 , but cardiovascular pathology could also be related to neurological pathology. Similarly, overall brain size and gray matter thickness IDPs are sensitive simple markers of aging and disease; however, these properties can also affect other IDPs by changing the mixture of tissue types in an imaging voxel, creating an apparent age/disease dependence that is driven by the volume of tissue rather than the properties of a given tissue type (such as fMRI activation or white matter microstructural properties). It is therefore important to interpret apparent associations carefully. The full set of 2.8 million (2,501 × 1,100) Pearson correlations is then estimated and corrected for multiple comparisons. Bonferroni correction, which is likely to be somewhat conservative in such situations, due to non-independence across variables tested, resulted in P corrected < 0.05 being equivalent to requiring P uncorrected < 1.8 × 10 −8 . An alternative popular approach for multiple comparison correction is false discovery rate (FDR) 82 ; we use the more conservative FDR option (making no assumption of variable dependencies 83 ), resulting here in requiring P uncorrected < 3.8 × 10 −5 . These two threshold levels are shown with dotted lines in all Manhattan plots in the main figures. Multivariate associations between brain IDPs and other measures. 
In the example multivariate analyses shown in Figures 7 and 8 , canonical correlation analysis (CCA 32 ) combined with independent component analysis (ICA 33 ) is used to identify several 'modes' of population covariation which link multiple brain IDPs to sets of other Biobank variables. This is very similar to the methodology used recently to identify a single mode of population covariation between imaging measures and many behavioral and lifestyle measures in data from 461 subjects in the Human Connectome Project 8 , 34 . IDP and non-IDP variables are prepared as for the univariate correlation analyses described above, resulting in a brain-IDP matrix of size 5,034 × 2,501 (subjects × IDPs) and a non-IDP matrix of size 5,034 × 1,100 (subjects × non-IDP variables). The intention is to feed these into CCA in order to identify population modes linking multiple variables from both matrices. However, in order to avoid an over-determined (rank deficient) CCA solution, we first compress both matrices along the respective phenotype dimension to 200 columns (that is, much smaller than the numbers of subjects). This was done by separately reducing each matrix to the top 200 subject-eigenvectors using PCA. To achieve this while avoiding the problem of missing data, we use the approach detailed recently 34 of estimating first a pseudo-covariance matrix ignoring missing data, projecting this onto the nearest valid (positive definite) covariance matrix, and then carry out an eigenvalue decomposition. The two resulting (IDP and non-IDP) matrices of size 5,034 × 200 are then fed into standard CCA ('canoncorr' in Matlab), resulting in 200 CCA modes being estimated. The CCA aims to identify symmetric linear relations between the two sets of variables. Each significant CCA mode identifies a linear combination of IDPs and a linear combination of non-IDPs, where the variation in mode strength across subjects is maximally correlated. 
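The PCA-reduction-then-CCA flow can be sketched as follows. This simplified version assumes complete data (unlike the missing-data-aware pseudo-covariance eigen-decomposition the text describes), and the function names are illustrative:

```python
import numpy as np

def pca_reduce(X, k):
    """Reduce a (subjects x variables) matrix to its top-k subject-eigenvectors."""
    X = X - X.mean(axis=0)
    U, s, _ = np.linalg.svd(X, full_matrices=False)
    return U[:, :k] * s[:k]                    # (subjects, k) PCA scores

def cca(A, B):
    """Classical CCA via QR + SVD (cf. Matlab's 'canoncorr')."""
    A = A - A.mean(axis=0)
    B = B - B.mean(axis=0)
    Qa, Ra = np.linalg.qr(A)
    Qb, Rb = np.linalg.qr(B)
    U, r, Vt = np.linalg.svd(Qa.T @ Qb, full_matrices=False)
    Wa = np.linalg.solve(Ra, U)                # weights: A @ Wa = canonical variates
    Wb = np.linalg.solve(Rb, Vt.T)
    return r, Wa, Wb                           # r: canonical correlations, descending

# Toy version of the overall flow: reduce both phenotype matrices, then run CCA.
rng = np.random.default_rng(0)
latent = rng.standard_normal((300, 1))                          # shared population mode
idps = pca_reduce(latent + 0.5 * rng.standard_normal((300, 40)), 10)
nonidps = pca_reduce(latent + 0.5 * rng.standard_normal((300, 60)), 10)
r, Wa, Wb = cca(idps, nonidps)
```

Because both matrices share one latent mode, the first canonical correlation comes out high while later ones reflect noise, which is exactly the pattern the significance testing then has to separate.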
That is, CCA finds modes that relate sets of brain measures to sets of subjects' non-brain-imaging measures; for a graphical illustration of this approach see Supplementary Information in ref. 34 . Permutation testing is then applied to estimate (family-wise-error, multiple-comparison-corrected) P values for the CCA modes estimated. Nine modes are found to be significant (P corrected <0.002, with all later modes having P corrected > 0.05). Because CCA can in general only unambiguously estimate distinct modes up to an orthogonal rotation amongst them (by direct analogy to PCA), we identify an unambiguous unmixing of the modes using ICA to optimize the final set of modes reported. Because we expect meaningful population modes to be much more structured (for example, sparser) in the cross-variable dimension than in the cross-subject dimension, we calculate ICA components that are statistically independent from each other in the cross-variable dimension. In order to take full advantage of the numbers of variables originally prepared, we first multiply the nine CCA subject-weight vectors into the original IDP and non-IDP data matrices (after concatenating these across variables), resulting in nine CCA variable-weight vectors of length 2,501 + 1,100 = 3601. These nine vectors are then fed into FastICA 33 to estimate nine population data sources having maximal statistical independence. This general approach (CCA, followed by concatenation of CCA weight vectors, followed by ICA) is similar to that proposed by Sui 84 , except that we return to the full feature space (as described above) for the ICA stage, rather than staying in the PCA-reduced space. The ICA result is extremely robust, with split-half (cross-subjects) reproducibility across the 9 ICA components of r > 0.89. Interestingly, 5 of these ICA modes (including modes 7, 8 and 9; Fig. 7 ) are virtually unchanged if the de-confounding step was omitted (correlation of variable-weights vectors: r > 0.8). 
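The permutation testing mentioned above can be sketched like this. Shuffling the subject rows of one matrix breaks the cross-matrix pairing while preserving each matrix's internal covariance structure; this simplified version tests only the strongest mode, whereas the paper assigns family-wise-corrected P values across all modes:

```python
import numpy as np

def top_canonical_r(A, B):
    """Strongest canonical correlation between the column spaces of A and B."""
    Qa, _ = np.linalg.qr(A - A.mean(axis=0))
    Qb, _ = np.linalg.qr(B - B.mean(axis=0))
    return np.linalg.svd(Qa.T @ Qb, compute_uv=False)[0]

def cca_permutation_p(A, B, n_perm=500, seed=0):
    """P value for the top CCA mode via subject-row permutation of B."""
    rng = np.random.default_rng(seed)
    r_obs = top_canonical_r(A, B)
    exceed = sum(
        top_canonical_r(A, B[rng.permutation(len(B))]) >= r_obs
        for _ in range(n_perm)
    )
    return (1 + exceed) / (1 + n_perm)         # add-one avoids reporting p = 0
```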
Data, code and results availability. As described above, all source data (including raw and processed brain imaging data, derived IDPs, and non-imaging measures) is available from UK Biobank via their standard data access procedure (see ). The image processing pipeline will be made publicly available in early 2017 from - this is the pipeline used to process the raw imaging data and generate IDPs, and hence is not needed in order to replicate the results of this paper, which could be achieved by accessing IDPs as described above, and then using the IDP analysis code described below. The Matlab code for the univariate and multivariate tests described in this paper, and the results of those tests (all univariate correlations and multivariate weight vectors) are available from ; this online resource will be updated as more subjects' data and more IDPs become available. Higher resolution supplementary figures are available in the PDF version of the supplementary information online .
Exciting early results from analysing the brain imaging data, alongside thousands of measures of lifestyle, physical fitness, cognitive health and physical measures such as body-mass-index (BMI) and bone density, have been published in Nature Neuroscience. The high quality of the imaging data and very large number of subjects allowed researchers to identify more than 30,000 significant associations between the many different brain imaging measures and the non-imaging measures. The findings have now been made available for use by researchers worldwide. Results included:

Strong associations between people's cognitive processing speed and markers of the integrity of the brain's "wiring" and the size of brain structures. These effects increased in strength as people aged.

A negative correlation between brain activity during a simple shape-matching task and intelligence, an effect that didn't relate to participants' age. This might be because the people who scored more highly on the cognitive tests needed to use less of their brain to carry out the task.

A pattern of strong associations between higher blood pressure, greater alcohol consumption, and several measures that could reflect injury to connections in the brain.

A separate pattern of correlations, linking intake of alcohol and tobacco and changes in red blood cells and cardiac fitness, to brain imaging signals associated with increased iron deposits in the brain.

Researchers also unearthed some more complicated patterns of correlation. For example, one pattern links brain imaging to intelligence, level of education, and a set of lifestyle factors that at first appear unrelated – including amount of time spent outdoors.
It is plausible that, taken together, these factors create a profile of socio-economic status and its relation to the brain. However, because UK Biobank is an "observational" study that characterizes a cross-section of individuals, it's not always straightforward to establish which factors cause which, but such results should help scientists to define much more precise questions to address in the future search for ways of preventing or treating brain disease. UK Biobank will be the world's largest health imaging study. The imaging is funded by the Medical Research Council, Wellcome Trust, and the British Heart Foundation. It was launched in April 2016 after a number of years of planning and consultation with a large number of health and scanning experts. With the ambitious goal of imaging 100,000 existing UK Biobank participants, it is creating the biggest collection of scans of internal organs, to transform the way scientists study a wide range of diseases, including dementia, arthritis, cancer, heart attacks and stroke. Today's paper describes the brain imaging part of UK Biobank, led by Professors Steve Smith and Karla Miller from the University of Oxford, and Professor Paul Matthews from Imperial College London. Professor Miller said: "We are using cutting-edge MRI scans and Big Data analysis methods to get the most comprehensive window into the brain that current imaging technology allows." "These results are just a first glimpse of what will emerge from this massive, rich dataset in the coming years. It is an unparalleled resource that will transform our understanding of many common diseases." Professor Matthews, Edmond and Lily Safra Chair and Head of Brain Sciences at Imperial, added: "These results are exciting, but merely provide a first hint of what can be discovered with the UK Biobank. This project also is a landmark because of the way it has been done: 500,000 volunteers across the U.K.
are donating their time to be part of it and more than 125 scientists from across the world contributed to the design of the imaging enhancement alone. Imperial College scientists played a major role in its inception and leadership as part of a team recruited by UK Biobank from a number of UK universities. This is a wonderful example of 'open science'." The paper reports first results from this remarkable data resource, which includes six different kinds of brain imaging done in the 30 minutes that each volunteer is in the brain scanner. Professor Smith explained: "We have 'structural imaging' - that tells us about brain anatomy – the shapes and sizes of the different parts of the brain. Another kind – 'functional MRI' - tells us about complex patterns of brain activity. Yet another kind – 'diffusion MRI' - tells us about the brain's wiring diagram. The rich and diverse information contained in these scans will reveal how the working of the brain can change with aging and disease; different diseases will best be understood through different combinations of information across these different images." UK Biobank has already scanned 10,000 participants, including images of the heart, body, bone and blood vessels in addition to brain scans. This will be by far the largest brain imaging study ever conducted; within another 5 years UK Biobank will have completed the scanning of 100,000 participants. One reason for needing such large numbers of participants is to have enough subjects to allow discovery of early, possibly subtle, markers of future disease risk, both for a range of common diseases and for rare neurological disorders like motor neuron disease. An important objective of the UK Biobank is to provide a resource for discovery of new insights into diseases like Alzheimer's, which demands scanning healthy subjects years or decades before they develop symptoms.
From the UK Biobank data, scientists anywhere can aim to learn much more about brain diseases - and their relationship to a broad range of other diseases or disease risks - to guide the development of earlier targeted treatment (or changes in lifestyle) that could in the future prevent major diseases from ever happening.
10.1038/nn.4393
Chemistry
High speed filming reveals protein changes during photosynthesis
Robert Dods et al. Ultrafast structural changes within a photosynthetic reaction centre, Nature (2020). DOI: 10.1038/s41586-020-3000-7 Journal information: Nature
http://dx.doi.org/10.1038/s41586-020-3000-7
https://phys.org/news/2020-12-high-reveals-protein-photosynthesis.html
Abstract Photosynthetic reaction centres harvest the energy content of sunlight by transporting electrons across an energy-transducing biological membrane. Here we use time-resolved serial femtosecond crystallography 1 using an X-ray free-electron laser 2 to observe light-induced structural changes in the photosynthetic reaction centre of Blastochloris viridis on a timescale of picoseconds. Structural perturbations first occur at the special pair of chlorophyll molecules of the photosynthetic reaction centre that are photo-oxidized by light. Electron transfer to the menaquinone acceptor on the opposite side of the membrane induces a movement of this cofactor together with lower amplitude protein rearrangements. These observations reveal how proteins use conformational dynamics to stabilize the charge-separation steps of electron-transfer reactions. Main Our biosphere depends on the electron-transfer reactions of photosynthesis as a primary source of energy. Photosystems and photosynthetic reaction centres form a family of integral membrane protein complexes found in plants, algae, cyanobacteria and photosynthetic bacteria that convert the energy of a captured photon into a charge-separated state. The photosynthetic reaction centre of the purple non-sulfur bacterium B. viridis ( Bv RC) contains three transmembrane subunits called H, L and M and a periplasmic subunit C. These subunits support four bacteriochlorophyll (BCh) molecules, two bacteriopheophytin (BPh) molecules, a tightly bound menaquinone (Q A ), a mobile ubiquinone (Q B ), a single non-haem iron and four haem cofactors (Fig. 1 ). Electron-transfer reactions originate at a special pair (SP) of strongly interacting bacteriochlorophylls that, in B. viridis , have an absorption maximum at 960 nm. 
Photo-oxidation of the SP liberates an electron that is transferred to the active branch BPh L within a few picoseconds, is transferred to the tightly bound menaquinone (Q A ) in less than a nanosecond and is transferred to the mobile ubiquinone (Q B ) in microseconds. SP + is reduced by subunit C, and a second photo-oxidation event transfers a second electron to Q B − , which is protonated from the cytoplasm and released into the membrane as ubiquinol (H 2 Q). Other proteins participate in a cyclic flow that returns electrons to subunit C and the net effect is that two protons are transported across an energy-transducing membrane for every photon absorbed. Fig. 1: Electron-transfer steps of the photosynthetic reaction centre of B. viridis . Cartoon representation of the H, L, M and C subunits. Cofactors are shown in black including the SP molecule, two monomeric BCh molecules, two BPh molecules, a tightly bound Q A molecule, a mobile Q B molecule, a non-haem iron (Fe 2+ ) and four haems. The approximate boundaries of the membrane are suggested in blue. The electron-transfer pathway: SP → BPh L → Q A is referred to as the A-branch. Approximate timescales for the first two electron-transfer events, from SP to BPh L and from BPh L to Q A , are shown. Electrons may tunnel between cofactors when they are separated by approximately 10 Å or less 3 . The primary electron-transfer step 4 from SP to BPh L occurs in 2.8 ± 0.2 ps across a distance of 10 Å by means of a two-step hopping mechanism via the monomeric BCh L molecule 5 and is more rapid than predicted by conventional Marcus theory. By contrast, the 9 Å electron-transfer step from BPh L to Q A has a single exponential decay time 6 of 230 ± 30 ps, which is consistent with conventional Marcus theory.
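For reference, the "conventional Marcus theory" invoked here predicts a nonadiabatic electron-transfer rate of the standard textbook form (this expression is not written out in the paper; symbols follow common usage):

```latex
% Nonadiabatic Marcus rate for electron transfer from donor to acceptor
k_{\mathrm{ET}} \;=\; \frac{2\pi}{\hbar}\,\lvert H_{\mathrm{AB}}\rvert^{2}\,
\frac{1}{\sqrt{4\pi\lambda k_{\mathrm{B}}T}}\,
\exp\!\left[-\frac{(\Delta G^{\circ}+\lambda)^{2}}{4\lambda k_{\mathrm{B}}T}\right]
```

Here \(H_{\mathrm{AB}}\) is the electronic coupling between donor and acceptor (which falls off roughly exponentially with distance, setting the ~10 Å tunnelling limit quoted above), \(\lambda\) is the reorganization energy of the cofactors and surrounding protein, and \(\Delta G^{\circ}\) is the driving force of the reaction.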
Coherent nuclear motions 7 and protein structural changes 8 have been suggested to influence the initial charge-transfer reactions of photosynthesis, yet the specific nature of these putative protein motions is unknown. Flash-freeze crystallographic trapping studies 9 , time-resolved Laue diffraction 10 and time-resolved serial femtosecond crystallography 11 , 12 , 13 , 14 (TR-SFX) have revealed structural changes in bacterial photosynthetic reaction centres 9 , 10 and cyanobacterial photosystem II (PSII) 11 , 12 , 13 , 14 that occur on the late microsecond-to-millisecond timescale, yet, to our knowledge, no time-resolved crystallographic studies on the timescale of the primary charge-separation reactions of photosynthesis have been reported. Here we apply time-resolved serial femtosecond crystallography 1 using an X-ray free electron laser (XFEL) to investigate the ultrafast structural response of Bv RC to light. We photo-excited the special pair with 150-fs pulses centred at 960 nm (Extended Data Fig. 1 ). X-ray pulses with a duration of 40 fs were generated at the Linac Coherent Light Source (LCLS) 2 and were used to record diffraction patterns from tens of thousands of microcrystals for the time points ∆ t = 1 ps, 5 ps (two repeats), 20 ps, 300 ps (two repeats) and 8 μs after photoexcitation (Extended Data Table 1 ). Time point ∆ t = 1 ps populates the photo-excited charge transfer state of the SP in which charge rearrangements have occurred within the bacteriochlorophyll dimer but are before the primary electron-transfer step; ∆ t = 5 ps and 20 ps are after the initial charge-transfer step and SP is oxidized and BPh L is reduced; ∆ t = 300 ps is longer than the time constant for electron transfer to Q A and menaquinone is reduced; and ∆ t = 8 μs corresponds to a meta-stable charge-separated state. 
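The state populations expected at each pump-probe delay follow from treating the quoted time constants as sequential first-order steps. A small sketch (variable names are illustrative; this is standard sequential kinetics, not code from the paper) reproduces, for example, the statement later in the text that roughly three-quarters of the photo-activated centres have Q A reduced by ∆t = 300 ps:

```python
import numpy as np

TAU1 = 2.8    # ps, SP* -> SP+ BPhL-  (primary charge separation)
TAU2 = 230.0  # ps, BPhL- -> QA-      (electron transfer to menaquinone)

def populations(t):
    """Fractions of photo-excited RCs in each state at delay t (in ps),
    modelling SP* -> BPhL- -> QA- as sequential first-order steps."""
    k1, k2 = 1.0 / TAU1, 1.0 / TAU2
    sp_star = np.exp(-k1 * t)                                  # still excited
    bphl = k1 / (k2 - k1) * (np.exp(-k1 * t) - np.exp(-k2 * t))  # BPhL- populated
    qa = 1.0 - sp_star - bphl                                  # QA- populated
    return sp_star, bphl, qa
```

At ∆t = 1 ps most centres are still in the photo-excited state; by 20 ps the BPh L − state dominates; by 300 ps about 73% have the semiquinone formed, consistent with the delays chosen for the TR-SFX datasets.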
Extended Data Figure 2 presents overviews of the F o (light) − F o (dark) isomorphous difference Fourier electron density maps (‘light’ corresponds to data collected from photo-activated microcrystals whereas ‘dark’ corresponds to data collected from microcrystals that were not photo-activated) for all time points. Difference electron density features are visible above 4.0 σ (where σ is the root mean square electron density of the map) near SP for all time points and strong features associated with Q A are visible for ∆ t ≥ 300 ps (Extended Data Table 2 ). In contrast to ultrafast TR-SFX studies of bacteriorhodopsin 15 , photoactive yellow protein 16 , rsEGFP 17 and bacterial phytochromes 18 in which ultrafast structural changes are driven by the movements of atoms owing to a photo-isomerization event, TR-SFX measurements of Bv RC reveal a knock-on effect on the protein structure owing to the light-induced redistribution of charge. Electric-field-induced conformational changes have been observed when fields of the order 10 8 V m −1 are applied across a protein crystal 19 and this is the same order of magnitude as electric-field perturbations owing to the movement of an electron within the Bv RC. Recurring changes of electron density are visible as positive difference electron density in the region of overlap between the two bacteriochlorophylls SP L and SP M of the special pair, and complementary negative difference electron density features are visible primarily associated with SP M (Fig. 2 , Extended Data Fig. 3 , Extended Data Table 2 and Supplementary Video 1 ). Singular value decomposition (SVD) of all seven difference Fourier electron density maps (Fig. 2e ) reveals that the strongest positive and several of the strongest negative difference electron density features of the principal SVD component are associated with the SP (Extended Data Table 2 ). Quantification of electron-density changes 20 within the Bv RC cofactors (Fig. 
2f ) and statistical tests against control difference Fourier electron density maps (Methods) provide a very high level of confidence ( P ≤ 0.001) (Extended Data Table 3 ) that these recurring difference electron density features do not arise by chance. Therefore, photoexcitation causes the bacteriochlorophylls of SP to move closer together and the bending (an out-of-plane distortion) of SP M could explain these observations. An out-of-plane distortion was used to model difference electron density features observed as carbon monoxide was photo-dissociated from the haem of myoglobin 21 (Extended Data Fig. 4 ). Nonplanar distortions of chlorin and bacteriochlorin rings are observed in PSII and RC from Rhodobacter sphaeroides owing to interactions with the surrounding protein 22 and nonplanar porphyrins are also more-easily oxidized than planar porphyrins 23 , 24 . This suggests that the distortion of SP in advance of the primary charge-separation event (Fig. 2a ) could enhance the yield of the primary charge-transfer reaction, which has been optimized by evolution to achieve almost perfect quantum efficiency 25 . Fig. 2: Light-induced electron density changes in Bv RC at the site of photo-oxidation. a , Experimental F o (light) − F o (dark) difference Fourier electron density map for ∆t = 1 ps. b , Difference Fourier electron density map for ∆ t = 5 ps (dataset a). c , Difference Fourier electron density map for ∆ t = 20 ps. d , Difference Fourier electron density map for ∆ t = 300 ps (dataset a). e , Principal component from SVD analysis of all seven experimental difference Fourier electron density maps. All maps are contoured at ±3.2 σ . Blue, positive difference electron density; gold, negative difference electron density. f , Relative amplitudes of difference electron density features integrated within a 4.5 Å sphere 20 centred on the BvRC cofactors (Extended Data Fig. 3j ). 
The colour bars represent (from left to right): cyan, ∆ t = 1 ps; blue, ∆ t = 5 ps, datasets a and b (in that order); purple, ∆ t = 20 ps; red, ∆ t = 300 ps, datasets b and a (in that order); mustard, ∆ t = 8 μs. AU, arbitrary units. Source data . Full size image When the C subunit is fully reduced, an electron is transferred from haem 3 to SP + in less than a microsecond 26 . The above reasoning indicates that SP + may be more-easily reduced should SP M return to a planar geometry before this electron transfer occurs. This hypothesis is consistent with our experimental observations as the amplitude of the positive difference electron density feature between the SP bacteriochlorophylls increases from ∆ t = 1 ps to 20 ps, decreases for ∆ t = 300 ps and is not significant for ∆ t = 8 μs (Fig. 2 , Extended Data Fig. 3 and Extended Data Table 2 ). Moreover, neither TR-SFX studies of the S2 to S3 transition of cyanobacteria PSII 13 (∆ t = 150 μs and 400 μs) nor TR-Laue diffraction studies of Bv RC 10 (∆ t = 3 ms) have reported a positive difference electron density feature in the region of overlap between the SP of (bacterio)chlorophylls, which suggests that this feature has decayed. Charge rearrangements cause SP + to move up to 0.3 Å towards the M subunit by ∆ t = 300 ps and the side chains of both His173 L and His200 M adjust to preserve their ligating interactions with the magnesium ions of SP + ; similarly, His168 L and Tyr195 M adjust their conformation to maintain their hydrogen-bond interactions to SP L and SP M , respectively. 
These structural perturbations are revealed by paired negative and positive difference electron density features on the side chain of His173 L in the principal SVD components calculated from both the early (1 ps, both 5 ps, 20 ps) and late (both 300 ps, 8 μs) subsets of TR-SFX data, whereas positive difference electron density features associated with the side chains of His200 M and Tyr195 M become noticeably stronger for the late subset of data (Extended Data Fig. 3i, j , Extended Data Table 2 and Supplementary Video 1 ). These observations suggest that SP L moves towards subunit M slightly in advance of SP M , which may be owing to dielectric asymmetry within photosynthetic reaction centres 27 , 28 . Dielectric asymmetry is thought to underpin the phenomenon that electron transfer occurs only along the A-branch 27 (as defined in Fig. 1 ) in purple bacteria RCs and PSII. An electron moves from SP to BPh L in 2.8 ± 0.2 ps (ref. 4 ) and from BPh L to Q A in 230 ± 30 ps (ref. 6 ). The tightly bound menaquinone is therefore neutral for ∆ t = 1 ps, 5 ps and 20 ps; three-quarters of the photo-activated population are reduced to semiquinone by ∆ t = 300 ps; and essentially all photo-activated molecules have Q A reduced at ∆ t = 8 μs. Our difference Fourier electron density maps confirm these expectations as the few difference electron density features visible within the Q A binding pocket for ∆ t ≤ 20 ps are isolated, whereas more-continuous paired positive and negative difference electron density features are visible for ∆ t ≥ 300 ps (Fig. 3 and Extended Data Fig. 5 ). These recurring features of the late subset of TR-SFX data (both 300 ps and 8 μs) produce strong difference electron density features in the principal SVD component that are associated with Q A and its hydrogen-bond interaction with His217 M (Fig. 
3d , Extended Data Table 2 and Supplementary Video 2 ) and statistical tests establish that these recurring changes cannot be ascribed to noise ( P ≤ 0.0125) (Extended Data Table 3 ). Structural refinement models ascribe these observations to a twist and translation of the semiquinone that brings the negatively charged head group approximately 0.2 Å closer to the positive charge of the non-haem Fe 2+ (Fig. 3f ) and thus stabilizes the reduced form of this cofactor. This interpretation is supported by hybrid quantum mechanics–molecular mechanics (QM/MM) calculations that predict that the Q A to His217 M hydrogen-bond is shortened by 0.17 Å when Q A is reduced (Extended Data Fig. 6f ) and suggest that semiquinone binding is stabilized by approximately 36 kJ mol −1 owing to structural changes (Methods and Extended Data Fig. 6g, h ), which is a sizeable fraction of the energy (125 kJ mol −1 ) of a 960 nm photon. Similar conclusions were drawn from a previously published analysis using a density functional theory formalism 29 . Light-induced electron-density changes were visible for Q A in TR-SFX studies of the S2 to S3 transition of cyanobacteria PSII 13 for the time points 150 μs and 400 μs, light-induced movements of the mobile quinone Q B were also observed in PSII 11 , 12 , 13 , 14 for delays of hundreds of milliseconds, and larger light-induced motions of Q B were reported in freeze-trapping studies of the Rhodobacter sphaeroides photosynthetic RC 9 . Fig. 3: Light-induced electron density changes in Bv RC within the Q A binding pocket. a , Experimental F o (light) − F o (dark) difference Fourier electron density map for ∆ t = 5 ps (dataset a). b , Difference Fourier electron density map for ∆ t = 300 ps (dataset a). c , Principal components from the SVD analysis of the first four experimental difference Fourier electron density maps (∆ t = 1 ps, 5 ps (dataset a), 5 ps (dataset b) and 20 ps).
d , Principal components from the SVD analysis of the final three experimental difference Fourier electron density maps (∆ t = 300 ps (dataset a), 300 ps (dataset b) and 8 μs). All maps are contoured at ±3.0 σ . Blue, positive difference electron density; gold, negative difference electron density. e , Difference Fourier electron density map for ∆ t = 300 ps (dataset a) showing the protein immediately surrounding Q A and contoured at ±3.5 σ . f , Superposition of the refined structures for the dark structure (yellow, Q A in black) and ∆ t = 300 ps (purple structure). Full size image For ∆ t = 300 ps, paired negative and positive difference electron density features are associated with the cytoplasmic portions of transmembrane helices D M and E M (Fig. 3e ) and indicate that Bv RC adjusts its structure in response to the movement of the semiquinone within the Q A binding pocket (Fig. 3f ). A more quantitative analysis (Methods, Extended Data Fig. 7 and Supplementary Video 3 ) suggests that low-amplitude protein motions begin to arise already by ∆ t = 1 ps (Fig. 4a ) as observed in TR-SFX studies of bacteriorhodopsin 15 , 20 and myoglobin 21 ; the amplitudes of these motions increase with time and by ∆ t = 5 ps larger displacements are observed near the SP + and BPh L − cofactors (Fig. 4b ); and for ∆ t = 300 ps protein conformational changes extend throughout the A-branch of the electron-transfer pathway from SP + to Q A − (Fig. 4c ). When the same representation is used to depict protein conformational changes predicted from QM/MM calculations (Supplementary Video 4 ) almost no structural changes are expected for the photo-excited charge-transfer state (Fig. 4d ); protein movements arise near the charged cofactors in the SP + BPh L − charge-separated state (Fig. 4e ); and structural changes extend throughout the A-branch in the SP + Q A − charge-separated state (Fig. 4f ). 
These findings demonstrate that Bv RC is not a passive scaffold but rather that low-amplitude protein motions engage in a choreographed dance with electron movements taking the lead and protein conformational changes following. Conversely, as the structure of the protein adjusts to stabilize these charge rearrangements, the energetic barriers that hinder the reverse electron-transfer reaction increase, thereby extending the lifetime of the charge-separated species and enhancing the overall efficiency of photosynthesis. Fig. 4: Structural response of Bv RC to electron-transfer events. a , Recurring movements of Cα atoms for ∆ t = 1 ps quantified using full-occupancy structural refinement against 100 randomly resampled TR-SFX datasets. b , Recurring movements of Cα atoms for ∆ t = 5 ps (dataset a) using the same representation as in a . c , Recurring movements of Cα atoms for ∆ t = 300 ps (dataset a) using the same representation as in a . Recurring movements are represented as error-weighted mean ratios relative to 100 control structural refinements (Methods) coloured from grey (<80% of the maximum error-weighted mean ratio) to red (≥95% of the maximum error-weighted mean ratio). An identical representation is given for all time points in Extended Data Fig. 7 . d , Movements of Cα atoms estimated from QM/MM energy-minimization calculations associated with the SP photo-excited state and all other cofactors in resting state: SP*, BPh L 0 , Q A 0 (Methods). e , Movements of Cα atoms estimated from QM/MM energy-minimization calculations associated with the SP photo-oxidized state and BPh L reduced state: SP + BPh L − Q A 0 . f , Movements of Cα atoms estimated from QM/MM energy-minimization calculations associated with the SP photo-oxidized state and Q A reduced state: SP + BPh L 0 Q A − . Movements are coloured from grey (no movements) to red (maximum Cα motions). Transmembrane helices are drawn as rods.
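The SVD used throughout this analysis to pull out recurring features from sets of difference Fourier maps (Figs. 2e, 3c, d) can be sketched as follows. This is a generic illustration on flattened map arrays, not the authors' analysis code; the function name is invented:

```python
import numpy as np

def principal_difference_component(maps):
    """First SVD component across a stack of difference density maps.

    maps : (n_maps, n_voxels) array; each row is a flattened
           Fo(light) - Fo(dark) difference map for one time delay.
    Returns the shared voxel-space component and its amplitude per map.
    """
    A = np.asarray(maps, dtype=float)
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    component = Vt[0]            # voxel pattern common to the maps
    amplitudes = U[:, 0] * s[0]  # its weight in each time-delay map
    return component, amplitudes
```

Features of the principal component that recur across independent time-point maps, rather than appearing in only one map, are what the statistical tests described in the text then assess against noise.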
Full size image In Marcus theory, the total potential energy of an electron donor and its surroundings must be equal to that of the electron acceptor and its surroundings if an electron is to tunnel from donor to acceptor 3 . Fluctuations in the reorganization energy around protein cofactors are therefore essential to facilitate electron-transfer reactions. Efforts aimed at understanding how protein conformational dynamics control the rates of electron transfer between cofactors 8 , 30 have been hampered by a lack of experimental tools that characterize protein structural changes on the relevant timescales. Our observations provide an experimental framework for extending the standard description of electron-transfer reactions in photosynthesis 3 to explicitly incorporate protein structural changes. Electron-transfer reactions are ubiquitous in nature and therefore a more-nuanced understanding of the interplay between protein structural dynamics and the movement of electrons has far-reaching biochemical importance. Methods Protein production and purification The expression and purification of the photosynthetic reaction centre from B. viridis cells was adapted from a previously published study 31 . Cells were disrupted by three rounds of sonication followed by centrifugation in a JA20 rotor at 15,000 rpm for 20 min to recover the membrane suspensions. Membranes were then purified by ultracentrifugation at 45,000 rpm for 45 min in a Ti45 rotor. Membranes were homogenized in 20 mM Tris-HCl, pH 8.5 and diluted to an optical density at 1,012 nm (OD 1,012 ) = 10. Membranes were then solubilized in 4% lauryldimethylamine- N -oxide (LDAO) for 3 h at room temperature. Unsolubilized membranes were removed by ultracentrifugation at 45,000 rpm for 75 min in a Ti70 rotor. Bv RC protein was purified by loading the supernatant onto a 250-ml POROS 50-μm HQ ion-exchange column equilibrated with wash buffer (20 mM Tris-HCl, pH 8.5, 1% LDAO).
The column was washed with 2 l of wash buffer with 5% elution buffer (20 mM Tris-HCl, pH 8.5, 1 M NaCl, 1% LDAO) and eluted with an increasing concentration of elution buffer over 20 column volumes. Fractions with an absorbance ( A ) ratio of A 280 / A 830 < 3.5 were pooled and concentrated in 100-kDa molecular-weight cut-off concentration tubes (Vivaspin) to a volume of 10 ml. This was loaded in 5-ml batches onto a HiPrep 26/60 Sephacryl S-300 column (GE Healthcare) equilibrated with SE buffer (20 mM Tris-HCl, pH 8.5, 100 mM NaCl, 0.1% LDAO) and eluted into 1.8-ml fractions. Fractions with an A 280 / A 830 < 2.6 were pooled and concentrated, followed by a 20-fold dilution in final protein buffer (20 mM NaH 2 PO 4 /Na 2 HPO 4 , pH 6.8, 0.1% LDAO, 10 μM EDTA) and then concentrated again to 20 mg ml −1 . Samples were flash-frozen in liquid nitrogen and stored at −80 °C. Protein crystallization Sitting drops of 20 μl were set up with a 1:1 ratio of protein solution (10 mg ml −1 ) and precipitant solution (3.6 M ammonium sulfate, 6% heptane-1,2,3-triol, 20 mM NaH 2 PO 4 /Na 2 HPO 4 , pH 6.8) and equilibrated against a 1-ml reservoir of 2 M ammonium sulfate. Large crystals grew at 4 °C in 3 days. Crystals were collected by pipette and crushed mechanically to create a seed stock by vortexing with seed beads for approximately 20 min with occasional cooling on ice 32 . For the XFEL experiment in April 2015 (run a), new 18.5-μl sitting-drop vapour-diffusion crystallization drops were set up in order to yield large numbers of microcrystals. In these experiments, the protein concentration was 8.5 mg ml −1 and a protein:precipitant ratio of 10:7.5 was used in the drops. Undiluted crystal seed stock (1 μl) was spiked into the drops for a final v/v concentration of 5.4%. Crystallization drops were then mixed by pipette and covered with a glass cover slide. Rod-like crystals grew over 5 days at 4 °C and were 10–20 μm in the longest dimension.
Microcrystals for the experiment in June 2016 (run b) were prepared as above, but with an additional round of microseeding using crushed microcrystals to seed an additional round of microcrystal growth 32 . Microcrystals were collected by pipette and concentrated up to threefold by centrifugation at 1,000 g for 1 min followed by removal of the supernatant. These crystals were thicker and, although diffracting to a higher resolution, they highlighted the compromise that is inherent to TR-SFX as a lower excited-state occupancy was usually observed when working with crystals of higher optical density. Sample injection and data collection Microcrystals were transferred from Eppendorf tubes to a sample reservoir using a syringe and passing the microcrystal slurries through a stainless-steel 20-μm filter (VICI) or a 20-μm nylon filter (Sysmex). The reservoir was loaded into a temperature-controlled rocking chamber and injected into the XFEL through a gas dynamic virtual nozzle (GDVN) 33 using an internal diameter of 75 μm. The microjet used a microcrystal suspension flow rate of 20 μl min −1 and was focused to a 10-μm diameter using helium gas. The X-ray beam was aligned to interact with the liquid jet as close to the tip of the GDVN as practical and before Rayleigh break-up of the microjet. Diffraction data were collected at 293 K at the CXI beam line 34 of the LCLS XFEL during beamtime awarded in April 2015 (run a) and June 2016 (run b). Diffraction data were recorded on a Cornell-SLAC Pixel Array detector 35 . The X-ray wavelengths and equivalent pulse energies were 1.89 Å (6.56 keV) in 2015 and 1.31 Å (9.49 keV) in 2016. An X-ray pulse duration of 36 fs was used in 2015 and 45 fs in 2016. The XFEL beam was focused to a 3-μm 2 spot for both experiments. The detector was located 89 mm from the microjet in 2015 and 145 mm from the microjet in 2016. 
Diffraction data were collected at a repetition rate of 120 Hz from microcrystals that were not exposed to any optical laser pump (dark state) and for five time points corresponding to Δ t = 1 ps, 5 ps, 20 ps, 300 ps and 8 μs after photo-excitation. The time points Δ t = 5 ps and 300 ps were repeated in both 2015 and 2016 and are referred to as datasets a and b, respectively. Laser photoexcitation An optical Ti:Sa pump laser 150 fs in duration was focused to a spot size of 190 μm full-width half-maximum (FWHM) (323 μm 1/e 2 ) and aligned to overlap with the LCLS X-ray pulse. The LCLS timing tool 36 provided a timing accuracy of ±200 fs for the time point, ∆ t , between the arrival of the optical pump laser and the X-ray probe. A pump-laser wavelength of 960 nm was used to photo-excite Bv RC microcrystals, and this wavelength is at the absorption maximum of the special pair ( ε 960 ≈ 100,000 M −1 cm −1 ). The pump laser energy per pulse was 11.8 μJ in April 2015 and 11.0 μJ in June 2016. For an idealized Gaussian beam, 86.5% of this light will pass through a spot with diameter 1/e 2 and 50% of this light will pass through a spot with diameter FWHM. Thus the average fluence within the FWHM spot can be estimated as 25 mJ cm −2 and 23 mJ cm −2 , which equates to a pump-laser power density of 138 GW cm −2 for the 2015 experiment and 129 GW cm −2 for the 2016 experiment. This calculation defines the units used throughout to specify the laser power density. Both values are above 30–100 GW cm −2 , which has been recommended as an upper threshold to avoid nonlinear effects in bacteriorhodopsin 37 , 38 . Extreme nonlinear absorption was observed as ultrafast sample heating in time-resolved X-ray scattering studies of Bv RC when pumped with 800-nm light 39 . When using 800 nm to photo-excite Bv RC, it is the BCh and BPh cofactors (rather than the SP) that absorb light ( ε 800 ≈ 180,000 M −1 cm −1 ). The pump-laser fluence used in that study 39 was 1,560 GW cm −2 . 
Ultrafast sample heating within a GDVN liquid microjet has also been measured as a function of the 800-nm pump-laser fluence using time-resolved X-ray scattering (figure 28 of ref. 40 ). These measurements show that the energy deposited into Bv RC samples is proportional to the pump-laser fluence (a linear response) up to 270 GW cm −2 and that the measured heating then varies quadratically (a nonlinear response) above a pump-laser fluence of 355 GW cm −2 . Thus, an idealized assumption of a perfectly aligned Gaussian beam may not be realistic, and/or large losses occur as the incoming laser pulse is reflected from the surface of a GDVN liquid microjet, and/or thresholds 37 , 38 of 30–100 GW cm −2 do not apply in this context when Bv RC is photo-excited at 800 nm. When 960-nm light is used to photo-excite the SP of Bv RC it is probably more difficult to induce nonlinear effects because the photo-excited state SP* has an absorption maximum that is red-shifted by 70 nm relative to the ground state 41 and hole-burning 42 has been observed in Bv RC such that SP* is effectively transparent to the incoming light. Moreover, the absorbance of Bv RC at 960 nm is only 56% of its absorbance at 800 nm and therefore nonlinear effects are likely to arise at higher power densities when using 960 nm rather than 800 nm to photo-excite Bv RC. Nonlinear ultrafast heating 40 is observed in Bv RC delivered using a GDVN liquid microjet and photo-excited at 800 nm only above a power density of 355 GW cm −2 . Therefore, the 960-nm pump-laser power densities of 138 GW cm −2 and 129 GW cm −2 used in this study are below the threshold at which nonlinear effects may reasonably be anticipated to be considerable. These conclusions are supported by time-resolved infrared spectroscopy measurements (Extended Data Fig. 1 ).
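The quoted power densities follow directly from the pulse energies, focal-spot size and pulse duration. A minimal sketch of this style of estimate, assuming an idealized Gaussian beam (so that half of the pulse energy falls within the FWHM disc) and averaging over the 150-fs pulse; the function name is illustrative, not from the paper:

```python
# Estimate of the average pump-laser power density within the FWHM focal
# spot. Assumptions: an idealized Gaussian beam (50% of the pulse energy
# inside the FWHM disc) and a simple average over the 150-fs pulse.
import math

def power_density_gw_cm2(pulse_energy_j, fwhm_um, pulse_fs):
    """Average power density (GW cm^-2) inside the FWHM spot."""
    radius_cm = 0.5 * fwhm_um * 1e-4                 # FWHM radius in cm
    area_cm2 = math.pi * radius_cm ** 2              # area of the FWHM disc
    fluence_j_cm2 = 0.5 * pulse_energy_j / area_cm2  # 50% of energy in FWHM
    return fluence_j_cm2 / (pulse_fs * 1e-15) / 1e9  # W cm^-2 -> GW cm^-2

print(power_density_gw_cm2(11.8e-6, 190.0, 150.0))   # 2015: ~139 GW cm^-2
print(power_density_gw_cm2(11.0e-6, 190.0, 150.0))   # 2016: ~129 GW cm^-2
```

With the 2015 parameters (11.8 μJ, 190 μm FWHM, 150 fs) this reproduces a value close to the quoted 138 GW cm−2, and the 2016 parameters give close to 129 GW cm−2.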
Time-resolved infrared spectroscopy Time-resolved vibrational spectroscopy measurements were performed with a near-infrared pump and mid-infrared probe set-up using a regenerative amplifier (Spitfire Ace, Spectra Physics) to deliver pulses centred at 800 nm (1.2 mJ, 5 kHz). The amplifier output was used to pump a TOPAS-TWINS (Light Conversion) capable of generating tuneable femtosecond pulses at two different wavelengths. One path was used to generate mid-infrared probe light centred at 6,000 nm via difference frequency generation whereas the other path generated 960-nm pump pulses via second harmonic generation of the idler beam. The 960-nm beam was chopped to 2.5 kHz and delayed in time relative to the probe pulses using an optical delay line. Two weak replicas derived from the mid-infrared beam were used as probe and corresponding reference. Both probe and reference were dispersed in a Horiba spectrograph (grating with 75 grooves mm −1 ) and detected and integrated on a double-row MCT array with 64 pixels each on a shot-to-shot basis using a commercial detection system (Infrared Systems). Samples of Bv RC were prepared in a customized cell by enclosing around 15 μl of solution ( Bv RC at about 0.4 mM in D 2 O buffer) between two 2-mm thick CaF 2 windows separated by a 25-μm spacer. Probe and reference beams were focused at the sample position and collimated using 90° off-axis parabolic mirrors. The pump beam was focused using a 30-cm lens and overlapped with the probe beam at its focus. The sample cell was placed where the pump and probe beams overlap and translated continuously perpendicular to the beam direction during data acquisition. The focal spot size of the pump beam was determined using knife-edge scans and yielded perpendicular 1/e 2 radii of 57 μm and 56 μm. Different pump fluences were adjusted using reflective neutral density filters (Edmund Optics).
For each fluence, 12 repeats of 5 time points (1,000 pump shots per time point and repeat, at delays of −50, 1, 2, 5 and 300 ps) were recorded and less than 5% of shots were rejected during data treatment. Signals were calculated by subtracting consecutive pump-on shots from pump-off shots followed by application of the noise reduction algorithm 43 , 44 . The spectral resolution is <5 cm −1 . The results of these measurements are presented in Extended Data Fig. 1 . Data processing Images containing more than 20 diffraction spots were identified as diffraction hits by Cheetah 45 . Cheetah converted the raw detector data into the HDF5 format and data were then processed using the software suite CrystFEL version 0.6.2 46 , 47 . Crystals were indexed using a tetragonal unit cell ( a = b = 226.4 Å, c = 113.7 Å, α = β = γ = 90°). Scaling and merging were performed using Monte Carlo methods with the same software. Data from the dark state and photo-excited states were scaled together using the custom dataset-splitting option in the CrystFEL partialator module. Structure factors were calculated from merged intensities by the CCP4 module TRUNCATE 48 and molecular replacement was performed using the CCP4 module Phaser 49 with the ground-state Bv RC structures solved with XFEL radiation (PDB codes 5O4C and 5NJ4) as search models. Statistics for data collection and refinement are detailed in Extended Data Table 1 . Electron density difference maps Isomorphous F o (light) − F o (dark) difference Fourier electron density maps were calculated using phases from the refined dark-state structures: maps for time points Δ t = 5 ps (dataset a) and 300 ps (dataset a) were calculated against data and coordinates of PDB entry 5O4C, whereas maps for time points Δ t = 1 ps, 5 ps (dataset b), 20 ps, 300 ps (dataset b) and 8 μs were calculated against data and coordinates of PDB entry 5NJ4.
Thus all difference electron density map calculations used only data collected during the same experiment. Difference Fourier electron density maps represent measured changes in X-ray diffraction intensities as changes in electron density without bias towards the structural model of the photo-activated state. The technique is extremely sensitive to small changes in electron density 50 and reveals more subtle features than are apparent from 2 mF o − DF c electron density maps alone ( m is the figure of merit and D is estimated from coordinate errors). A Bayesian weighting calculation script 51 using CNS software 52 was also used to analyse the difference Fourier electron density maps. In this procedure, differences in the structure factor amplitudes were weighted by the product of the figure of merit of the ground state structure reflections and of a weighting term, w (equation 14 of a previously published study 53 ), which was calculated using Bayesian statistics developed to improve the signal-to-noise ratio 53 . For six of the seven datasets, the recurring difference electron density features were slightly strengthened by this step. The exception was time point ∆ t = 8 μs, which has difference electron density features that are weaker than for the other maps (Fig. 2f ) and this appears to be due to a lower occupancy of the charge-separated state in these microcrystals. It is possible that a fraction of the photo-oxidized SP + population is re-reduced by the C subunit by ∆ t = 8 μs, which is longer than the timescale of this electron-transfer step 26 . However, no efforts were made to reduce the C subunit when preparing microcrystals and a similar occupancy (30 ± 5%) is observed to persist in time-resolved spectroscopy measurements on crystals for up to millisecond delays 10 .
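The weighting of difference structure-factor amplitudes can be sketched as follows. The weight used here is the commonly applied Bayesian "q" weighting, w = 1/(1 + σ²ΔF/⟨σ²ΔF⟩ + ΔF²/⟨ΔF²⟩), which is an assumption of this sketch; the analysis itself used equation (14) of ref. 53 as implemented in the CNS script:

```python
# Sketch of per-reflection weighting of difference structure-factor
# amplitudes. The weight below is the commonly used Bayesian "q" weighting;
# it stands in for equation (14) of ref. 53 used in the actual analysis.
import numpy as np

def weighted_differences(f_light, f_dark, sig_light, sig_dark):
    """Return w * (F_light - F_dark) for each reflection."""
    d_f = f_light - f_dark
    sig2 = sig_light ** 2 + sig_dark ** 2        # variance of the difference
    w = 1.0 / (1.0 + sig2 / sig2.mean() + d_f ** 2 / (d_f ** 2).mean())
    return w * d_f

# Synthetic amplitudes: small light-minus-dark changes plus noise.
rng = np.random.default_rng(0)
f_dark = rng.uniform(10.0, 100.0, size=1000)
f_light = f_dark + rng.normal(0.0, 1.0, size=1000)
sig = np.full(1000, 1.0)
wdf = weighted_differences(f_light, f_dark, sig, sig)
# Noisy or outlying differences are down-weighted relative to typical ones.
```

Because the weight never exceeds 1, every weighted difference is bounded by the raw difference, which is what suppresses outliers and improves the signal-to-noise ratio of the resulting maps.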
SVD analysis The SVD analysis of difference Fourier electron density maps was performed using an in-house-generated code written in Python that is based on a previously described approach 54 . As has been discussed 55 , SVD may serve as a noise filter to enhance the signal across a sequence of difference Fourier electron density maps. This step contains the assumption that the overall mechanism is linear and that changes in electron density are similar over the selected time windows. When applying SVD, we evaluate the expression [ U , Σ , V ] = SVD( A ), where A is a matrix of n difference Fourier electron density maps containing m elements; U is an n × n unitary matrix; Σ is an n × m rectangular matrix containing n diagonal elements (the singular values) arranged in decreasing order and all other matrix elements are zero; and the first right singular vector (the first row of the matrix V ) is referred to as the principal component. Results from SVD analysis of all seven electron density maps are presented in Fig. 2e and Extended Data Fig. 3l, m . Results from SVD analysis deriving from the first four time points (∆ t = 1 ps, 5 ps of datasets a and b, 20 ps) and the last three time points (∆ t = 300 ps of datasets a and b, 8 μs) are shown in Fig. 3c, d , Extended Data Figs. 3 h, i, 5h, i and Supplementary Videos 1 , 2 . This separation of the maps is motivated by the fact that photo-activated Bv RC molecules have menaquinone oxidized for the first subset of time points yet most menaquinone molecules of photo-activated Bv RC are reduced for the second subset of time points. Structural refinement of photo-excited states Isomorphous F o (light) − F o (dark) difference Fourier electron density maps were inspected in COOT. Structural refinement was performed using Phenix 56 . 
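The SVD noise-filtering step described above can be sketched with NumPy, treating each flattened difference Fourier electron density map as one row of the matrix A; the maps below are synthetic stand-ins:

```python
# Sketch of the SVD noise filter: stack n flattened difference maps into an
# n x m matrix A, decompose A = U S V^T, and take the first right singular
# vector (first row of V^T) as the principal component -- the dominant
# difference-density signal shared across the maps.
import numpy as np

def principal_component(maps):
    """maps: sequence of equal-length 1-D arrays (flattened maps)."""
    a = np.vstack(maps)                              # n x m matrix A
    u, s, vt = np.linalg.svd(a, full_matrices=False)
    return s, vt[0]          # singular values, first right singular vector

rng = np.random.default_rng(1)
signal = rng.normal(size=5000)                       # shared "signal" map
maps = [c * signal + 0.1 * rng.normal(size=5000) for c in (1.0, 0.8, 0.6)]
s, pc = principal_component(maps)
# s[0] dominates s[1:], and pc is (anti)parallel to the shared signal.
```

The separation of the seven maps into early and late subsets, as described above, amounts to running this decomposition twice on two smaller matrices A.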
A model was first placed within the unit cell using rigid body refinement followed by multiple rounds of partial-occupancy refinement in which the SP, BCh L , BPh L and Q A portions of transmembrane helices E L , D L , E M and D M , as well as connecting loops, and additional residues near cofactors (153–178, 190, 230 and 236–248 of subunit L and 193–221, 232, 243–253, 257–266 of subunit M) were allowed to adopt a second conformation with 30% occupancy and the dark-state structure (PDB entry 5O4C) was held fixed. The occupancy of 30% was chosen by assessing the results from partial-occupancy refinement when the occupancy was allowed to vary and was imposed for all structural refinements for consistency. Results from structural refinement were compared against the difference electron densities and some manual adjustments were made using COOT 57 . Refinement statistics are displayed in Extended Data Table 1 . Validation of structure geometry was performed using MOLPROBITY 58 and PROCHECK 59 . Structural changes were also validated by calculating simulated difference Fourier electron density maps from the refined structures 10 , 20 (Extended Data Figs. 2 l, 4j ). Structural analysis of large-scale protein motions The high multiplicity of SFX data was exploited for structural analysis by randomly selecting a subset of experimental observations from within each SFX dataset to create 100 separate (but not independent) serial crystallography datasets for the two resting state datasets and the seven photo-activated datasets, amounting to 900 resampled datasets in total. For each of these resampled datasets the mean and uncertainty estimates ( σ ) for every unique Bragg reflection were determined. Structural refinement over a cycle of 100 rigid body and 100 isotropic restrained refinements with all atoms allowed to move and with every atom having 100% occupancy was then performed against each of these 900 resampled datasets using PDB entry 5NJ4 as a starting model. 
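The per-dataset resampling of observations can be sketched as follows; the subset size (70% of the observations of each reflection) is an illustrative assumption, as the text does not specify the fraction drawn:

```python
# Sketch of creating one resampled serial-crystallography dataset: for each
# unique reflection, draw a random subset of its redundant observations and
# record a mean intensity and an uncertainty estimate (standard error).
# The 70% subset fraction is an assumption for illustration.
import numpy as np

def resample_dataset(observations, fraction=0.7, rng=None):
    """observations: dict mapping hkl tuples -> 1-D arrays of intensities."""
    rng = np.random.default_rng() if rng is None else rng
    merged = {}
    for hkl, obs in observations.items():
        n = max(2, int(fraction * len(obs)))          # keep at least 2 obs
        sample = rng.choice(obs, size=n, replace=False)
        merged[hkl] = (sample.mean(), sample.std(ddof=1) / np.sqrt(n))
    return merged

obs = {(1, 0, 0): np.arange(10.0), (0, 0, 2): np.arange(5.0) + 3.0}
resampled = resample_dataset(obs, rng=np.random.default_rng(0))
```

Repeating this 100 times per SFX dataset yields the 100 separate (but not independent) datasets against which the full-occupancy refinements were run.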
R free values ranging from 22.1% to 23.1% were recovered. Coordinate errors associated with each individual structural refinement are estimated 60 to be ≤0.2 Å. The distances between the Cα atoms of the photo-activated and resting Bv RC structures were compared pairwise using the miller package of CCTBX 61 . A 100 × 100 Euclidean distance matrix was then calculated for every Cα atom and every time point according to: Δ r ij ∆ t ,dark = | r i ∆ t − r j dark |, where i and j vary from 1 to 100 and denote resampled dataset numbers, ∆ r ij depicts the distance separating the Cα coordinates of datasets i and j , and r i ∆ t and r j dark are the refined coordinates obtained from the photo-activated or dark structures, respectively. A second-order Taylor series expansion was then used to estimate the mean and s.e.m. associated with the ratio Δ r ij ∆ t ,dark /Δr ij dark,dark arising from coordinate variations within each set of 100 structural refinements. This expansion leads to the expression: $$\begin{array}{l}{\rm{Error}}-{\rm{weighted}}\,{\rm{mean}}\,{\rm{ratio}}=\frac{\langle \Delta {{r}_{ij}}^{{\rm{state}},{\rm{dark}}}\rangle }{\langle \Delta {{r}_{ij}}^{{\rm{dark}},{\rm{dark}}}\rangle }-\\ {\rm{var}}(\Delta {{r}_{ij}}^{{\rm{dark}},{\rm{dark}}})\times \frac{\langle \Delta {{r}_{ij}}^{{\rm{state}},{\rm{dark}}}\rangle }{{\langle \Delta {{r}_{ij}}^{{\rm{dark}},{\rm{dark}}}\rangle }^{3}}+\frac{{\rm{cov}}(\Delta {{r}_{ij}}^{{\rm{state}},{\rm{dark}}},\Delta {{r}_{ij}}^{{\rm{dark}},{\rm{dark}}})}{{\langle \Delta {{r}_{ij}}^{{\rm{dark}},{\rm{dark}}}\rangle }^{2}}\end{array}$$ where ⟨ X ⟩ is the mean of set X , var( X ) is the variance of set X and cov( X , Y ) is the covariance of two sets, X and Y . The resulting error-weighted mean ratios are shown in Fig. 4a–c and Extended Data Fig. 7 in which movements are coloured from grey (movements ≤80% of the maximum ratio) to red (movements ≥95% of the maximum ratio).
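The expression above translates directly into code; this sketch implements it term by term for two sets of pairwise Cα distances (the use of sample statistics, ddof = 1, is an assumption):

```python
# Term-by-term implementation of the error-weighted mean ratio defined in
# the text, for paired sets of Calpha displacements dr_ij(state, dark) and
# dr_ij(dark, dark) from the resampled refinements.
import numpy as np

def error_weighted_mean_ratio(dr_state_dark, dr_dark_dark):
    x = np.asarray(dr_state_dark, dtype=float).ravel()
    y = np.asarray(dr_dark_dark, dtype=float).ravel()
    mx, my = x.mean(), y.mean()
    return (mx / my
            - y.var(ddof=1) * mx / my ** 3       # variance penalty term
            + np.cov(x, y)[0, 1] / my ** 2)      # covariance term

# For noise-free displacements that are exactly doubled, the ratio is 2.
print(error_weighted_mean_ratio([1.0] * 10, [0.5] * 10))
```

Evaluating this ratio for every Cα atom and time point produces the per-residue values that are mapped to the grey-to-red colour scale of Fig. 4a–c.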
Full-occupancy structural refinement avoided systematic bias in this analysis arising from partial-occupancy structural refinement with a single dark conformation held fixed, but at the cost of underestimating the magnitude of light-induced conformational changes. Despite this limitation, this analysis extracted recurring structural motions that evolve with time (Fig. 4a–c , Extended Data Fig. 7 and Supplementary Video 3 ) and in a manner that is both consistent with the known timescales of the electron-transfer reactions (Fig. 1 ) and theoretical predictions (Fig. 4d–f and Supplementary Video 4 ). Tests of the statistical significance of recurring difference electron density features For each of the seven experimental difference Fourier electron density maps (∆ t = 1 ps, 5 ps (dataset a), 5 ps (dataset b), 20 ps, 300 ps (dataset b), 300 ps (dataset a), 8 μs) a lower pedestal of 3.0 σ was applied such that all electron density with an amplitude <3.0 σ was set to zero. Both positive and negative difference electron densities were then integrated within a 4.5 Å radius sphere around a chosen coordinate (Extended Data Fig. 3j ) as described for the analysis of TR-SFX data recorded from bacteriorhodopsin 20 . These positive ( A + ) and negative ( A − ) integrated difference electron density amplitudes were merged to yield a single amplitude according to: \(A({\bf{r}})=\sqrt{{({A}^{+})}^{2}+{({A}^{-})}^{2}}\) around the centre of integration r . The results of this analysis are presented in Fig. 2f in which the six centres of integration, r , are chosen as: the centre of the BPh M ring; the magnesium atom of BCh M ; the mid-point between the two magnesium atoms of the two SP bacteriochlorophylls; the magnesium atom of BCh L ; the centre of the BPh L ring; and the centre of the ketone-containing six-carbon ring of menaquinone Q A . 
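The pedestal, integration and merging of difference electron density amplitudes described above can be sketched as follows, assuming the map has been interpolated onto grid points with known coordinates in Å:

```python
# Sketch of the integrated difference-density amplitude: apply the 3.0-sigma
# pedestal, integrate positive and negative density within a 4.5 Å sphere
# around a chosen centre, and merge the two as A = sqrt(A+^2 + A-^2).
import numpy as np

def merged_amplitude(density, coords, centre, sigma, radius=4.5, pedestal=3.0):
    """density: 1-D array of map values; coords: (n, 3) grid coordinates."""
    rho = np.where(np.abs(density) < pedestal * sigma, 0.0, density)
    inside = np.linalg.norm(coords - centre, axis=1) <= radius
    rho_in = rho[inside]
    a_pos = rho_in[rho_in > 0].sum()             # integrated positive density
    a_neg = -rho_in[rho_in < 0].sum()            # integrated negative density
    return float(np.sqrt(a_pos ** 2 + a_neg ** 2))

# Toy map: two grid points near the centre survive the pedestal; a third
# strong feature lies outside the 4.5 Å integration sphere and is ignored.
coords = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [10.0, 0.0, 0.0]])
density = np.array([5.0, -4.0, 100.0])
print(merged_amplitude(density, coords, np.zeros(3), sigma=1.0))
```

Running this once per cofactor centre and per time point yields the amplitude sets analysed in Fig. 2f.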
For tests of statistical significance (Extended Data Table 3 ), this set was complemented by the addition of amplitudes extracted by integration around the iron atoms of haem 1 , haem 2 , haem 3 and haem 4 to create a set of ten amplitudes for each of the seven time points: $$\begin{array}{l}[A({{\rm{BPh}}}_{{\rm{M}}}),A({{\rm{BCh}}}_{{\rm{M}}}),A({\rm{SP}}),A({{\rm{BCh}}}_{{\rm{L}}}),A({{\rm{BPh}}}_{{\rm{L}}}),A({{\rm{Q}}}_{{\rm{A}}}),A({{\rm{H}}}_{1}),\\ A({{\rm{H}}}_{2}),A({{\rm{H}}}_{3}),A({{\rm{H}}}_{4}){]}_{\Delta t}\end{array}$$ arranged as a 10 × 7 element matrix. Control ‘noise-only’ F o (dark) − F o (dark) isomorphous difference Fourier electron density maps were calculated by first selecting sixteen resampled datasets from the set of 100 generated from the 2015 Bv RC dark data, and sixteen resampled datasets from the set of 100 generated from the 2016 Bv RC dark data. Eight F o (dark) − F o (dark) isomorphous difference Fourier electron density maps were then calculated by pairwise comparisons between the sixteen resampled datasets of the 2015 data, and another eight difference Fourier electron density maps were calculated by pairwise comparisons of the sixteen resampled datasets of the 2016 data. Seven control difference Fourier electron density maps were then randomly selected from the set of sixteen noise-only maps, difference electron density values with an amplitude lower than 3 σ were set to zero, and a set of ( A ( r , dark − dark)) was created by integrating the remaining difference electron density within a 4.5 Å radius sphere centred on the Bv RC cofactors as described above. A two-sample t -test was then performed in MATLAB to determine whether the set of seven time-dependent amplitudes ( A ( r , ∆ t )) and the set of seven noise-only amplitudes ( A ( r , dark − dark)) were indistinguishable from one another (the null hypothesis). 
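A single such comparison can be sketched with a pooled-variance two-sample t statistic (the MATLAB ttest2 default); the amplitude values below are invented for illustration only:

```python
# Sketch of one two-sample t-test comparing seven time-dependent integrated
# amplitudes A(r, dt) with seven noise-only amplitudes A(r, dark-dark).
# The pooled-variance t statistic is computed directly.
import math

def two_sample_t(a, b):
    """Pooled-variance two-sample t statistic and degrees of freedom."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)    # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)  # pooled variance
    t = (ma - mb) / math.sqrt(sp2 * (1.0 / na + 1.0 / nb))
    return t, na + nb - 2

a_time = [5.1, 6.0, 7.2, 6.1, 5.4, 6.3, 7.0]     # invented A(r, dt) values
a_noise = [1.2, 1.0, 2.1, 1.1, 1.9, 1.3, 1.0]    # invented noise amplitudes
t, df = two_sample_t(a_time, a_noise)
# A large |t| at df = 12 argues against the null hypothesis that both sets
# share the same mean; conversion of t to a P value was done in MATLAB.
```

Repeating this test over many random draws of the control amplitudes, as described next, guards against the conclusion depending on any one noise-only map combination.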
The t -tests were then repeated 1,000 times by randomly selecting a different combination of seven control amplitudes ( A ( r , dark − dark)) from the sixteen noise-only difference Fourier electron density maps calculated above (of 16!/(9! × 7!) = 11,440 possible different combinations of the 16 control maps). The results of this analysis are summarized in Extended Data Table 3 and show that, when a threshold of P ≤ 0.001 is applied, the difference electron density amplitudes associated with the SP cannot be ascribed to noise. When a threshold of P ≤ 0.0125 is applied and the last three time points (∆ t = 300 ps (dataset a), 300 ps (dataset b), 8 μs) are examined as a set, the difference electron density amplitudes associated with the SP, BCh L and Q A cannot be ascribed to noise. Conversely, the set of difference electron density amplitudes associated with most other cofactors, as well as all sets of difference electron density amplitudes generated from noise-only maps, are indistinguishable from noise according to the results of these two-sample t -tests (Extended Data Table 3 ). QM/MM geometry optimizations The initial coordinates were taken from PDB entry 5O4C and missing residues and cofactor segments were retrieved from PDB entry 1PRC 62 . Protonation states of residues were chosen based on their reference pK a values and structural criteria such as hydrogen bond interactions. After the addition of protons to the structure, a 200-step steepest descent geometry optimization was performed with Gromacs 4.5 63 to relax these coordinates. During this optimization the positions of the heavy atoms were constrained to their positions in the X-ray structure. As in previous work, the interactions were modelled with the Amber03 force field 64 , 65 . Non-bonded Coulomb and Lennard–Jones interactions were evaluated without periodic boundary conditions and using infinite cut-offs. 
After relaxing hydrogens with MM optimization, we performed several QM/MM geometry optimizations of all atoms in the reaction centre, using the interface between the TeraChem quantum chemistry package 66 , 67 and Gromacs 4.5 63 . These optimizations were also performed without periodic boundary conditions and with infinite cut-offs for the Coulomb and Lennard–Jones interactions. The QM subsystems (Extended Data Fig. 6a–c ) were modelled with unrestricted density functional theory (DFT). We used the PBE0 functional 68 in combination with the LANL2DZ basis set 69 for these DFT calculations. Empirical corrections to dispersion energies and interactions were introduced with Grimme’s DFT-D3 model 70 . The remainder of the protein, including crystal water molecules, was modelled with the Amber03 force field 64 , 65 , in combination with the TIP3P water model 71 . We searched for minimum-energy geometries in all relevant oxidation states of the system using the limited-memory Broyden–Fletcher–Goldfarb–Shanno (L-BFGS) quasi-Newton optimization algorithm. The goal of these optimization steps was to characterize the structural relaxation of the protein in response to changes in the electronic states of the cofactors along the A-branch of the photo-induced electron-transfer process. We therefore examined the following electronic configurations: (1) all cofactors in their resting states (SP 0 BPh L 0 Q A 0 ); (2) special pair photo-excited, other cofactors in resting state (SP*BPh L 0 Q A 0 ); (3) special pair photo-oxidized, BPh L reduced (SP + BPh L − Q A 0 ); and (4) special pair photo-oxidized, Q A reduced (SP + BPh L 0 Q A − ). Because including all cofactors into one large QM region is computationally too demanding, we performed the optimizations with a different QM subsystem for each cofactor, including the nearest residues, in all relevant electronic states. The structural response to the change in electronic state (Fig.
4d–f ) was obtained by comparing the optimized geometries and potential energies in the various oxidation states. To quantify the effect of photo-absorption by the special pair (SP, configuration 1 to 2), we first optimized the resting state with the SP and nearby residues in the QM region (Extended Data Fig. 6a ), described at the PBE0/LANL2DZ level of theory plus D3 dispersion corrections. This structure was used as a reference for the optimized structures in the excited state (SP*, configuration 2) and after photo-oxidation (SP + , configuration 3). Using the same QM/MM subdivision, we optimized the system in the first singlet excited state ( S 1 ) by switching the QM description to the time-dependent DFT within the Tamm–Dancoff approximation 72 , and in the photo-oxidized state by switching the spin state of the electronic wave function to the lowest energy doublet state ( D 0 ). In the QM/MM optimization of the D 0 state of the SP, we modelled the BPh L with point charges representing the reduced state of that cofactor. Only very modest protein structural changes were associated with the optimized geometries with the SP in the S 1 and D 0 states relative to the reference structure in the resting state ( S 0 ). Similarly, we also optimized the geometry of the protein with the BPh L and nearby residues in the QM region (Extended Data Fig. 6d ) in both the lowest energy singlet ( S 0 , configuration 1) and doublet ( D 0 , reduced, configuration 3) states. In the optimization of the D 0 state of BPh L , the partial charges on the SP were changed to reflect its photo-oxidized ( D 0 , SP + ) state. Again the structural response is rather minor, as the geometries are very similar (Extended Data Fig. 6e ). In the next step of the electron-transfer process, the electron transfers from BPh L to Q A (configuration 4). We optimized the protein with Q A and its immediate environment, including the non-haem Fe 2+ site, in the QM region. 
The optimized structures in the resting and reduced states are compared in Extended Data Fig. 6f . Reduction of Q A from menaquinone to (deprotonated) semiquinone induces considerable structural changes in the Q A binding pocket. In line with the difference densities observed at 300 ps after photo-excitation, the hydrogen bond between the Q A carbonyl and His217 M shortens by 0.17 Å. We suggest that the shortening of this hydrogen bond helps to stabilize the negative charge on the Q A . To quantify the overall structural response to the electron transfers, we computed the displacements of the atoms in the various states (configurations 2–4) with respect to the structure of the resting state (configuration 1) and recorded these displacements as B-factors in the PDB coordinate file of the resting state. Because only one cofactor was included in the QM region of our QM/MM optimizations, we summed the displacements of both QM/MM optimizations of each redox state. The amplitudes of these displacements are represented using colour in Fig. 4d–f . Stabilization energies To estimate the energetic effects of the protein structural changes on the electron-transfer process, we computed the adiabatic and vertical electron affinities for Q A in isolation and in the optimized QM/MM protein models. These energies are shown schematically in Extended Data Fig. 6g, h . For the neutral Q A in vacuum, the electron affinity without structural relaxation is 164.5 kJ mol −1 (vertical electron affinity). Structural relaxation in response to adding the electron increases the affinity further by 24 kJ mol −1 , so that the energy difference between the neutral reactant minimum and the reduced product minimum is 188.5 kJ mol −1 (adiabatic electron affinity). The calculated adiabatic electron affinity is in good agreement with results from previous computations 29 , but is an overestimation with respect to the experimental value for the related 1,4-naphthoquinone (175 kJ mol −1 ) 73 .
Inside the protein environment, the vertical electron affinity is much higher (258 kJ mol −1 ), part of which we attribute to the electrostatic interaction between the reduced Q A cofactor and the positively charged Fe 2+ ligand site. Structural relaxation of both the Q A cofactor and the protein environment increases the electron affinity by 60 kJ mol −1 to yield an adiabatic electron affinity of 318 kJ mol −1 . Thus, the results of the computations suggest that the structural response of the protein adds another 36 kJ mol −1 to the intrinsic relaxation energy of Q A (24 kJ mol −1 in vacuum) as concluded in previous computations 29 . We note that in this analysis we focused only on the effect of the structural response on the affinity of Q A . To estimate the total reaction energy associated with the photo-induced electron-transfer process from the SP to Q A , we also need the absolute energies of the neutral, photo-excited and oxidized states of the SP as well as the neutral and reduced states of BPh L . However, as these energies were not computed with identical QM/MM setups, we do not provide an accurate estimate here. Reporting summary Further information on research design is available in the Nature Research Reporting Summary linked to this paper. Data availability Atomic coordinates and structure factors have been deposited in the Protein Data Bank. PDB ID codes are as follows: 5O4C , dark conformation (dataset a); 5NJ4 , dark conformation (dataset b); 6ZHW , time point Δ t = 1 ps; 6ZI4 , time point Δ t = 5 ps (dataset a); 6ZID , time point Δ t = 5 ps (dataset b); 6ZI6 , time point Δ t = 20 ps; 6ZI5 , the time point Δ t = 300 ps (dataset a); 6ZI9 , time point Δ t = 300 ps (dataset b); 6ZIA , time point Δ t = 8 μs. Difference Fourier electron density maps and stream files containing X-ray diffraction intensities are deposited at the CXI database ( ) with identification number 161. Source data are provided with this paper. 
Code availability Software used for SVD analysis is available at . Code written in MATLAB to analyse difference electron-density amplitudes is available at . Software associated with the resampling of X-ray diffraction data is available at . The Gromacs 4.5 version linked to TeraChem for QM/MM optimization is available for download from .
Photosynthesis is the primary source of energy for almost all life on earth. A new study, published in Nature, provides new insight into how evolution has optimized the light-driven movements of electrons in photosynthesis to achieve almost perfect overall efficiency. Almost all life on earth relies on the energy-transducing reactions of photosynthesis as its primary source of energy. These light-driven reactions occur in plants, algae and photosynthetic bacteria. An X-ray structure of a protein provides scientists with a lot of information as to how it performs its biological task in a living cell. X-ray films show structural changes within a protein In this work, scientists used a method called time-resolved X-ray crystallography to make a movie of structural changes within the protein responsible for the light-driven chemical reactions of photosynthesis. To achieve this, scientists at the University of Gothenburg used a world-leading X-ray source in California (an X-ray free-electron laser) to examine whether structural rearrangements within photosynthetic proteins occur in the time it takes light to cross the width of a human hair. Remarkably, these measurements showed that the protein changes structure on this time-scale. Subtle movements in the protein were seen Scientists at the University of Gothenburg observed that these movements were very subtle, with both the electron donor (a chemical group which absorbs light and releases an electron) and the electron acceptor (a chemical group which is located 2 nm away and which receives this electron) moving less than 0.03 nm (1 nm = 10⁻⁹ m, or a millionth of a millimetre) in 300 ps (1 ps = 10⁻¹² s, a picosecond, or a millionth of a millionth of a second). The protein as a whole also changed structure very slightly in order to prevent the electron returning to where it began, which would otherwise make the reaction useless.
These results are fundamental to understanding how evolution has optimized energy-transducing proteins over billions of years, allowing them to perform redox reactions without energy being lost in the process. "Time-resolved crystallography studies of a photosynthetic protein from bacteria reveal how light-induced electron movements are stabilized by protein structural changes occurring on a time-scale of picoseconds," says Richard Neutze, professor at the University of Gothenburg.
10.1038/s41586-020-3000-7
Other
Experimental work reproduces the knapping process at Olduvai
Ignacio de la Torre et al. Spatial and orientation patterns of experimental stone tool refits, Archaeological and Anthropological Sciences (2018). DOI: 10.1007/s12520-018-0701-z
http://dx.doi.org/10.1007/s12520-018-0701-z
https://phys.org/news/2018-10-experimental-knapping-olduvai.html
Abstract Freehand and bipolar experimental knapping of quartzite from Olduvai Gorge in Tanzania is used to conduct spatial analysis of artefact distributions using GIS techniques, and to investigate the orientation of refit lines using circular histograms. The aim of our study is to discern patterns that can be applied to the archaeological record in two domains, namely the identification of knapping episodes and the utility of refitting line orientations in addressing post-depositional disturbance. Our spatial analysis shows that distinctive clustering patterns can be discerned according to knapping stance, handedness and flaking technique. The circular dispersion of refit lines in the horizontal distribution of bipolar assemblages is strongly patterned, indicating that anisotropy of conjoining sets is inherent to pristine hammer-and-anvil knapping episodes. Introduction The study of spatial distributions in experimental lithic scatters covers a range of topics, including knapper handedness (Bargalló et al. 2017 ), knapper stance (Newcomer and de Sieveking, 1980 ; Barton and Bergman 1982 ; Schick 1984 ; Fischer 1990 ; Kvamme 1997 ), reduction strategy and raw material (Kvamme 1997 ), hammer type (Newcomer and de Sieveking, 1980 ; Kvamme 1997 ) and post-depositional processes (Bowers et al. 1983 ; Gifford-González et al. 1985 ; Schick 1984 ; Nielsen 1991 ; Texier et al. 1998 ; Lenoble et al. 2008 ; Bertran et al. 2015 ; Driscoll et al. 2016 ). These studies have produced relevant observations on the spatial distribution of experimental scatters, but quantitative results based on systematic GIS analyses have yet to be produced. This lack of quantification in experimental assemblages applies both to density patterns and to the refitting of the lithic sets produced. Analysis of orientation patterns in archaeology is traditionally linked to the study of post-depositional processes (e.g.
Isaac 1967 ; Schick 1984 ; Kreutzer 1988 ; Pope 2002 ; Lenoble and Bertran 2004 ; Boschian and Saccà, 2010 ; Benito-Calvo and de la Torre 2011 ; Benito-Calvo et al. 2009 , 2011 ; Sánchez-Romero et al. 2016 ; de la Torre and Wehr 2018 ; McPherron 2018 ), and refitting is widely recognised as a powerful tool to disentangle site formation (e.g. Cahen 1980 ; Villa 1982 , 2004 ; Hofman 1986 ; Bordes 2003 ; Deschamps and Zilhão, 2018 ). Nonetheless, most studies concerned with taphonomic processes have addressed the vertical dimension of conjoining sets and their stratigraphic implications, while the horizontal dimension has received less attention in archaeological (e.g. Austin et al. 1999 ; Pope 2002 ; Ashton et al. 2005 ; Sisk and Shea 2008 ; Santamaría et al. 2010 ; de la Torre et al. 2014 ; Deschamps and Zilhão, 2018 ) and experimental (e.g. Schick 1984 ) assemblages. This paper aims to contribute to the literature by exploring spatial dynamics of experimental knapping episodes and conclusions that can be drawn from the orientation patterns of refits. The baseline distribution for artefact and refit orientations is often assumed to be random, with the assumption that any variation is the product of natural processes. While that is certainly the case for the orientation of the long axes of items (Toots 1965 ; Nagle 1967 ; Wendt 1995 ), and it is not realistic to envisage undisturbed assemblages where artefact and bone axes are preferentially oriented (de la Torre and Benito-Calvo 2013 ), there is no evidence to suggest this random patterning is inherent to other variables such as refit lines. To address these issues, our work includes the study of refit orientation patterns and the spatial analysis of four knapping experiments on quartzite from Olduvai Gorge in Tanzania. The goal of these experiments was to recreate core-and-flake assemblages through freehand and bipolar knapping techniques. 
Our analysis involves refitting of all experimental sequences, digital mapping of the artefact distribution, spatial analysis of artefact clusters and analysis of the refit orientation patterns. Through this, the study aims to elucidate clustering patterns of experimental lithic scatters, and reflect on the validity of orientation data of refit sets for spatial analysis in Palaeolithic assemblages. Materials and methods This paper involves the study of four knapping experiments (named Exp. 18, Exp. 40, Exp. 54 and Exp. 56) with quartzite (sensu Hay 1976 ) sourced from the Naibor Soit hills at Olduvai Gorge, Tanzania, from a repository within the Lithics Laboratory at the Institute of Archaeology, University College London. The aim of these four experiments was to produce as many flakes as possible from individual block blanks using simple freehand and bipolar knapping techniques. Knapping All experiments were performed by one of us (TP), a right-handed knapper with 9 years of knapping experience, over a 2 × 2 m square cloth laid on top of a tile floor in an enclosed outdoor space. Two of the experiments (Exp. 18 and 40) were conducted using freehand knapping, in which the core was held in the left hand and struck with the hammerstone in the right hand. Exp. 18 was performed standing with feet 80 cm apart, and in Exp. 40 the knapper was kneeling (Fig. 1 ). Both cores were rotated and flipped whenever angles on a plane became too obtuse, but platforms were not prepared. Knapping continued until the core lost suitable flaking angles. Obtained flakes were dropped from the height of the hand. Exhausted cores were placed on the floor below their knapping position. Fig. 1 Experimental refit sets studied in this paper. a , b Exp. 18, freehand standing. c , d Exp. 40, freehand kneeling. e – h Exp. 54 ( e , f ) and Exp. 56 ( g , h ), bipolar kneeling Full size image The other two experiments (Exp. 54 and 56) involved bipolar knapping (Fig. 1 ).
The cores were placed on top of the anvil and stabilised with the left hand, while the right hand struck with the hammerstone at a 90° angle to the platform. Both experiments were performed kneeling to reach the anvil, and once exhausted, cores were left in their flaking position on the anvil. Mapping Knapping sequences were videotaped, and the resulting experimental scatters were photographed from an orthogonal view as well as from multiple angles, with each photograph overlapping by at least 60%. The images were combined to create a high-resolution photo merge using Adobe Photoshop for Exp. 18, 40 and 56, and Agisoft PhotoScan for Exp. 54 (in this case a 3D photogrammetry model was first created and then a 2D orthophoto was extracted). The resulting plans were then used to map in situ all pieces over 2 cm in size within the 2 × 2 m cloth. These pieces were given an individual ID and labelled with a QR code (Table 1 ). Smaller debris were collected per 0.5 m × 0.5 m quadrant. Table 1 Assemblage composition and main features of the refitted artefacts Full size table The photo maps of knapping scatters were imported into ArcGIS 10 as raster images and then georeferenced. The position of the knapper in each experiment was recorded and used as an arbitrary north. Polygon shapefiles were created along the outlines for each numbered piece and, where appropriate, the anvil. The centroid of each shapefile as designated by ArcGIS was used as the x , y coordinate for each piece. Refitting All material that received a unique identifier was measured (maximum length, width, thickness and weight), classified technologically and subjected to refit analysis. Refitting of Olduvai quartzite is notoriously difficult even in experimental settings (Proffitt and de la Torre 2014 ; Byrne et al. 2016 ), and thus the videos, images of the pre-modified blocks and photo maps were used to help identify the sequence position of certain pieces.
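The centroid assignment described above is handled internally by ArcGIS; as an illustrative stand-in (not part of the authors' workflow), the standard area-weighted "shoelace" centroid of a digitised outline can be computed directly. The rectangular flake outline below is hypothetical.

```python
# Area-weighted centroid of a simple (non-self-intersecting) polygon given as
# a list of (x, y) vertices, via the shoelace formula. This mimics the (x, y)
# coordinate ArcGIS assigns to each artefact outline shapefile.

def polygon_centroid(vertices):
    a = 0.0            # accumulates twice the signed area
    cx = cy = 0.0
    n = len(vertices)
    for i in range(n):
        x0, y0 = vertices[i]
        x1, y1 = vertices[(i + 1) % n]
        cross = x0 * y1 - x1 * y0
        a += cross
        cx += (x0 + x1) * cross
        cy += (y0 + y1) * cross
    a *= 0.5
    return cx / (6.0 * a), cy / (6.0 * a)

# Hypothetical 40 x 20 mm rectangular flake outline centred on (20, 10)
flake = [(0, 0), (40, 0), (40, 20), (0, 20)]
print(polygon_centroid(flake))  # (20.0, 10.0)
```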
Still, conjoining of freehand experiments took an average of 9 h each. Due to large amounts of crushing on the bipolar sequences, refitting was even more laborious in the case of Exp. 54 and 56, and took an average of 13 h each. The analysis of refit sets was largely based on Cziesla ( 1990 ) and De Loecker et al. ( 2003 ). The availability of videos, photogrammetry and the completeness of the assemblages enabled us to produce a more refined Harris Matrix for each refit set, in which each stage consists of a single flake or fragment set (i.e. refitting fragments resulting from a single strike) (Hiscock 1986 ), and to record the distance and orientation between successive flakes (section D in Figs. 2 , 3 , 4 and 5 ). Two additional options to represent refit lines were used to analyse the assemblages; one was to visualise all pieces as ‘projectiles’ originating from the core to show the spatial dispersion of the pieces (section B in Figs. 2 , 3 , 4 and 5 ). In addition, we produced maps with bi-directional refit lines between dorsal-ventral, transverse and longitudinal refits (section F in Figs. 2 , 3 , 4 and 5 ). Once such conjoining sequences were established for all four experiments, they were placed into an ArcGIS add-in module that we scripted to analyse refit sets spatially. Fig. 2 Maps ( a – b , d , f ) and circular histograms ( c , e , g ) of Exp. 18. a All plotted artefacts. b , c Distribution of refitting artefacts from the core. d , e Directionality of the flaking sequence within the refit set. f , g Map and orientation of refit lines Full size image Fig. 3 Maps ( a , b , d , f ) and circular histograms ( c , e , g ) of Exp. 40. a All plotted artefacts. b , c Distribution of refitting artefacts from the core. d , e Directionality of the flaking sequence within the refit set. f , g Map and orientation of refit lines Full size image Fig. 4 Maps ( a , b , d , f ) and circular histograms ( c , e , g ) of Exp. 54. a All plotted artefacts. 
b , c Distribution of refitting artefacts from the core. d , e Directionality of the flaking sequence within the refit set. f , g Map and orientation of refit lines Full size image Fig. 5 Maps ( a , b , d , f ) and circular histograms ( c , e , g ) of Exp. 56. a All plotted artefacts. b , c Distribution of refitting artefacts from the core. d , e Directionality of the flaking sequence within the refit set. f , g Map and orientation of refit lines Full size image Spatial analysis Average nearest neighbour (ANN), Getis–Ord General G and Global Moran’s I were applied to identify clustered, uniform or dispersed patterns. Ripley’s K Function was used to test if clustering changed over a range of distances and Global Moran’s I to establish the distance of maximum clustering. Identification of clusters was made using Getis–Ord Gi* statistics (Sánchez-Romero et al. 2016 ). The quantitative variables used in Gi* were frequency, weight and length of pieces, using the inverse Euclidean distance as the spatial relationship between artefacts. Getis–Ord and Global Moran’s statistics for frequency of artefacts followed the quadrat method, where counting of pieces was conducted in quadrats of 125 mm, which is twice the size of the mean area per piece, considering the maximum area of dispersion for artefacts of all the experiments. Artefact frequency was also analysed through kernel density maps. Orientation patterns An add-in for ArcGIS was scripted to calculate distances between conjoining artefacts, and also the orientation of the refit lines, based on trigonometric relationships in a Cartesian coordinate system. This script calculates both the direction and azimuth (where sequencing can be ascertained). The orientation data obtained with this ArcGIS module were then plotted into Rose diagrams with GeoRose software. The circular dispersion of displacement vectors was characterised using circular histograms and circular descriptive statistics (Fisher 1995 ).
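The trigonometric step for distances and azimuths between conjoining artefacts can be sketched as follows. This is an illustrative reimplementation, not the authors' ArcGIS add-in; the coordinates are hypothetical, and azimuths are measured clockwise from the arbitrary north (the paper uses the knapper's position as north).

```python
import math

# Distance and azimuth between two conjoining artefacts from their Cartesian
# (x, y) centroids, as needed for Rose diagrams of refit orientations.

def refit_vector(p_from, p_to):
    dx = p_to[0] - p_from[0]
    dy = p_to[1] - p_from[1]
    distance = math.hypot(dx, dy)
    # atan2(dx, dy) gives the angle clockwise from the +y axis (north)
    azimuth = math.degrees(math.atan2(dx, dy)) % 360.0
    return distance, azimuth

# Directed core-to-flake displacement (polar data):
# core at (0, 0), flake at (300, 300) mm -> 424 mm towards the NE (45 deg)
dist, az = refit_vector((0.0, 0.0), (300.0, 300.0))
print(round(dist), round(az))

# Refit lines without a known sequence are axial data: an azimuth and its
# opposite are equivalent, so they are folded into 0-180 degrees.
axial = az % 180.0
```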
To estimate the reference direction of the assemblages, we used the mean direction (mean Cartesian vector unit) and the modal direction (the direction of maximum concentration of data). The modulus of the mean vector (R) was used as an index of dispersion (Benito-Calvo et al. 2009 , 2011 ), varying between 1 (when all vectors are coincident) and 0 (when the dispersion is high). Since R is not a useful indicator of data spread unless the data follow a unimodal distribution, we first combined a unimodal statistical test (Rayleigh’s test) with omnibus tests (Rao’s spacing, Watson’s and Kuiper’s tests) to corroborate that the data followed a mainly unimodal distribution (Benito-Calvo and de la Torre 2011 ; de la Torre and Benito-Calvo 2013 ). Concentration was also estimated using the concentration parameter K (Fisher 1995 ), which measures the departure of the distribution from a perfect circle (uniform distribution). Circular histograms in 15° (Figs. 2 – 5 ) and 5° (Fig. 8 ) bins and descriptive statistics were also calculated weighting the vectors and axes according to the distance between artefacts and to the weight of pieces. Statistics were calculated using Oriana 3.13 and SpheriStats 3.1. Results Spatial distribution of artefacts and refit lines The Rose diagram of Exp. 18 shows that about half of the pieces are located within a 75° interval to the southeast of the core (Fig. 2 c), with the rest evenly distributed in all directions. The mean distance between products and the core is 343 mm, and the longest is 761 mm (Table 1 ). Analysis of the sequence (28 stages) shows that the majority of products landed to the northwest or south of the previous removal (Fig. 2 e). More than 75% of the stages are less than 600 mm away from the preceding product (Fig. 6 b). Refits have a mean distance of 385 mm (Table 1 ), and longer refit lines produce a largely elongated NW–SE trend (Fig. 2 g). Figure 3 c shows that most artefacts in Exp.
40 occur in a 90° interval to the northeast of the core. More than 75% of core-product distances (Fig. 3 b) are less than 400 mm, with outliers at 681 mm and 886 mm (Fig. 6 a and Table 1 ). The knapping sequence (27 stages; Fig. 3 d) shows a general trend of removals to move towards the east diagonally, and almost every other stage is to the North or South of its previous and next stage, forming a N–S trend (Fig. 3 e). This is corroborated by the refit orientation circular histogram (Fig. 3 g), which also indicates an additional trend from NE to SW. Many of the refits are less than 400 mm from one another (Fig. 6 c), thus following the pattern of core–product distances. In the case of Exp. 54, most pieces are within a 120° interval east from the core (Fig. 4 c). Core–flake distances range between 72 and 1162 mm (Fig. 6 a). There are 21 stages in the sequence, which shows an E–W trend (Fig. 4 e). Most distances between successive stages are less than 600 mm, although there is a larger amount of variation in the lower end of the distribution than in freehand experiments. The Rose diagram of refits (Fig. 4 g) shows a strong unimodal pattern along the E–W axis. Distances between conjoining pieces present a normal distribution, and are all ≤ 970 mm, with a mean of 475 mm (Table 1 ; Fig. 6 c). As shown in Fig. 5 c, most Exp. 56 artefacts are clustered in a Rose diagram to the NE of the core, in a narrow range of 75°. This reduction sequence contains 18 stages, where successive products are distributed randomly with respect to the previous detachment (Fig. 5 d), with no pattern except a singular trend towards 240°–255°. The unimodal and symmetrical NE to SW distribution (Fig. 5 g) may be linked to clustering near the anvil. On the other hand, one-to-one refit distances (mean = 467 mm; see Table 1 ) show a wider variation than the other experiments (Fig. 6 c). Fig. 
6 Distances of a refitted artefacts to the core, b consecutive stages from the reduction sequence, c conjoining artefacts, d products to core Full size image Cluster analysis ANN results show that the average distance in all experiments is clearly lower than the average distance in a hypothetical random distribution, indicating that materials follow a clustered distribution (likelihood higher than 99%; Table 2 ). This is supported by Global Moran’s statistics and the Getis–Ord method, indicating a dominance of high-concentration clusters. Ripley’s K Function shows that clustering is maintained up to a distance of 1384–1396 mm, beyond which a dispersed pattern dominates (Table 2 ). Maximum clustering is reached at a distance of 380 mm for Exp. 18, and 464 mm for the rest of the experiments. Table 2 Spatial pattern statistics of the experimental assemblages Full size table Figure 7 shows kernel maps of density and dispersion patterns of artefact frequency for each experiment. Exp. 18 has the lowest mean density, and Exp. 54 the highest mean density (Table 3 ). Exp. 40 has the highest maximum density (0.08881), which determines the highest range of density data, with a standard deviation of 0.00891. Exp. 18 contains the lowest maximum density (0.05862) and the lowest standard deviation (0.00548; Table 3 ). This indicates that Exp. 54 and Exp. 56 share similar density patterns, whereas Exp. 18 and Exp. 40 have a higher variability and more extreme values. Dispersion of artefact frequency shows the highest concentration in front of the knapper for Exp. 40, and also in front, but slightly displaced to the right, for Exp. 18. The highest concentration areas can be clearly distinguished in the density maps (Fig. 7 a), and are also detected as hot spot clusters by Gi* statistics with a 95% level of confidence (Fig. 7 b). Fig. 7 Kernel density analysis and mapping of clusters using Getis–Ord Gi* statistics. a Kernel density maps.
Results of Gi* statistics: b hot spots detected using the frequency of pieces (counting of pieces by quadrats of 125 mm) and inverse Euclidean distance. c Hot spots detected using piece length and inverse Euclidean distance. d Hot spots detected using piece weight and inverse Euclidean distance Full size image Table 3 Kernel density statistics of artefact maps Full size table Both Exp. 18 and Exp. 40 also show an area of low artefact concentration that forms a distal arc from right to the left of the knapper. In Exp. 18, this low concentration area shows a scattered pattern, whereas in Exp. 40 the low concentration area has a ring shape and is separated from the highest concentration area by a discontinuous strip with no material. Areas of low concentration do not constitute statistically significant cold spots, according to the Gi* method. The highest concentration areas in Exp. 54 and Exp. 56 are also defined by density maps and Gi* hotspots, and are strongly patterned to the right of the knapper. Getis–Ord Gi* statistics (Fig. 7 c, d) detected only hot spots or statistically significant concentrations of pieces with high length and weight values. For length, Exp. 18 and Exp. 40 show hot spots in front of the knapper which are very concentrated. Conversely, length patterns of Exp. 54 and Exp. 56 are defined by hot spots to the right of the knapper, and show a more dispersed pattern, suggesting a higher dispersion of large pieces. This dual pattern is less obvious in the weight variable. Exp. 18 and Exp. 40 again show concentrated hot spots in front of the knapper, and Exp. 54 presents dispersed hot spots towards the right of the knapper. However, weight hot spots in Exp. 56 show a low dispersion: they are located to the right of the knapper, but are concentrated. Hot spot descriptive statistics show low values for Exp. 
18 (Table 4 ), indicating the presence of smaller pieces in this assemblage; for example, hot spots reaching a 95% confidence include pieces with a mean length of 98–110 mm for Exp. 40, Exp. 54 and Exp. 56, but only of 67 mm for Exp. 18 (Table 4 ). Similarly, weight hot spots in Exp. 18 have a mean weight of 33 g, against 41–47 g for the other three experiments. No cold spots or statistically significant concentrations of pieces with low length or weight were detected in any experiment. Table 4 Hot spot descriptive statistics calculated for artefact frequency, size (i.e. artefact area) and weight, at 95% confidence interval Full size table Circular dispersion of conjoining sets Core-to-flake displacements Statistical tests of core-to-flake displacements show high statistical significance ( p < 0.01) in all cases. This allows us to reject the uniform distribution (Table 5 ) and indicates the presence of preferred orientations in the four experimental assemblages. Since omnibus tests reject uniformity against unimodal and multimodal distributions, and the Rayleigh test detects only unimodal orientations, all the circular core-to-flake distributions can essentially be considered unimodal preferred orientations. Table 5 Circular statistics of refit orientations: (A) core-to-flake displacements (polar data); (B) refit lines (axial data) Full size table The circular distribution of mean direction in the core-to-flake displacements of Exp. 18 and Exp. 40 indicates some significant differences (Fig. 8 ). While azimuths in Exp. 18 are mainly concentrated between 80 and 150° (Table 5 ), with a mean direction of 108.5° and a modal direction between 145 and 150°, the vectors in Exp. 40 are located mainly in the first quadrant, with a mean direction of 49.9° and a mode of 50–55° (Table 5 ). On the other hand, Exp. 54 and Exp. 56 display similar mean directions (83.5° and 67.2°, respectively), which overlap at the 95% confidence interval (Fig. 8 ).
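The mean direction and the dispersion index R reported throughout these results can be computed from first principles. A minimal sketch (not Oriana or SpheriStats), including the conventional angle-doubling for axial refit-line data; the input angles are illustrative.

```python
import math

# Circular descriptive statistics: mean direction and mean resultant length R
# (1 = all vectors coincident, 0 = maximal dispersion). Axial data (refit
# lines) are doubled before averaging and the mean is halved afterwards,
# the standard trick for undirected orientations.

def circular_mean_R(angles_deg, axial=False):
    k = 2.0 if axial else 1.0
    xs = [math.cos(math.radians(k * a)) for a in angles_deg]
    ys = [math.sin(math.radians(k * a)) for a in angles_deg]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    r = math.hypot(mx, my)
    mean = (math.degrees(math.atan2(my, mx)) % 360.0) / k
    return mean, r

# Tightly clustered polar data (hypothetical azimuths) -> R close to 1
mean, r = circular_mean_R([80, 90, 100])
print(round(mean), round(r, 2))

# Rayleigh statistic Z = n * R**2; large Z rejects uniformity
z = 3 * r ** 2
```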
Their modal directions are also very similar (90–95° for Exp. 54 and 80–85° for Exp. 56, Table 5 and Fig. 8 ), and the concentration of the data (indicated by R and K) is higher than in Exp. 18 and Exp. 40. The most concentrated data are found in Exp. 54 (R = 0.81, K = 2.99), while Exp. 18 contains the highest dispersion (R = 0.44, K = 0.97). Fig. 8 Circular histograms of refit orientations, considering unfiltered data (i.e. no weighting), data weighted by distance and data weighted by weight. a Histograms of core-to-flake displacements (polar data). b Histograms of refit lines (axial data) Full size image Descriptive statistics also included weighting the displacement vectors by the distance covered by each piece from the core and by the artefact weight (Fig. 8 a, Table 5 ). Results show that the mean direction does not vary greatly from the unfiltered data. Exp. 18 is where the mean changed the most, varying by close to 15° between weighted and unweighted data. On the other hand, the mean direction did not change significantly in Exp. 54 and Exp. 56, which show variations of only 2–3°. The modal direction is more variable, but with no similar pattern shared by all experiments (Table 5 ). Regarding dispersion, weighted statistics show an increase in concentration in all experiments, except for the data weighted by distance in Exp. 18, where the concentration is slightly reduced with respect to the unweighted statistics. In general, data weighted by artefact mass show more concentrated parameters than data weighted by distance, except in Exp. 56 (Table 5 ). Refit lines Clear differences in the axial data of refit lines exist between freehand and bipolar experiments. Statistical results of freehand knapping experiments do not allow rejecting the null hypothesis of uniformity with a > 95% confidence interval. Minimum p values are obtained for Exp.
18, where Rayleigh’s test and Rao’s Spacing test only reach p = 0.073 (92.7% confidence interval), and Exp. 40’s statistical significance is even lower ( p > 0.15). Therefore, no solid preferred orientation can be proposed for freehand knapping refit lines. Conversely, bipolar knapping data (Exp. 54 and Exp. 56) show high statistical significance for the Rayleigh test ( Z > 3.94; p < 0.019) and demonstrate evidence of departure from uniformity in Rao’s Spacing and Watson’s omnibus tests (0.025 < p < 0.005; Table 5 ). These results suggest that bipolar refit lines show strong unimodal preferred orientations. Refit mean directions are different within the freehand experiments (Exp. 18 = 130°, Exp. 40 = 32°; Table 5 ), whereas bipolar refit lines show more consistent mean directions, located around the 90–270° axis (axis 92–272° for Exp. 54 and axis 76–256° for Exp. 56). A similar—although weaker—relationship was observed in the refit line mode, which is more consistent within bipolar experiments than within freehand experiments (Table 5 ). The mode percentage is higher in bipolar (8–10.7%) than in freehand knapping (7.8%) (Table 5 ). The concentration parameters of refit lines also indicate a very dispersed distribution for Exp. 40 (R = 0.11 and K = 0.22) and the highest concentration for Exp. 54 (R = 0.34 and K = 0.72), while Exp. 18 and Exp. 56 have similar intermediate values. However, when refit lines are weighted by the distance (Fig. 8b ), concentration similarities between Exp. 18 and Exp. 56 disappear, although they are still positioned between the end values of Exp. 40 and Exp. 54. Discussion Density patterns Newcomer and de Sieveking ( 1980 ) investigated spatial patterns associated with knapper stance (e.g., standing, seated and sitting on the floor), and observed that the greater the distance from the floor, the larger and more diffused flaking scatters became (see also Fischer 1990 ). 
Schick ( 1984 ) employed a similar perspective and concurred with Newcomer and Sieveking (1980) that standing produced the largest and most diffuse lithic scatters, whilst also producing more elongated patterns whose density declined as the distance to the knapper increased. Lithic scatters produced whilst kneeling, squatting and sitting on the ground shared similar spatial patterning. These were all more densely clustered compared to standing and produced a more circular or oval distribution. Lithic spatial patterning according to varying reduction techniques has also been studied (e.g. Kvamme 1997 ) although, with the exception of Schick ( 1984 ), who touched upon the spatial differences between freehand percussion and floor knapping (a position similar, but not identical, to bipolar knapping), to our knowledge freehand versus bipolar flaking had yet to be directly investigated. Since proximity to the floor correlates with denser concentrations (Newcomer and de Sieveking, 1980 ; Schick 1984 ), it would be expected that bipolar flaking in Exp. 54 and Exp. 56 produced more tightly clustered patterns than freehand knapping in Exp. 18 and Exp. 40. Whilst this may be the case for the smallest debris (whose spatial distribution is not the subject of our study), such an expectation is not entirely reproduced in our results, where > 20 mm pieces have a higher dispersion in bipolar than in freehand experiments (see Fig. 7 ). We propose this is due to the knapper’s lack of control over the products during bipolar flaking; in freehand reduction, the detached product normally rests on the knapper’s hand. However, the hand holding a bipolar core is not in contact with the flaking surface, and products often launch from the core and may travel longer distances from the knapping area, resulting in a more dispersed pattern. Freehand versus bipolar knapping may also inform on spatial patterns of handedness, a subject that is starting to receive attention in the literature (e.g.
Bargalló et al. 2017 ). As shown in Fig. 7 , freehand scatters are largely centred with regard to the knapper’s position, whereas the bipolar products are strongly biased towards the NE. Again, these distinctive patterns are associated with core manipulation; products in a freehand sequence are left by the knapper to drop vertically from the core, whereas in bipolar flaking the core’s handling position blocks two quadrants, and products will land in the sectors associated with the hand that is manipulating the hammerstone. In the case of Exp. 54 and Exp. 56, produced by a right-handed knapper, artefact clusters will thus be located in the NE of the flaking area (Fig. 8 ). Orientation patterns The use of refit lines to address orientation patterns in archaeological assemblages is still uncommon and has been applied essentially to discuss post-depositional processes (e.g. Schick 1984 ; Austin et al. 1999 ; Pope 2002 ; Ashton et al. 2005 ; Sisk and Shea 2008 ; Santamaría et al. 2010 ; de la Torre et al. 2014 ; de la Torre and Wehr 2018 ; Deschamps and Zilhão 2018 ). Since the preferred orientation of archaeological remains is a strong indicator of hydraulic disturbance (e.g. Toots 1965 ; Isaac 1967 ; Schick 1984 ; Lenoble and Bertran 2004 ; Benito-Calvo and de la Torre 2011 ), the underlying assumption in the literature has been that orientation of refit lines should—like the main axis of individual artefacts—inform on the current direction. While we do not wish to challenge this assumption, results presented in this paper enable us to introduce a cautionary note. Our analysis of refit orientations shows that, in a pristine setting, strongly preferred orientations are to be expected for core-to-flake displacements in bipolar and freehand knapping scenarios, and for refit lines in bipolar knapping episodes. In the case of freehand knapping episodes, refit lines produce random distributions or weakly orientated patterns. 
Therefore, in archaeological contexts with good conditions of preservation, knapping episodes may produce circular histograms of refit lines that, albeit strongly orientated in some cases, are unrelated to post-depositional disturbance. Thus, there appears to be a degree of equifinality in the use of Rose diagrams for refit lines, since they may indicate either flow direction/slope processes in water/gravity-disturbed assemblages or, on the contrary, the very position of the knapper in pristine sites. Results should therefore be considered in the context of other proxies to achieve an accurate understanding of site formation processes. Beyond the small sample analysed here (limited to four experiments), it should also be stressed that our experimental models are reductionist and do not account for the near-infinite number of variables that may affect the dispersion of artefacts and therefore the refit line circular histograms. From the knapper's changing position and the fragmentation of reduction sequences to the use life of artefacts, many factors will influence the Rose diagrams of refit sets in real archaeological contexts. Despite these caveats, however, our results indicate that patterning exists in refit line orientations of some types of knapping episodes. Therefore, the analysis of circular histograms offers heuristic potential for high-resolution interpretations of spatial dynamics, particularly in near-pristine sites (e.g. Pigeot 1990 ; Bodu et al. 1990 ; Vaquero et al. 2001 ; Roberts and Parfitt 1999 ), where preferential orientations may be used to investigate activity areas and micro-spatial patterns. Conclusions The identification of single knapping episodes in the Palaeolithic record (e.g. Pigeot 1990 ; Fischer 1990 ; Bodu et al. 1990 ; Pope 2002 ) makes it relevant to model the distribution of experimental assemblages and reconstruct their spatial dynamics. 
With the aid of GIS techniques, we have shown in this paper that knapping scatters have clustered distributions, with high-density concentrations set within broader low-density areas. These spatial dynamics seem to be patterned, and therefore it might be possible to use density models to explore aspects such as handedness and flaking methods in the archaeological record. Refit patterns in our experiments show that most conjoins span short distances, the majority falling within a half-meter radius of the knapper’s position. While this is only to be expected in knapping episodes in which tools were not transported, our analysis also yields interesting results concerning the preferential orientation of refit lines. Preferential orientation of artefacts is a strong indicator of post-depositional disturbance, and circular histograms of refit connections have been used to address the extent of taphonomic processes (e.g. Austin et al. 1999 ; Ashton et al. 2005 ; de la Torre and Wehr, 2018 ; Deschamps and Zilhão, 2018 ). However, our results show that, perhaps counterintuitively, preferential orientation of refit lines is the common pattern in pristine flaking scatters associated with bipolar knapping. Although these results introduce some degree of equifinality into the interpretation of Rose diagrams for refit lines, they also highlight the potential of applying orientation analysis to refit studies, which can contribute high-resolution data on spatial dynamics of conjoining sets in Palaeolithic assemblages.
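The conjoin metrics discussed in these conclusions, connection length and the axial (undirected) azimuth of each refit line, are straightforward to derive from plotted coordinates. A minimal sketch with invented coordinates, not the authors' GIS workflow:

```python
import math

def refit_metrics(refits):
    """For each refit pair ((x1, y1), (x2, y2)) in metres, return the
    connection length and its axial azimuth (0-180 deg, measured from
    grid north, undirected)."""
    out = []
    for (x1, y1), (x2, y2) in refits:
        length = math.hypot(x2 - x1, y2 - y1)
        azimuth = math.degrees(math.atan2(x2 - x1, y2 - y1)) % 180
        out.append((length, azimuth))
    return out

# Illustrative conjoins around a knapper at the origin (coordinates in metres)
pairs = [((0.0, 0.1), (0.2, 0.3)), ((0.1, -0.2), (0.9, -0.2)),
         ((-0.3, 0.0), (0.4, 0.05))]
metrics = refit_metrics(pairs)
within_half_metre = sum(1 for length, _ in metrics if length <= 0.5) / len(metrics)
print(metrics, within_half_metre)
```

Binning the axial azimuths (e.g. in 10° classes) gives the Rose diagrams used in the orientation analysis, and the length distribution gives the share of conjoins within a given radius of the knapper's position.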
Alfonso Benito Calvo, a geologist at the Centro Nacional de Investigación sobre la Evolución Humana (CENIEH), has participated in a paper published recently in the journal Archaeological and Anthropological Sciences, which reproduced the knapping process observed at Olduvai (Tanzania) using one of the most abundant raw materials at those sites: quartzite. This experimental work, a collaboration between members of the CENIEH, University College London, the Max Planck Institute and the Universidad Autónoma de Barcelona, is based on studying the spatial patterns of refits, that is to say, the assembly or matching of the lithic material to reconstruct the original geometry prior to knapping. First, the quartzite rocks were knapped, and then the position and orientation of each resulting fragment were plotted exhaustively, yielding detailed maps showing the distribution of the materials. "Starting from these maps, we have carried out a spatial analysis of the layout of the fragments and their refits, using GIS applications designed for specialist analysis of spatial databases," explains Benito. The results obtained show very different spatial patterns characteristic of each knapping technique: bipolar or freehand. Comparison of these experimental reference patterns with the distributions found at the sites will allow the amount of post-depositional disturbance the sites have suffered to be quantified, enabling further investigation of the processes that have affected them. Credit: CENIEH
10.1007/s12520-018-0701-z
Computer
An easy-to-make, double-duty curved image sensor
Kan Zhang et al. Origami silicon optoelectronics for hemispherical electronic eye systems, Nature Communications (2017). DOI: 10.1038/s41467-017-01926-1 Journal information: Nature Communications
http://dx.doi.org/10.1038/s41467-017-01926-1
https://techxplore.com/news/2017-11-easy-to-make-double-duty-image-sensor.html
Abstract Digital image sensors in hemispherical geometries offer unique imaging advantages over their planar counterparts, such as wide field of view and low aberrations. Deforming miniature semiconductor-based sensors with high-spatial resolution into such format is challenging. Here we report a simple origami approach for fabricating single-crystalline silicon-based focal plane arrays and artificial compound eyes that have hemisphere-like structures. Convex isogonal polyhedral concepts allow certain combinations of polygons to fold into spherical formats. Using each polygon block as a sensor pixel, the silicon-based devices are shaped into maps of truncated icosahedron and fabricated on flexible sheets and further folded either into a concave or convex hemisphere. These two electronic eye prototypes represent simple and low-cost methods as well as flexible optimization parameters in terms of pixel density and design. Results demonstrated in this work combined with miniature size and simplicity of the design establish practical technology for integration with conventional electronic devices. Introduction Biological eyes are highly sophisticated and remarkably designed vision organs that have inspired biomimicry for several centuries. From lobster eye-inspired radiant heaters to moth eye-inspired anti-reflective coatings, human challenges have been solved by nature’s advice in a wide variety of applications 1 , 2 . Also, the camera, which is undoubtedly the most revolutionary invention of mankind inspired by the eye, has immensely comforted, amused, and protected human lives. Today, technological advances have improved the quality of cameras with superior resolution, long focal lengths, and smart functionalities that are implemented in almost every consumer electronics system. 
In addition to these evolutions, reshaping conventional planar sensor systems into hemispherical formats would empower visual recordings with features that are beyond what state-of-the-art cameras can see, such as infinite depth of field, wider view angle, and lower aberrations 3 . Across the eye systems found in biology, most eyes have photoreceptors, which capture and transduce photons into electrochemical signals, arranged in either concave or convex curvature. The concave array is mostly found in mammals as a camera or pin-hole type while the convex array is found in insects as a compound type. The camera or pin-hole eye has an outstanding quality of vision as it focuses light into an array of photoreceptors laid in a hemispherical concave structure (i.e., retina), allowing for clear identification of objects. The retina adopts the curvilinear shape that approximates the focal plane of the lens such that human eyes have large view fields and supreme focusing capabilities 4 . The compound eye has a wider-angle field of view via hundreds to thousands of ommatidia that are densely arrayed in a hemispherical convex structure for the sensitive detection of moving objects 5 . As such, biomimicry using semiconductor sensor systems structured in hemispherical formats would extend the capabilities of camera systems by utilizing the marvelous features of biological eyes. Although numerous types of artificial eyes have been presented in the past, the inspired systems lacked the essential photo-detecting unit 6 , 7 , 8 , 9 . To take advantage of mimicking biological eyes in electronic imaging systems, photodetectors must be arrayed in either hemispherical concave or convex formats rather than in the planar format typically found in conventional camera systems. Fabricating devices on non-planar surfaces, however, can be a major challenge because conventional fabrication techniques were developed for planar wafers or plate materials in the semiconductor industry. 
The simplest approach to deforming sensors was to mount optoelectronic components on a flexible printed-circuit board (FPCB), but this could only achieve hemicylindrical photodetector array designs 10 . Specialized techniques to apply stress on and bend ordinary planar silicon wafer-based CMOS image sensors have been introduced as well, which have been successful in inducing minor curvatures 11 , 12 . The available techniques for direct fabrication on non-planar surfaces, such as soft lithography, mechanical molding, and lens-assisted lithography, are complicated and expensive, and have very specific requirements. Encouraged by the promising prospects of non-planar devices, novel strategies have been investigated to circumvent the limits set by non-planar surfaces while utilizing mature semiconductor fabrication techniques for economic reasons. For instance, transfer printing of ultrathin semiconductor nanomembranes onto rubber- or plastic-like substrates transformed the shape of high-performance electronics and optoelectronics into flexible and stretchable formats 13 , 14 , 15 , 16 , 17 , 18 . These unusual semiconductor devices on complex curvilinear surfaces are versatile in various areas due to their new degree of design freedom and biomimicry merits, including the hemispherical photodetector array 19 , 20 , 21 , 22 , 23 . Successful integration of stretchable photodetectors with camera systems has demonstrated concave and convex curvilinear photodetector arrays for the hemispherical electronic eye camera that mimicked the human eye 21 and compound electronic eye camera that mimicked the arthropod eye 22 . In both designs, a large array of thin silicon photodiodes separated by serpentine traces of metal for electrical interconnects was originally fabricated on a planar host substrate and transfer-printed onto rubber substrates. 
Upon hydraulic actuation, the array on rubber deformed and stretched into either a concave or convex structure, where the geometry of the serpentine wire tortuosity deterministically transformed the layout without electrical or mechanical failure. Both concave and convex camera systems that used silicon optoelectronics were groundbreaking steps in camera evolution, but the hydraulic actuators they require may be too bulky for the miniaturized camera systems used in consumer devices. Also, the large separation distance between photodetector pixels, reserved for the micrometer-range electrical traces, may pose limitations in resolution optimization. An approach that is both compatible with commercially available imaging systems and has flexible optimization parameters is desirable for such hemispherical photodetector arrays to become more practical. Here, we present a unique origami-inspired approach, combined with semiconductor nanomembrane-based flexible electronics technology, to build dense, scalable, and compact hemispherical photodetector arrays. Originating as an art of paper folding, origami and kirigami were recently utilized to assemble three-dimensional structures with micro/nanomembrane materials to allow an increasingly wide range of applications 24 , 25 , 26 , 27 , 28 . Precut membranes have been structured into numerous types of interesting three-dimensional assemblies via buckling and folding to form various electronic components, including antennas, solar cells, batteries, nanogenerators, waveguides, photodetectors, and metamaterials 29 , 30 , 31 , 32 , 33 , 34 , 35 , 36 , 37 . Although a similar approach to forming a curved silicon photodetector hemisphere was introduced in the past, optical imaging using the hemisphere array has not yet been demonstrated 38 . In this work, the folding mechanism is implemented for both concave and convex curvilinear photodetector arrays with single-crystalline silicon nanomembranes. 
The low flexural rigidity of single-crystalline nanomembrane allows high-performance photodetectors to bend with microscale radius of curvature 39 . Combining the origami-inspired approach with the transfer printing of advanced inorganic nanomembranes on flexible substrates, high-performance hemispherical electronic eye camera systems are fabricated to allow for unusual imaging that could not be done with conventional camera systems. Furthermore, the origami-based fabrication eliminates the use of metal wires in-between pixels for the connection of sparsely arrayed devices (as seen in other similar systems that limited resolution optimizations), as well as eliminating the need for the sophisticated actuators that were used to form the hemispheres 21 , 22 . Results Geometric origami for hemisphere-like silicon optoelectronics Figure 1 illustrates the concept of geometric origami used for the photodetector array. In geometry mathematics, a quasi-spherical solid is formed using one of the renowned Archimedean solids—the truncated icosahedron—which is a combination of multiple pentagonal and hexagonal faces, typically found in soccer balls or buckminsterfullerene molecules. As presented in Supplementary Fig. 1a, a net of half truncated icosahedron was first mapped and cut on the flexible substrate, followed by folding the net to create a quasi-hemisphere. The edges of the hemisphere-like structure could be further smoothed out by dividing the large pentagonal and hexagonal faces into even smaller polygon faces as presented in Supplementary Fig. 1b . The subdivided icosahedron not only smoothed out the edges of the hemisphere, but also allowed more pixels inside the geometry to improve fidelity. As an example, Fig. 1a represents a schematic illustration where the 676 polygon blocks were mapped into a net of subdivided half truncated icosahedron which was then folded to form a hemisphere. 
Figure 1b shows a photograph of the folded truncated icosahedron using metallized silicon nanomembrane blocks, printed onto a flexible polyimide substrate. The entire microfabrication of the electronic devices, including silicon etch, metal deposition, and device passivation, was completed in a planar format prior to deformation, simplifying the process flow of curvilinear semiconductor devices by leaving the deformation mounting to the last step and thus preserving the feasibility of most semiconductor fabrication techniques. As a result, a simple and practical method to integrate well-developed planar devices onto complex curvilinear surfaces was achieved, enabling diverse applications that were difficult to address using conventional means. As presented in Fig. 1c, d , the net may be folded upwards for a concave hemisphere or downwards for a convex hemisphere. For instance, the concave array may be used to mimic the retina in either pin-hole- or camera-type mammalian eyes while the convex array may be used to mimic the ommatidia in a compound eye. Using nanomembranes combined with flexible substrates, the ultrathin photodetector array further bends to yield a smooth hemisphere. Figure 1e, f represents the abovementioned concave and convex photodetector arrays, respectively, which are analyzed and discussed in detail in later sections. Fig. 1 Geometric origami of silicon optoelectronics for the hemispherical electronic eye. a Schematic illustration of the net of half truncated icosahedron being folded into a hemisphere. 676 polygon blocks consisting of pentagons and hexagons were mapped into a net of subdivided half truncated icosahedron which was then folded to form a hemisphere. b A photograph of the half truncated icosahedron based on polygon blocks of metal-coated silicon nanomembranes printed on a flexible polyimide film. 
The completed net was folded into a convex hemisphere by inserting the net into a circular hole of a metal fixture. Scale bar, 1 mm. c Schematic illustration of the net of half truncated icosahedron based on silicon nanomembranes pressed into a hemispherical concave mold. d Schematic illustration of the net of half truncated icosahedron based on silicon nanomembranes covered on a hemispherical convex mold. e A photograph of a silicon optoelectronics-based hemispherical focal plane array formed using the concave mold-based origami approach shown in c . Inset image shows the flat focal plane array before folding. Scale bar, 2 mm. f A photograph of a silicon optoelectronics-based convex hemispherical eye camera formed using the convex mold-based origami approach shown in d . Inset image shows the flat eye camera before folding. Scale bar, 2 mm Si nanomembrane-based photodiodes for origami optoelectronics Silicon-based lateral P–i–N photodiodes were used as sensors in this study due to their broad spectral response, as well as the large bandgap of silicon and the fast response of the P–i–N structure. The schematic illustration shown in Fig. 2a and the optical microscope image in Fig. 2b represent the photodetector unit implemented in the electronic eyes. The photosensitivity results of the photodetector diode are shown in Fig. 2c . The measured dark current density was lower than 1 × 10⁻¹⁴ A μm⁻² up to a –5 V bias and weakly dependent on the reverse-bias voltage. The photocurrents measured at –3 V under the illumination of three visible lasers, including green (543 nm), yellow (594 nm), and red (633 nm) lasers, were 1.74 × 10⁻¹¹ A μm⁻², 1.21 × 10⁻¹¹ A μm⁻², and 1.95 × 10⁻¹¹ A μm⁻², respectively. The measured current densities of the photodetectors at the different power levels exposed with each visible laser are shown in Supplementary Fig. 2a–c . The ratio between the photocurrent and the dark current showed about a 10⁴-fold difference. 
The calculated photoresponsivity was 9.49 mA W⁻¹, 6.24 mA W⁻¹, and 5.26 mA W⁻¹ at –3 V under the green, yellow, and red lasers, respectively, as shown in Fig. 2d . The external quantum efficiency (EQE) was calculated using the photocurrent and the incident light power. At –3 V, the EQE was 2.2%, 1.3% and 1.0% for the green, yellow, and red lasers, respectively. Imaging with these photodetectors was performed using the green laser, as the photodiodes were most responsive to green light. Fig. 2 Electrical properties of a silicon optoelectronic device used for the electronic eyes. a Schematic illustration of a hexagon-shaped silicon nanomembrane-based photodiode used for the electronic eyes. An array of such photodiodes was printed and fabricated on a pre-cut flexible polyimide substrate. b Optical microscope image of the photodiodes. Scale bar, 50 μm. c Current density–voltage characteristics of the photodiode in the dark and under the illumination of lasers with wavelengths of 543 (green), 594 (yellow), and 633 nm (red). d Responsivity and external quantum efficiency of the photodiode under the illumination of lasers with green, yellow, and red wavelengths. The laser light intensities were 5 mW for the green and yellow wavelengths and 7 mW for the red wavelength. The green laser was used for the rest of this study Origami optoelectronics for hemispherical focal plane array Figure 3a presents a schematic illustration of the hemispherical focal plane array (FPA) based on origami silicon optoelectronics. The array was designed such that the pixels were laid out to form a large net of subdivided half truncated icosahedron, where each pixel contained a single photodetector and was shaped into either a pentagon or a hexagon, and electrically connected by metal interconnects. A macroscopic view of the hemisphere formed with this combination of polygons is represented with paper origami, as presented in Supplementary Fig. 3 . 
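The reported EQE values follow directly from the responsivities via EQE = R·hc/(qλ), where hc/q ≈ 1239.84 eV nm. A quick sketch reproducing that conversion for the values quoted above:

```python
# EQE = R * h*c / (q * wavelength); with hc/q ≈ 1239.84 eV*nm this reduces
# to EQE ≈ R[A/W] * 1239.84 / wavelength[nm].
HC_OVER_Q_EV_NM = 1239.84

def eqe(responsivity_a_per_w, wavelength_nm):
    return responsivity_a_per_w * HC_OVER_Q_EV_NM / wavelength_nm

# Responsivities reported at -3 V (in mA/W) and the three laser wavelengths
for r_ma_per_w, wl_nm in [(9.49, 543), (6.24, 594), (5.26, 633)]:
    print(f"{wl_nm} nm: EQE = {100 * eqe(r_ma_per_w / 1000, wl_nm):.1f} %")
```

Running this reproduces the 2.2%, 1.3% and 1.0% figures quoted for the green, yellow and red lasers.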
In total, there were 281 photodetectors in the hemispherical array, each adjacent to its neighbors. The inner diameter of a single hexagonal photodetector was 113 μm, as presented in Supplementary Fig. 2d . The net was first fabricated in a planar format, where conventional optoelectronics processes involving high temperatures and chemical solvents were utilized, and transfer-printed onto a flexible polyimide substrate. Once the fabrication and passivation of the device were complete, the flexible net of half truncated icosahedron was mounted onto a concave fixture to mechanically transform it into a hemisphere. To precisely mount and fold the net onto a fixture, a metal-based hemisphere (concave) fixture and a polydimethylsiloxane (PDMS)-based reverse (convex)-hemisphere pressing mold were prepared. The net was centered onto the reverse mold, where the adhesion of the PDMS-based reverse mold temporarily held the net during the mounting process. The mounting process was completed by coating the metal concave fixture with a thin layer of epoxy glue and gently pressing the reverse mold with the device onto the fixture. This process flow is applicable to curvilinear surfaces with different curvatures and was verified by mounting the net onto two concave fixtures with different radii of curvature ( r = 2.27 mm and 7.20 mm), as presented in Supplementary Fig. 4 . Supplementary Fig. 4a and b describe the detailed design parameters of the metal concave fixtures with small and large radii, respectively. During the mounting process, mechanical deformation was introduced to each pixel. However, the low flexural rigidity resulting from the extremely small thickness of the silicon nanomembrane used as the photodetector material allowed the mechanical deformation to have a negligible impact on the performance of the device 40 . The performance of the photodetector started to degrade when the radius of curvature reached 1.5 mm, as presented in Supplementary Fig. 5 . 
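The tolerance of the nanomembranes to folding can be rationalised with the usual thin-film bending estimate: peak surface strain ε ≈ t/(2r) for a membrane of thickness t bent to radius r, assuming the neutral plane sits at mid-thickness. The thicknesses below are illustrative assumptions, not the device stack reported in the paper:

```python
def peak_bending_strain(thickness_m, radius_m):
    """Peak surface strain of a thin membrane bent to a given radius,
    assuming the mechanical neutral plane lies at mid-thickness."""
    return thickness_m / (2 * radius_m)

for t_nm in (20, 200):                    # assumed Si nanomembrane thicknesses
    for r_mm in (1.5, 2.27, 7.20):        # radii of curvature discussed above
        strain = peak_bending_strain(t_nm * 1e-9, r_mm * 1e-3)
        print(f"{t_nm} nm membrane at r = {r_mm} mm -> {100 * strain:.4f} % strain")
```

Even at the 1.5 mm radius where degradation set in, a sub-micrometre membrane stays orders of magnitude below the roughly 1% fracture strain commonly cited for silicon nanomembranes, which is consistent with folding having a negligible effect on device performance.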
The layer thicknesses of the silicon nanomembrane, metal interconnects, and polymer passivations can be reduced to achieve curvilinear photodetector arrays with smaller radii of curvature. For instance, a 20 nm thick silicon nanomembrane photodetector could wrap around the cladding layer of a single-mode fiber (typically 125 μm in diameter) to detect light leakage 39 . It is important to carefully control the mechanical neutral plane of the device, such that minimal stress is applied to the most fragile part of the device 13 . Fig. 3 A concave hemispherical electronic eye camera system using origami silicon optoelectronics. a Schematic illustration of the hemispherical focal plane array (FPA) based on origami silicon optoelectronics. The fully formed photodiode array, fabricated into a net of half truncated icosahedron, was pressed and folded into a concave mold to create the hemispherical geometry. b Optics setup of the hemispherical electronic eye system shown using a schematic illustration with a light source, imaged object, and plano-convex lens to the left of the FPA. c A photograph of the hemispherical FPA based on origami silicon optoelectronics. Inset image shows a photograph of the electronic eye system with the plano-convex lens integrated on top of the FPA. Scale bar, 1 mm. d Ray patterns traced from different angles plotted against the position from the object plane. Right inset plot shows a magnified view of the dotted box shown in the plot. Left inset shows the calculated focal plane of the ray passing through the plano-convex lens (dotted red curve) and measured focal plane of the silicon optoelectronics array (blue curve). e High-resolution image of the letter ‘W’ acquired from the hemispherical electronic eye camera. The image was scanned from 0° to 60° in 12° increments for the refined imaging. Each inset image shows a snapshot at each degree angle, with the reference photodiode highlighted in green. 
f High-resolution image of the letter ‘W’ acquired from the hemispherical electronic eye camera matching the concave hemispherical surface of the FPA A simple camera system using a hemispherical FPA was assembled as presented in Fig. 3b . A plano-convex lens (10 mm diameter and 10 mm focal length) was placed between the array and the imaged object to focus the light. Figure 3c shows a photographic image of the FPA, with the inset image showing the assembled device including the plano-convex lens. Individual components, including the laser, image, lens, photodetector array, etc., were placed on a rail, allowing for flexible adjustments to the imaging setup, as shown in Supplementary Fig. 6 . Figure 3d represents the simulated focal plane of the camera system. The distance between the plano-convex lens and the focal plane array was approximated by simulating an object projected onto a planar focal plane behind the lens. As shown in Supplementary Fig. 7 , the focal plane from the planar plane of the lens was best approximated in the range of 7.0–8.5 mm, which agreed well with the back focal length (8.18 mm) provided by the lens manufacturer. With proper adjustments to the lens position, the focal plane had acceptable detection accuracy with the photodiode array, as shown in the left inset plot in Fig. 3d . Imaging of the object was also performed with the FPA mounted on a larger radius of curvature (7.20 mm) to demonstrate the origami photodetector array’s potential for hemispheres with various radii of curvature. A plano-convex lens with a larger focal length (10 mm diameter and 20 mm focal length) was used for the photodiode array with the larger radius. The focal plane for the photodiode array with the larger radius also had acceptable detection accuracy, as presented in Supplementary Fig. 8 . 
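The lens geometry can be sanity-checked with the thin-lens equation 1/f = 1/d_o + 1/d_i. A real plano-convex lens has principal-plane offsets, so the computed image distance only approximates the quoted back focal length; this is an order-of-magnitude check under the thin-lens assumption, not the authors' ray-tracing simulation:

```python
# Thin-lens image distance: 1/f = 1/d_o + 1/d_i  ->  d_i = 1 / (1/f - 1/d_o)
def image_distance_mm(focal_mm, object_mm):
    return 1.0 / (1.0 / focal_mm - 1.0 / object_mm)

# f = 10 mm lens with the object at ~10.3 cm (small-radius setup), and
# f = 20 mm lens with the object at ~20 cm (large-radius setup)
print(image_distance_mm(10, 103))   # ~11.1 mm behind the principal plane
print(image_distance_mm(20, 200))   # ~22.2 mm behind the principal plane
```

Both image distances are of the same order as the manufacturer-specified focal lengths, with the residual offset attributable to the thick-lens principal-plane correction and the depth of the curved FPA.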
In addition, the distance at which the camera system should be placed was calculated using the ray traces plotted against the distance between the object and the camera system. With the hemispherical focal plane’s radius of curvature fixed at 2.27 mm, the position of the plano-convex lens was calculated to be 10.3 cm away from the object plane (for a larger radius of 7.20 mm, the distance was increased to 20 cm). Figure 3e, f shows the images obtained from the hemispherical FPA. Multiplexers allowed for the recording of signals from the large array of photodetectors in the matrix. The array of photodetectors with rows and columns of metal interconnects was connected to ten CMOS analog multiplexer circuits, where two 8-to-1 multiplexers controlled each side area of the half truncated icosahedron net. Imaging using the photodetector array was controlled with a computer programmed design platform in a passive matrix format. The recording mechanism and the multiplexer layouts, as well as the printed-circuit board (PCB) layout, are presented in Supplementary Fig. 9 . The dense array of the photodetectors imaged the letter ‘W’ with relatively high-spatial resolution. To further improve the image quality and eliminate the dark spots from defective pixels, a sequence of images was collected while rotating the imaged object. Images were taken after rotating the image counterclockwise in 12° increments from 0° to 60°. Ideally, the camera system should be rotated, rather than the imaged object, as the object remains stationary when a photo is taken. Rotation of the camera system was limited in the setup as shown in Supplementary Fig. 6 , thus the imaged object was rotated instead, mimicking the clockwise rotation of the camera system. A total of six images were combined and reconstructed to obtain a scanned, improved-quality image as shown in the middle image of Fig. 3e . Each of the six smaller images in Fig. 
3e around the middle scanned image represents single scanning at a given rotation angle, with the reference photodiode shown in green for easier visualization of rotation. A high-resolution image of the letter ‘W’ acquired from the hemispherical electronic eye camera was rendered using numerical computing software to match the hemispherical surface of the FPA, as presented in Fig. 3f . It is expected that an image with higher resolution may be achieved by increasing the number of scans. The same set of experiments were performed for the photodetector array with the larger radius of curvature (7.20 mm). As presented in Supplementary Fig. 10 , for the larger radius of curvature, a slightly deteriorated and blurred letter was detected as compared to the image from the smaller radius, possibly due to optical aberrations associated with an imperfect FPA and lens combination. It should also be noted that the design of truncated icosahedron was not optimized for such a large radius of curvature, which created blind spots within the FPA. Nevertheless, the use of hemispherical FPAs largely benefited from the simplified optical elements required to image an object, as recording with planar FPA requires complicated optical systems to eliminate off-axis aberrations such as astigmatism, field curvature, and coma. This not only saved cost, but also allowed for a much simpler and compact camera design. Origami optoelectronics for artificial compound eye Different from concave FPAs where light is focused onto with lens, a photodiode array may be folded in a reverse manner for potential compound eye mimicking cameras that do not need any external optics. The same fabrication technique can be used to create a convex array, with minor modifications to the individual photodiodes to include microlenses. Biological eyes with photoreceptors arranged in a convex format are typically found in compound eyes. 
The structure of compound eyes differs from mammalian eyes in that they are composed of ommatidia on a convex surface. In each ommatidium, a tiny corneal lens focuses incoming light rays onto a single photoreceptor inside of it. Such imaging elements can be artificially created on top of each photodetector using the simple photoresist reflow approach 41 , 42 , 43 . A microlens placed on top of each detecting unit maximizes the amount of light delivered to the photodiode by accepting light from large incident angles (Supplementary Fig. 11 ). A slight decrease in incident light was observed with the microlens at 0°, attributed to light absorption in the photoresist, but the loss was negligible and the lens transmitted a higher percentage of incident light at larger incident angles. Whereas an actual ommatidium consists of additional elements (such as pigment cells and crystalline cones that, together with the corneal lens, focus light and isolate the ommatidium from its neighbors), the convex array demonstrated in this report lacks such isolating elements. However, it demonstrates proof of concept that complex elements like the corneal lens can be fabricated on the photodetector array formed using the origami approach. Similar to the concave hemispherical FPA, the convex hemispherical electronic eye was formed using the same net of the subdivided half truncated icosahedron, with a radius of curvature of 2.27 mm. With the photoresist microlens fabricated on top of each photodetector, the net was mounted on a convex fixture as shown in the schematic illustration in Fig. 4a . The mounting process for the convex array was completed by coating the convex fixture with a thin layer of epoxy glue and pressing down the net with a reverse mold. A photographic image of the device before being mounted on the convex fixture is presented in Supplementary Fig. 12 .
Figure 4b shows a photographic image of the convex hemispherical electronic eye and its inset image shows the device mounted on the PCB system. To demonstrate its ability to image with a wide field of view, a narrow laser beam was fired at an angle of 36° from the PCB plane, as illustrated in Fig. 4c . Figure 4d shows the image of the laser light acquired from the electronic eye camera matching the convex surface of the photodetector array, with brighter regions indicating the photodetectors of the convex electronic eye camera that detected the laser light. Although the convex design of the photodetector array enabled peripheral vision, the scanned image was closer to a blurry spot than to a detailed single point. This was due to the large acceptance angle of the device, which led to an overlapping of the light received by adjacent diodes with microlenses. Additional biomimicry elements that isolate each photodiode and optimize acceptance versus inter-ommatidial angle could eliminate these adverse effects. As shown in this conceptual design, the convex hemispherical camera benefits from its capability to detect light from wide angles without any need for the external optics of a camera-type eye. Such aspects are especially useful for visually controlled navigation and optomotor responses that do not necessarily require extreme resolution, but do require wide view angles and minimized device layouts. With further optimization of the optical components by adding layers that mimic the pigment cells and crystalline cones, a compound electronic eye system with panoramic vision may also become feasible. Fig. 4 A convex hemispherical electronic eye camera system using origami silicon optoelectronics. a Schematic illustration of the convex hemispherical electronic eye camera based on origami silicon optoelectronics.
Fully formed photodiode array fabricated into a net of half truncated icosahedron was covered and folded onto a convex mold to create the hemispherical geometry. b A photograph of the convex hemispherical electronic eye camera based on origami silicon optoelectronics. Each photodiode is integrated with a polymer microlens to mimic the corneal lens in a compound eye. Inset image shows a photograph of the compound electronic eye system mounted on a printed circuit board. Scale bar, 1 mm. c Optics setup of the compound electronic eye system shown using a schematic illustration, where the point laser is illuminated from an incident angle of 36°. d Image of the laser point acquired from the compound electronic eye camera matching the convex surface of the camera. Discussion The biomimicry of eyes demonstrated in this report utilized the simple origami approach of deforming flexible electronics into hemispherical formats, which successfully generated two very important camera systems with a large number of photodetectors in a dense array. The density of the array can further be expanded by splitting the polygon blocks into smaller blocks or by attaching more pixels around the array. Moreover, the fabrication process can be made compatible with existing CMOS sensor technology with extremely high densities by releasing the array of CMOS sensors fabricated on silicon-on-insulator (SOI) wafers and origami-deforming the array at the last step. The conventional silicon manufacturing techniques used to fabricate the photodetectors, as well as the miniature size of the finished device, are beneficial to advance such an approach into commercial electronic systems. The easily scalable pixel density and the simplicity of the device structure are the key features of this method. Future research includes developing tunable hemispheres for the origami optoelectronics and mounting mechanisms for easy integration with other electronics.
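The pixel-density scaling from splitting the polygon blocks can be illustrated with a simple count. This is a hypothetical counting scheme, assuming the pentagon and each hexagon of the net are first split into their central triangles and each triangle edge is then subdivided k ways; the reported device's actual pixel layout differs.

```python
def pixel_count(k):
    """Count triangular pixels on the half truncated icosahedron net
    (one pentagon surrounded by five hexagons) when each face is split
    into its central triangles and every triangle edge is subdivided
    into k parts (subdividing a triangle k ways yields k**2 triangles).
    """
    pentagon_pixels = 5 * k ** 2       # the single pentagon -> 5 triangles
    hexagon_pixels = 5 * 6 * k ** 2    # five hexagons -> 6 triangles each
    return pentagon_pixels + hexagon_pixels  # = 35 * k**2
```

With k = 1, the six polygons of the net already yield 35 triangular pixels; each doubling of k quadruples the count, which is the sense in which the faces "can be forever divided" to increase density.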
Furthermore, other convex isogonal polyhedral concepts, such as the dodecahedron or rhombicosidodecahedron, may be employed as origamis to create unusual optoelectronics or electronics in hemispherical formats. Applying this concept to state-of-the-art digital cameras that capture high-quality images, or to surveillance cameras using infrared night vision, is also desirable and would further expand the capabilities of cameras. Methods Fabrication of Si-based photodiodes on flexible film The fabrication of both concave and convex hemispherical eyes started from a lightly p-doped SOI wafer (SOITEC™) which had a 270 nm device layer and 200 nm buried oxide (BOX) layer. The wafer was patterned and heavily doped with boron and phosphorus to form N+ and P+ regions using ion implantation (boron, dose of 4 × 10 15 cm −2 at an energy of 20 keV, and phosphorus, dose of 4 × 10 15 cm −2 at an energy of 30 keV), followed by diffusion at 950 °C for 20 min in a 5% O 2 , 95% N 2 ambient atmosphere. An array of etch-holes was made using photolithography and reactive-ion etching (RIE) to partially expose the BOX layer, and the processed top Si nanomembrane layer was released by immersing it in concentrated hydrofluoric acid (49%) for 2 h. A flexible polyimide film (Kapton HN; Dupont; 127 μm) was prepared by laser cutting the film (A-laser) to match the half truncated icosahedron pattern of the array. The patterns of the photodiode array and flexible substrate both corresponded to a shape consisting of one pentagon surrounded by five hexagons, so that the finished array could be wrapped onto a hemispherical fixture. The Si nanomembrane was directly transferred onto the polyimide film by pressing the adhesive-coated (SU-8 2; Microchem; 2 μm) polyimide film against the released nanomembrane. During this process, a modified mask aligner (MJB-3; Karl Suss) was used to precisely align and transfer the nanomembrane to the precut polyimide substrate.
After curing the adhesive, the silicon nanomembrane was patterned and etched (RIE) into polygon blocks to isolate the pixels, and the SU-8 (SU-8 2; Microchem; 2 μm) passivation layer was patterned with via-holes, followed by the deposition of first metal interconnects (Ti/Au = 30/250 nm). Adding another SU-8 via-hole layer with second metal interconnects and the final SU-8 passivation layer concluded the device fabrication process. These detailed processes are described with schematic illustrations in Supplementary Fig. 13 . Fabrication of polymer microlens for compound eye For the convex hemispherical electronic eye, a microlens was fabricated on each SU-8 (4 μm) passivated photodiode for a wider view field. The fabrication involved photolithography of a thick photoresist (AZ4620; MicroChemicals; 40 μm) and thermal reflow. After isolating the photoresist with photolithography, oven heating for 15 min at 95 °C caused the photoresist to reflow to a near-convex microlens. This process is described with schematic illustrations in Supplementary Fig. 14 . Origami process of Si optoelectronics Mounting the photodetector arrays used the same procedures for both concave and convex arrays. Either the concave or convex fixture was first coated with an adhesive layer and the finished photodetector array (without a microlens for the concave array and with a microlens for the convex array) was carefully pressed against the fixture using a reverse PDMS mold. Finally, the device was mounted onto the PCB using gold wire bonding. Measurement and analysis The measurements of the photodiode were performed using an HP 4155B Semiconductor Parameter Analyzer. Before the photodetector array was folded and mounted onto the hemispherical fixture, it was measured on a planar probe station with laser lights striking perpendicular to the device plane. 
Three different helium neon lasers emitting green (05-LGR-193; Melles Griot), yellow (25-LYR-173; Melles Griot), and red (1137 P; JDSU) lights were used for this study. The normalized current density was calculated for a single hexagonal photodetector for three laser beams. The concave camera system mounted on the lateral rail collected images from a beam expanded (15 × Complete Beam Expander; Edmunds Optics) green laser illuminated through a precut pattern of a letter ‘W’ and a plano-convex lens (#63–471; Edmunds Optics for small radius hemispherical FPA, and #63–473; Edmunds Optics for large radius hemispherical FPA) by recording photocurrents generated at each photodetector. This process was repeated with the imaged object (the letter ‘W’) rotated counterclockwise in 12° increments for six consecutive imaging steps. These were then combined to obtain scanning mode collection data and improve the effective resolution. The convex camera system mounted on the lateral rail recorded photocurrents generated at each photodetector from a narrow green laser illuminated from an angle of 36°. Data availability The data supporting the findings of this study are included within the paper and its Supplementary Information, or available from the corresponding author upon reasonable request.
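The scanning-mode combination described in the Measurement and analysis section, in which six frames taken at 12° increments are counter-rotated and merged so that a defective pixel in one frame is filled from another, can be sketched in simplified form. This is a hypothetical pure-Python illustration on a square pixel grid, not the authors' reconstruction software; `rotate_nn` and `composite_scans` are invented names.

```python
import math

def rotate_nn(img, deg):
    """Nearest-neighbour rotation of a square image about its centre;
    output pixels that map outside the source are left at 0."""
    n = len(img)
    c = (n - 1) / 2.0
    th = math.radians(deg)
    out = [[0.0] * n for _ in range(n)]
    for y in range(n):
        for x in range(n):
            # inverse-map each output pixel back into the source frame
            xs = c + (x - c) * math.cos(th) + (y - c) * math.sin(th)
            ys = c - (x - c) * math.sin(th) + (y - c) * math.cos(th)
            xi, yi = int(round(xs)), int(round(ys))
            if 0 <= xi < n and 0 <= yi < n:
                out[y][x] = img[yi][xi]
    return out

def composite_scans(frames, dead_masks, angles_deg):
    """Counter-rotate each frame by its acquisition angle and average the
    readings of live pixels, so a pixel that is dead (defective) in one
    frame can be filled in from another frame."""
    n = len(frames[0])
    acc = [[0.0] * n for _ in range(n)]
    cnt = [[0] * n for _ in range(n)]
    for frame, dead, ang in zip(frames, dead_masks, angles_deg):
        live = [[0.0 if dead[y][x] else 1.0 for x in range(n)]
                for y in range(n)]
        rframe = rotate_nn(frame, -ang)  # undo the object's rotation
        rlive = rotate_nn(live, -ang)    # track which pixels stayed valid
        for y in range(n):
            for x in range(n):
                if rlive[y][x] > 0.5:
                    acc[y][x] += rframe[y][x]
                    cnt[y][x] += 1
    return [[acc[y][x] / cnt[y][x] if cnt[y][x] else 0.0
             for x in range(n)] for y in range(n)]
```

Each frame contributes wherever its counter-rotated live mask is valid, so six partially defective exposures combine into one image with fewer dark spots, mirroring the improvement shown in the middle image of Fig. 3e.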
These days, we increasingly rely on our cell phone cameras to capture virtually every aspect of our lives. Far too often, however, we end up with photos that are a sub-par reproduction of reality. And while operator error sometimes comes into play, most likely, the camera's digital image sensor is the real culprit. A flat silicon surface, it just can't process images captured by a curved camera lens as well as the similarly curved image sensors—otherwise known as the retinas—in human eyes. In a breakthrough that could, for example, lead to cameras with beyond-the-state-of-the-art features such as infinite depth of field, wider view angle, low aberrations, and vastly increased pixel density, flexible optoelectronics pioneer Zhenqiang (Jack) Ma has devised a method for making curved digital image sensors in shapes that mimic an insect's compound eye (convex) and a mammal's "pin-hole" eye (concave). The Lynn H. Matthias and Vilas Distinguished Achievement Professor of electrical and computer engineering at the University of Wisconsin-Madison, Ma, his students and collaborators described the technique in the Nov. 24, 2017, issue of the journal Nature Communications. Curved image sensors do exist. Yet even though they outperform their flat counterparts, they haven't made it into the mainstream—in part, because of the challenges inherent in a manufacturing method that involves pressing a flat, rigid piece of silicon into a hemispherical shape without wrinkling it, breaking it or otherwise degrading its quality. A concave version of the digital image sensor bends inward for creating a hemispherical focal plane array. Credit: Yei Hwan Jung and Kan Zhang Ma's technique was inspired by traditional Japanese origami, or the art of paper-folding. 
To create the curved photodetector, he and his students formed pixels by mapping repeating geometric shapes—somewhat like a soccer ball—onto a thin, flat flexible sheet of silicon called a nanomembrane, which sits on a flexible substrate. Then, they used a laser to cut away some of those pixels so that the remaining silicon formed perfect seams, with no gaps, when they placed it atop a dome shape (for the convex detector) or into a bowl shape (for a concave detector). "We can first divide it into a hexagon and pentagon structure—and each of those can be further divided," says Ma. "You can forever divide them, in theory, so that means the pixels can be really, really dense, and there are no empty areas. This is really scalable, and we can bend it into whatever shape we want." A convex version of the digital image sensor bends like a soccer ball for mimicking an insect’s compound eye. Credit: Yei Hwan Jung and Kan Zhang That pixel density is a boon for photographers, as a camera's ability to take high-resolution photos is determined, in megapixels, by the amount of information its sensor can capture. Currently, the researchers' prototype is approximately 7 millimeters, or roughly a quarter-inch, in diameter. That's still a bit bulky for your cell phone, but Ma says he can make the sensor even smaller. "This membrane is a very big advance in imaging," he says.
10.1038/s41467-017-01926-1
Medicine
Researchers discover predictor of laser treatment success in patients with glaucoma
Matthew Hirabayashi et al, Predictive Factors for Outcomes of Selective Laser Trabeculoplasty, Scientific Reports (2020). DOI: 10.1038/s41598-020-66473-0 Journal information: Scientific Reports
http://dx.doi.org/10.1038/s41598-020-66473-0
https://medicalxpress.com/news/2020-08-predictor-laser-treatment-success-patients.html
Abstract We sought to determine predictive factors for selective laser trabeculoplasty (SLT) outcome. 252 eyes from 198 adult patients with open-angle glaucoma who underwent SLT between July 2016 and February 2018 with a minimum 6 month follow up were reviewed. We defined success as ≥20% IOP reduction or ≥1 medication reduction without an IOP lowering procedure. We also evaluated the relationship of these factors to postoperative IOP elevation >5 mmHg (IOP spikes). Our primary outcome measure was the association of age, type and severity of glaucoma, pigmentation of the trabecular meshwork (PTM), total energy delivered, and baseline intraocular pressure (IOP) with success. At 2 and 6 months, 33.6% (76/226) and 38.5% (97/252) of eyes met success criteria respectively. Baseline IOP > 18 mmHg was significantly associated with success both at 2 and 6 months, reducing IOP by 5.4 ± 5.3 mmHg (23.7% reduction), whereas those with lower baseline IOP had a change of −0.7 ± 4.6 mmHg (4.9% increase) at 6 months ( P < 0.001). No other baseline characteristics significantly predicted success or IOP spikes. Patients with higher baseline IOPs had greater success rates and mean IOP reduction at both 2 and 6 months following SLT. Age, type and severity of glaucoma, PTM, and total energy delivery had no association with procedural success or IOP spikes. Patients with higher baseline IOP may experience greater lowering of IOP after SLT. However, SLT may be equally successful for patients with a variety of other characteristics. Introduction Second in frequency only to cataracts among causes of blindness worldwide, glaucoma currently affects over 70 million people, 10% of whom eventually lose their sight due to optic nerve damage 1 . Since its incidence is estimated to continue increasing, refining treatment approaches becomes more crucial every year to preserve sight 2 .
Selective laser trabeculoplasty (SLT) reduces intraocular pressure (IOP) by improving aqueous outflow, likely through biological rather than anatomic changes in the trabecular meshwork (TM), with proven efficacy ranging from 15–66% and a better safety profile than its predecessor, Argon Laser Trabeculoplasty (ALT) 1 , 3 . Originally indicated only for primary open-angle glaucoma (POAG), it has been shown to be efficacious for many other types of open-angle glaucoma 4 . Building on the principles of ALT, it uses a frequency-doubled, short-pulse (Q-switched) Nd:YAG laser and selectively targets pigmented cells of the TM without damaging the overall anatomical structure of the meshwork 3 . The larger spot size of 400 μm for SLT compared to 50 μm for ALT allows for broader coverage of the angle and eliminates the need for as fine a focus while still providing the “champagne bubble” feedback that is generally recommended every two or three burns for appropriate energy delivery 5 , 6 , 7 . Although the efficacy and safety of SLT have been well documented, predictive factors for the outcome of SLT tend to vary in the literature and large-scale investigations on this topic are limited. SLT has reportedly worked best for patients with ocular hypertension, POAG, pseudoexfoliation, or pigmentary glaucoma 7 . Pigmentation of the TM (PTM) has not been associated with efficacy of the procedure 8 , but has been associated with higher risk of IOP spikes 9 . 360° treatment has been reported to be more effective than 180° treatment in one setting 10 , although many surgeons opt to treat 180° at a time in hopes of decreasing the risk of IOP spikes. With the growing interest in using SLT as first-line treatment and well-known issues with medication compliance, determining which patients would have the best response to this therapy is increasingly important 11 , 12 , 13 .
In this study, we attempted to determine possible predictive factors for both successful outcomes and common complications, including postoperative IOP spike following SLT treatment. Patients and Methods We first obtained University of Missouri Institutional Review Board (IRB) approval; the IRB granted a waiver of informed consent due to the retrospective nature of the study with deidentified data. All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and national research committee and with the 1964 Declaration of Helsinki and its later amendments or comparable ethical standards. We performed a retrospective chart review of 252 eyes of 198 adult open-angle glaucoma patients who underwent SLT at the University of Missouri between July 2016 and February 2018 in accordance with the guidelines and regulations of our IRB approval. All patients completed a minimum 6-month follow-up visit after SLT. We collected pre-op and post-op data at 2- and 6-month follow-up. Our primary outcome measure was the correlation of patient age, type and severity of glaucoma, PTM, and total energy delivery with procedural success. Success was defined as ≥20% IOP reduction or any reduction of medication without increasing IOP, and failure was not meeting these criteria or requiring an additional IOP-lowering surgery before the time point considered. Our secondary outcome measure was the association of baseline characteristics with IOP reduction, medication reduction, and postoperative IOP spike (defined as elevation of IOP > 5 mmHg from baseline 1 hour after the procedure). Laser procedure and protocols All SLT was performed with a 532 nm frequency-doubled Q-switched Nd:YAG laser using a 3 ns pulse and a spot size of 400 μm. Pulse energy varied from 0.6 to 1.4 mJ and was titrated until microbubbles were visualized, then reduced by 0.1 mJ.
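As a concrete reading of the success criterion just defined, the per-eye classification can be sketched as follows. This is a simplified illustration; `slt_success` and the `needed_iop_lowering_surgery` flag are hypothetical names, not part of the study's analysis code.

```python
def slt_success(baseline_iop, followup_iop, baseline_meds, followup_meds,
                needed_iop_lowering_surgery=False):
    """Classify one eye's SLT outcome per the study's definition:
    success = >=20% IOP reduction, or any medication reduction without
    an IOP increase; failure otherwise, or if an additional IOP-lowering
    surgery was required before the time point considered."""
    if needed_iop_lowering_surgery:
        return False
    relative_iop_drop = (baseline_iop - followup_iop) / baseline_iop
    med_reduction = baseline_meds - followup_meds
    return (relative_iop_drop >= 0.20
            or (med_reduction >= 1 and followup_iop <= baseline_iop))
```

For example, an eye going from 20 to 15 mmHg on unchanged medications meets the 20% threshold, whereas 18 to 17 mmHg on unchanged medications does not; dropping one medication without an IOP rise also counts as success.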
The settings for the SLT laser, and therefore total energy, varied depending on TM pigmentation. At 1 hour post procedure, a Goldmann applanation tonometer was used to check IOP by the treating physician or a skilled ophthalmic technician. A drop of Timolol or Brimonidine was given to the operative eye prior to the procedure, and no steroid was used postoperatively. No additional ocular hypotensive drops were given after the procedure unless the patient experienced an IOP spike. IOP was checked similarly at 2 weeks, 2 months, and 6 months post-procedure. Full slit lamp examination with gonioscopy and visual acuity testing were done at 2 weeks, 2 months, and 6 months post procedure. Patients remained on their medication regimen until 2 months, when medications were adjusted at the physician’s discretion based on disease progression and individual IOP targets. Statistical analysis We used paired t -tests to compare preoperative and postoperative IOP. A mixed model (age and preoperative IOP) and logistic regression (type and severity of glaucoma, pigmentation of the trabecular meshwork, total energy, and angle of treatment) with a random patient effect, to account for individuals who contributed two eyes to the sample, were used for determining predictive factors of success. We used independent t -tests and the Wilcoxon signed-rank test to compare IOP and medication reduction based on preoperative IOP, a Mann-Whitney U test to compare total energy delivery between light and dense PTM groups, and independent t - and chi-square/Fisher’s exact tests to assess associations between factors and IOP spike. We reported all values as mean ± standard deviation for data with a normal distribution, median and interquartile range (IQR) for non-normally distributed data, and percentage (n) for categorical variables unless otherwise noted. Results Preoperative characteristics Baseline characteristics are summarized in Table 1 . The mean age of patients was 69.6 ± 11.2 years.
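The paired pre- versus post-operative IOP comparison described in the statistical analysis can be reproduced with a basic paired t statistic. This is an illustrative sketch with made-up IOP values, using only the Python standard library; the study's actual analysis presumably used a dedicated statistics package.

```python
import math
from statistics import mean, stdev

def paired_t(pre, post):
    """Paired t statistic for pre- vs post-treatment measurements.
    Returns (t, degrees_of_freedom); t > 0 indicates a mean reduction
    from pre to post."""
    diffs = [a - b for a, b in zip(pre, post)]
    n = len(diffs)
    # t = mean(d) / (sd(d) / sqrt(n)), with n - 1 degrees of freedom
    return mean(diffs) / (stdev(diffs) / math.sqrt(n)), n - 1

# Illustrative, made-up IOP values (mmHg), not study data:
pre_iop = [20.0, 18.0, 22.0, 19.0]
post_iop = [15.0, 16.0, 17.0, 18.0]
t, df = paired_t(pre_iop, post_iop)
```

The resulting t would then be compared against the t distribution with the given degrees of freedom to obtain a P value, as in the reported pre/post IOP comparisons.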
Females comprised 49.2% of the sample. The majority of patients were Caucasian (86.1%) with primary open-angle glaucoma (80.1%). The severity of glaucoma was 43.3% (109/252) mild and 38.1% (96/252) severe according to ICD-10 guidelines 14 , 15 . Mean preoperative IOP was 17.8 ± 4.4 mmHg and the mean number of preoperative glaucoma medications was 2.0 ± 1.3 [2(2)]. Table 1 Baseline demographic and glaucoma status data ( n = 252). Overall procedural success at 2 and 6 months By our definition, 33.6% (76/226) of patients had a successful SLT outcome at 2 months and 38.5% (97/252) at 6 months. Baseline IOP of 17.8 ± 4.4 (SD) mmHg was reduced to 16.7 ± 4.3 and 15.2 ± 4.9 mmHg at 2 and 6 months respectively, both statistically significant ( P < 0.001). Baseline medications were reduced from 2.0 ± 1.3 [2(2)] to 1.6 ± 1.3 [2(2)] at 2 months and 2.0 ± 1.3 [2(2)] at 6 months. This change in medication number was not statistically significant at 2 months ( P = 0.709) or 6 months ( P = 0.578). Predictive factors for success Findings for predictive factors for SLT success are reported in Table 2 . The only baseline characteristic that significantly predicted success of SLT was baseline IOP, at both 2 and 6 months ( P < 0.001). Patients with baseline IOP of >18 mmHg experienced a mean IOP reduction of 3.7 ± 4.2 mmHg (17.3% reduction) at 2 months compared to −0.7 ± 4.1 mmHg (4.9% increase) in patients with baseline IOP ≤ 18 mmHg ( P < 0.001). Similarly, at 6 months the patients with higher baseline IOP had a mean IOP reduction of 5.4 ± 5.3 mmHg (23.7% reduction) compared to −0.7 ± 4.6 mmHg (4.9% increase) in those with lower baseline IOP ( P < 0.001). Baseline IOP was not associated with mean medication reduction ( P = 0.186). Table 2 Predictive factors for SLT success at 2 and 6 months. Predictors for IOP spikes Our findings for the association between preoperative characteristics and IOP spikes are reported in Table 3 .
Overall, 5.1% (13/252) of eyes experienced IOP spikes (defined as >5 mmHg elevation in IOP after laser surgery). IOP spikes were not associated with procedural success at 2 and 6 months, nor with age, type or severity of glaucoma, baseline IOP, PTM, or total energy delivery. None of the patients experienced any other adverse events related to SLT or vision loss ≥ 2 Snellen lines at any point during the follow-up. Table 3 P -values for IOP Spike vs. patient characteristics or outcome measures. Discussion Many studies have attempted to characterize potential predictive factors of success following SLT; we sought to assess current claims and to contribute previously unexplored predictive factors such as total energy delivery during treatment. Our results agree with the frequently reported finding that baseline IOP is a predictor of SLT success 16 , 17 , 18 , 19 , 20 , 21 , 22 , while factors such as patient age, PTM, and type of glaucoma are not 17 , 20 , 21 , 22 , 23 , 24 . Some of the literature reports an association between dense PTM and IOP spikes, but we did not observe this in our population 9 , 24 . We found that baseline IOP significantly predicted success at both 2 and 6 months, and those with higher IOPs (specifically >18 mmHg) had higher probabilities of a successful outcome and greater lowering of IOP. This may be the result of a flooring effect common to Schlemm’s canal-based procedures, in which IOP lowering cannot be achieved beyond the level of episcleral venous pressure 25 , 26 , 27 . Age, type and severity of glaucoma, PTM, and total energy delivery did not significantly predict a successful outcome at either 2 or 6 months in our population. The existing controversy around this topic is likely multifactorial, and elements such as varying technique and laser settings, success criteria, study power, and clinical protocols can all influence findings.
Predictive factors for SLT success are an increasingly important topic due to growing interest in using SLT as a first-line therapy 11 . The recent randomized large-scale multi-center trial comparing SLT to eye drops as first-line therapy for glaucoma and ocular hypertension (LiGHT trial) demonstrated 95% of eyes at target IOP at 36 months with only 6 IOP spikes in their 770 patients 11 , 12 . These high rates of SLT success compared to ours and other studies may reflect how treatment-naïve eyes can have a favorable response to laser trabeculoplasty improving TM outflow facility, whereas prior treatment with topical aqueous suppressants may reduce the natural capacity of the patients’ TM and physiologic outflow, thereby limiting the response to laser trabeculoplasty 28 . It may also be due to the flexible patient-specific definition of success used in that trial compared to our singular success criterion. Our success rate was consistent with other existing literature that included medically treated glaucoma, which generally reports rates from 15–66% 29 , 30 , 31 , 32 . We also attempted to characterize predictive factors for postoperative IOP spikes. Of the patients in our study, 5.1% (13/252) experienced a postoperative IOP spike >5 mmHg over baseline within 1 hour of treatment. None of the predictive factors we evaluated were associated with IOP spikes. Our rate of IOP spikes is also similar to existing literature including medically-controlled glaucoma patients, in which spike rates are often 10% or less 33 , 34 , 35 . We found no other complications from the laser procedure, and no patients had loss of vision of 2 Snellen lines or more at any point during the 6-month follow-up. While our sample size was greater than in many studies on this topic, we are limited by the retrospective nature of the study and the potential for unobserved confounders.
Glaucoma patients were not randomly selected, and those receiving laser treatment may have had different glaucoma characteristics compared to those managed on medications alone or by incisional surgery. Our sample group may not be generalizable to non-Caucasian ethnicities or to types of glaucoma other than primary open-angle. Our conclusions on predictive factors for postoperative IOP spikes must also be interpreted with care given the small number of events. We also had 54 patients who received bilateral treatment, and a “crossover” effect on the contralateral eye is known to potentially result in a significant IOP reduction 36 , 37 . While this confounder is not eliminated, it was accounted for in the statistical analysis. Additional prospective and randomized studies with a more diversified set of patients will help determine whether these baseline characteristics play the same role in predicting success regardless of ethnicity or type of glaucoma. We would also like to address the treatment of 180° vs. 360° of the angle. There are multiple existing reports of greater angle treatment resulting in greater response 10 , 38 , 39 . Since our focus was on the factors that result in a more favorable SLT outcome, we considered degree of treatment to be one of those factors, as it represents a variation in technique, and we therefore analyzed it as a covariate rather than analyzing the groups separately. This also addresses a relevant question for 180° treatment: “Is more better?”, since spot number and energy delivery also represent variations in technique. We believe presenting the data this way can also inform clinicians that more power and more spots may not necessarily result in a greater response. Despite these limitations, we have demonstrated that SLT is an effective method to lower IOP and medication burden with a low risk of complications, and that it may have particular benefit for those with higher baseline IOP.
Age, type or severity of glaucoma, PTM, and total energy delivery do not appear to significantly predict success, and patients with a wide variety of these characteristics will likely benefit equally.
More than 70 million people worldwide suffer from glaucoma, a condition that causes a build-up of fluid and pressure inside the eye and can eventually lead to blindness. Treatment options have traditionally included eye drops to reduce the fluid the eye produces or surgery to unclog the eye's drainage. But a new study from the University of Missouri School of Medicine and MU Health Care provides insight into which patients might benefit most from a noninvasive treatment called selective laser trabeculoplasty (SLT), which relieves pressure by using a laser to alter the eye tissue, resulting in better fluid drainage. "There's been a lack of evidence about how well SLT works, how safe it is and the ideal candidate," said senior author Jella An, MD, an assistant professor of ophthalmology and a fellowship-trained glaucoma specialist at MU Health Care's Mason Eye Institute. "Because so little is known about SLT, there is a lot of apprehension among specialists about using it as a first-line treatment for glaucoma. Our research findings have helped me redefine the ideal patient for this procedure." An's research team reviewed 252 SLT procedures on 198 adult patients with open-angle glaucoma to determine what percentage of these surgeries achieved a 20% or greater reduction in intraocular pressure (IOP). Two months after surgery, 33.6% of patients met success criteria. At the six-month mark, 38.5% achieved the threshold. The researchers discovered patients with a higher baseline IOP had larger reductions in pressure. "We discovered significant improvement in patients with more severe cases, which convinced me that patients with the highest pressure will benefit the most from this laser therapy," An said. Age, type and severity of glaucoma did not significantly predict a successful outcome. In addition, less than 5% of patients studied experienced the most common adverse event of an IOP spike after the procedure. 
"This study really increased my comfort level to offer SLT as a primary therapy," An said. "Prior to this research, I would prescribe these patients multiple medications, creating the possibility of side effects and poor adherence, which could lead to disease progression. Now I offer this laser first if they are a good candidate because of its safety profile. If it doesn't work, we can always move forward with other options." In addition to An, the study's lead author was MU School of Medicine ophthalmology resident Matthew Hirabayashi, MD. Vikram Ponnusamy, MD, a recent graduate of MU School of Medicine, also contributed to the findings.
10.1038/s41598-020-66473-0
Earth
Researchers tackle methane emissions with gas-guzzling bacteria
Carlo R Carere et al. Mixotrophy drives niche expansion of verrucomicrobial methanotrophs, The ISME Journal (2017). DOI: 10.1038/ismej.2017.112 Journal information: ISME Journal
http://dx.doi.org/10.1038/ismej.2017.112
https://phys.org/news/2017-08-tackle-methane-emissions-gas-guzzling-bacteria.html
Abstract Aerobic methanotrophic bacteria have evolved a specialist lifestyle dependent on consumption of methane and other short-chain carbon compounds. However, their apparent substrate specialism runs contrary to the high relative abundance of these microorganisms in dynamic environments, where the availability of methane and oxygen fluctuates. In this work, we provide in situ and ex situ evidence that verrucomicrobial methanotrophs are mixotrophs. Verrucomicrobia-dominated soil communities from an acidic geothermal field in Rotokawa, New Zealand rapidly oxidised methane and hydrogen simultaneously. We isolated and characterised a verrucomicrobial strain from these soils, Methylacidiphilum sp. RTK17.1, and showed that it constitutively oxidises molecular hydrogen. Genomic analysis confirmed that this strain encoded two [NiFe]-hydrogenases (group 1d and 3b), and biochemical assays revealed that it used hydrogen as an electron donor for aerobic respiration and carbon fixation. While the strain could grow heterotrophically on methane or autotrophically on hydrogen, it grew optimally by combining these metabolic strategies. Hydrogen oxidation was particularly important for adaptation to methane and oxygen limitation. Complementary to recent findings of hydrogenotrophic growth by Methylacidiphilum fumariolicum SolV, our findings illustrate that verrucomicrobial methanotrophs have evolved to simultaneously utilise hydrogen and methane from geothermal sources to meet energy and carbon demands where nutrient flux is dynamic. This mixotrophic lifestyle is likely to have facilitated expansion of the niche space occupied by these microorganisms, allowing them to become dominant in geothermally influenced surface soils. Genes encoding putative oxygen-tolerant uptake [NiFe]-hydrogenases were identified in all publicly available methanotroph genomes, suggesting hydrogen oxidation is a general metabolic strategy in this guild. 
Introduction Aerobic methane-oxidising bacteria (methanotrophs) consume the potent greenhouse gas methane (CH 4 ) ( Kirschke et al., 2013 ). They serve as the primary biological sink of atmospheric methane (~30 Tg annum −1 ) ( Hanson and Hanson, 1996 ) and, together with anaerobic methane-oxidising archaea, also capture the majority of biologically and geologically produced CH 4 before it enters the atmosphere ( Oremland and Culbertson, 1992 ). Relative to their global impact as greenhouse gas mitigators, aerobic methanotrophs exhibit low phylogenetic diversity and are presently limited to 26 genera in the Alphaproteobacteria and Gammaproteobacteria ( Euzéby, 1997 ), two candidate genera in the phylum Verrucomicrobia ( Op den Camp et al., 2009 ; van Teeseling et al., 2014 ), and two representatives of candidate phylum NC10 ( Ettwig et al., 2010 ; Haroon et al., 2013 ). Reflecting their aerobic methylotrophic lifestyle, methanotrophs thrive in oxic–anoxic interfaces where CH 4 fluxes are high, including peat bogs, wetlands, rice paddies, forest soils and geothermal habitats ( Singh et al., 2010 ; Knief, 2015 ). However, they also exist within soil and marine ecosystems where CH 4 and oxygen (O 2 ) are more variable ( Knief et al., 2003 ; Tavormina et al., 2010 ; Knief, 2015 ). Based on current paradigms, aerobic methanotrophs are thought to primarily grow on one-carbon (C1) compounds in the environment ( Dedysh et al., 2005 ). All species can grow by oxidising CH 4 to methanol via particulate or soluble methane monooxygenase. They subsequently oxidise methanol to carbon dioxide (CO 2 ), yielding reducing equivalents (e.g. NADH) for respiration and biosynthesis. Proteobacterial methanotrophs generate biomass by assimilating the intermediate formaldehyde via the ribulose monophosphate or serine pathways ( Hanson and Hanson, 1996 ). 
In contrast, verrucomicrobial methanotrophs oxidise methanol directly to formate ( Keltjens et al., 2014 ) and generate biomass by fixing CO 2 via the Calvin–Benson–Bassham cycle ( Khadem et al., 2011 ). While these specialist C1-based metabolisms are thought to be the primary growth strategy under optimal conditions (i.e. CH 4 and O 2 replete conditions), they would presumably be less effective in dynamic environments where CH 4 and oxidant availability are likely to fluctuate. To add to this complexity, the methane monooxygenase reaction (CH 4 +O 2 +[NAD(P)H+H + ]/QH 2 →CH 3 OH+NAD(P) + /Q+H 2 O) ( Hakemian and Rosenzweig, 2007 ) is metabolically demanding, given it requires simultaneous sources of CH 4 , endogenous reductant (NAD(P)H or quinol) and exogenous O 2 to proceed. Methanotrophs therefore must carefully allocate resources to meet carbon, energy and reductant demands ( Hanson and Hanson, 1996 ). This complex balancing act suggests that, to remain viable in environments limited for CH 4 and O 2 gases ( Knief et al., 2003 ; Tavormina et al., 2010 ), methanotrophs should be able to supplement C1 usage with other energy-yielding strategies. Recent pure culture studies have provided evidence that CH 4 -oxidising bacteria are indeed more metabolically versatile than previously thought. A minority of conventional methanotrophs can meet energy demands by oxidising the trace concentrations of CH 4 (1.8 ppmv) found in the atmosphere ( Kolb et al., 2005 ; Ho et al., 2013 ; Cai et al., 2016 ). Contrary to the long-held paradigm that methanotrophs are obligate methylotrophs, species from three alphaproteobacterial genera have been shown to grow on simple organic acids, alcohols and short-chain alkane gases ( Dedysh et al., 2005 ; Crombie and Murrell, 2014 ). 
Most recently, it has been shown that some methanotrophs are not exclusive heterotrophs: the verrucomicrobium Methylacidiphilum fumariolicum SolV can sustain chemolithoautotrophic growth on molecular hydrogen (H 2 ) through the activity of two [NiFe]-hydrogenases ( Mohammadi et al., 2016 ). Proteobacterial methanotrophs can also consume H 2 , though to date this process has only been reported as providing reductant to supplement methanotrophic growth ( Chen and Yoch, 1987 ; Shah et al., 1995 ; Hanczár et al., 2002 ). Our recent findings demonstrating a widespread distribution and diversity of hydrogenases in aerobic bacteria, specifically methanotrophs ( Greening et al., 2014a , 2014b , 2015 , 2016 ), led us to surmise that H 2 metabolism could serve a multifaceted role in the adaptation of methanotrophic bacteria to their environment. Specifically, H 2 may serve as an important electron donor for the organism to meet carbon, energy and reductant demands in response to fluctuations in CH 4 and oxidant availability. In this work, we addressed this hypothesis by conducting an interdisciplinary investigation of the role of H 2 in defining the physiology and ecology of verrucomicrobial methanotrophs. Evidence obtained from in situ field studies indicates that Verrucomicrobia simultaneously oxidised CH 4 and H 2 in geothermally heated soils in Rotokawa, New Zealand, suggesting they are mixotrophic with respect to energy metabolism. Pure culture studies on a verrucomicrobium representative isolated from this site confirmed that the microorganism grew most efficiently through a mixotrophic lifestyle and depended on H 2 consumption to acclimate to fluctuations in CH 4 and O 2 availability. Integrating these findings with genome surveys, we propose that H 2 oxidation expands the ecological niche of methanotrophs, enabling them to meet energy and biomass demands in dynamic environments where O 2 and CH 4 concentrations are variable. 
We provide evidence that, while methanotrophic bacteria are often pervasively viewed as C1 specialists, their niche space is likely broader than previously recognised. Combining heterotrophic and lithotrophic electron donors allows for a more flexible growth/survival strategy with clear ecological benefits ( Semrau et al., 2011 ). Materials and methods Environmental sampling Soil samples (~50 g) were collected every 10 cm from the surface of the Rotokawa sampling site (38°37′30.8″S, 176°11′55.3″E) to a maximum depth of 50 cm. Soil temperatures were measured in the field using a 51II single input digital thermometer (Fluke, Everett, WA, USA). The pH of the soil (1 g in 10 ml of dH 2 O) was measured upon returning to the laboratory using a model HI11310 pH probe (Hanna Instruments, Woonsocket, RI, USA). Soil gas samples were collected every 10 cm using a custom-built gas-sampling probe equipped with a 1 l gas-tight syringe (SGE Analytical Science, Melbourne, VIC, Australia). Gas samples were collected and stored at 25 °C in 50 ml Air & Gas Sampling Bags (Calibrated Instruments, McHenry, MD, USA) and were processed within 48 h on a 490 MicroGC (Agilent Technologies, Santa Clara, CA, USA) equipped with Molecular Sieve 5A with a heated injector (50 °C, back-flush at 5.10 s, column at 90 °C, 150 kPa), a PoraPak Q column with a heated injector (50 °C, no back-flush, column at 70 °C, 50 kPa) and a 5CB column with a heated injector (50 °C, no back-flush, column at 80 °C, 150 kPa). CH 4 and H 2 gas consumption by soil microbial communities was determined by incubating 1 g of soils collected from the Rotokawa sampling site (depth <10 cm) in 112 ml gas-tight serum bottles at 37 and 50 °C. The serum bottle headspaces were amended with 300 ppmv H 2 or 400 ppmv CH 4 . 
Headspace CH 4 and H 2 mixing ratios were measured with a PeakPerformer gas chromatograph (Peak Laboratories, Mountain View, CA, USA) equipped with a flame ionising detector (FID: CH 4 ) and a PP1 Gas Analyzer (Peak Laboratories, Mountain View, CA, USA) equipped with a reducing compound photometer (RCP: H 2 ). Isolation and cultivation of Methylacidiphilum sp. RTK17.1 Soil samples (1 g) from the first 10 cm of soil at the Rotokawa sampling site were inoculated into serum bottles containing 50 ml media (pH 2.5). All cultivations were performed in a V4 mineral medium as described previously ( Dunfield et al., 2007 ) but with the addition (0.2 μM) of rare earth elements lanthanum and cerium ( Pol et al., 2014 ). CH 4 (10% v/v) and CO 2 (1%) were added to an air headspace and samples were incubated at 60 °C with shaking at 150 r.p.m. CH 4 in the headspace was monitored with a PeakPerformer gas chromatograph (Peak Laboratories) equipped with an FID. Following several passages (10% v/v) into liquid media, enrichments were transferred onto solid media. Following several weeks' incubation (60 °C), single colonies were re-streaked before being transferred back into liquid media. Isolate identity was confirmed via sequencing of the 16S rRNA gene (Macrogen, South Korea) using bacterial 9f/1492r primers ( Weisburg et al., 1991 ). Bioreactor and batch cultivation Methylacidiphilum sp. RTK17.1 was cultivated to the stationary phase for subsequent hydrogenase activity and oxygen respiration measurements in a semi-continuous static-liquid fed-batch bioreactor (New Brunswick; volume 1 l, pH control 2.5, temp 50 °C, agitation 100 r.p.m.) in an artificial headspace composed of 10% CH 4 , 10% H 2 , 20% O 2 , 40% CO 2 (v/v, balance N 2 ; flow rate 60 ml min −1 ) equipped with headspace recirculation and automated sampling via a 490 MicroGC (Agilent Technologies). Gas mixtures were supplied for 2 min every hour at a rate of 60 ml min −1 . 
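The microcosm incubations report gases as headspace mixing ratios (ppmv), which can be converted into absolute amounts consumed per gram of soil with a small ideal-gas calculation. A minimal Python sketch: the 112 ml bottle volume and the 300 ppmv H 2 / 400 ppmv CH 4 amendments are taken from the Methods above, while the function names, assumed ~1 atm headspace pressure and example values are illustrative; liquid and soil volumes are neglected for simplicity.

```python
# Convert a GC-measured headspace mixing ratio (ppmv) into an absolute amount
# of gas (nmol) in a sealed serum bottle, assuming ideal-gas behaviour at
# ~1 atm. Bottle volume (112 ml) and amendments (300 ppmv H2 / 400 ppmv CH4)
# follow the Methods; soil/liquid volume is neglected for simplicity.

R = 0.082057  # gas constant, l atm K^-1 mol^-1

def headspace_nmol(ppmv, headspace_ml=112.0, temp_c=37.0, pressure_atm=1.0):
    """Amount of a trace gas in the headspace: n = pV/RT scaled by mole fraction."""
    total_mol = pressure_atm * (headspace_ml / 1000.0) / (R * (temp_c + 273.15))
    return total_mol * (ppmv * 1e-6) * 1e9  # mole fraction -> nmol

def consumed_nmol_per_g(ppmv_start, ppmv_end, soil_g=1.0, **kwargs):
    """Gas consumed between two GC time points, per gram of soil."""
    return (headspace_nmol(ppmv_start, **kwargs)
            - headspace_nmol(ppmv_end, **kwargs)) / soil_g

# A 300 ppmv H2 amendment corresponds to roughly 1.3 umol H2 per bottle at 37 C.
```

Because the bottles are sealed, the difference between two GC readings directly gives consumption; under these assumptions the 37 °C and 50 °C incubations differ only through the temperature term.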
Gas feeds for the bioreactor, batch and chemostat cultivation-based experimental work are presented as headspace compositions. For batch culture experiments, 350 ml cultures (in triplicate) were incubated in 1 l rubber-stoppered Schott bottles. The headspace of the bottles was amended with different mixing ratios of H 2 , CH 4 , O 2 and CO 2 gas, as described in figure legends. Acetylene gas was added in some experiments (4% v/v) to inhibit MMO activity as previously described ( Bédard and Knowles, 1989 ). Finally, to determine if growth was enhanced in the presence of H 2 , 20 paired cultures ( n =40) of Methylacidiphilum sp. RTK17.1 were incubated with or without 1% (v/v) H 2 in an air headspace supplemented with (v/v) 10% CH 4 and 1% CO 2 . Cultures were incubated in a custom test tube oscillator (agitation 1.2 Hz; Terratec, Hobart, TAS, Australia). Following 7 days incubation, total protein was determined by the Bradford assay ( Bradford, 1976 ). Statistical significance of observed differences of growth yields was determined using a Student’s t- test ( α =0.05). Headspace mixing ratios of H 2 and CH 4 were monitored throughout batch experiments by GC as described above. Chemostat cultivation Chemostat cultivation of Methylacidiphilum sp. RTK17.1 was performed to investigate the influence of H 2 on growth under O 2 -replete and O 2 -limiting conditions. A 1 l bioreactor (BioFlo 110; New Brunswick Scientific, Edison, NJ, USA) was used for these studies. Cultures were incubated at 50 °C and pH 2.5 with continuous stirring (800 r.p.m.). Bioreactor volume was kept constant at 0.5 l by automatic regulation of the culture level. V4 mineral media was supplied at a constant flow rate of 10 ml h −1 ( D =0.02 h −1 ). Dissolved O 2 was monitored using an InPro 6810 Polarographic Oxygen Sensor (Mettler-Toledo, Columbus, OH, USA). 
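The growth-yield comparison described above (20 paired cultures with and without 1% H 2 , total protein by Bradford, Student's t-test at α = 0.05) can be sketched with the standard library alone. Given the paired design, a paired-sample form of the test is assumed here; the protein values are invented illustrative numbers, not data from the study.

```python
# Paired Student's t-test on protein yields (Bradford) for cultures grown with
# and without H2. Stdlib-only sketch; the yield values are illustrative.
from statistics import mean, stdev

def paired_t_statistic(with_h2, without_h2):
    """t = mean(d) / (sd(d) / sqrt(n)) for paired differences d."""
    diffs = [a - b for a, b in zip(with_h2, without_h2)]
    return mean(diffs) / (stdev(diffs) / len(diffs) ** 0.5)

# Hypothetical total-protein yields (mg) for six culture pairs:
plus_h2 = [0.52, 0.48, 0.55, 0.50, 0.47, 0.53]
minus_h2 = [0.41, 0.39, 0.44, 0.42, 0.38, 0.40]

t = paired_t_statistic(plus_h2, minus_h2)
# With n - 1 = 5 degrees of freedom, |t| > 2.571 rejects the null at alpha = 0.05.
```

In practice a library routine such as `scipy.stats.ttest_rel` would also return the p-value directly; the hand-rolled statistic above just makes the arithmetic explicit.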
Custom gas mixtures were prepared in a compressed gas cylinder and supplied to the chemostat at a rate of 10 ml min −1 using a mass flow controller (El-flow, Bronkhorst, Netherlands). Gas mixtures contained approximately (v/v) 3% CH 4 and 26% CO 2 for all experiments. For O 2 -replete and O 2 -limiting conditions, influent O 2 was supplied at (v/v) 14.1% and 3.5%, respectively. Within respiring cultures, these values corresponded to 57.5% and 0.17% oxygen saturation. High, medium and low H 2 experiments consisted of (v/v) 1.9%, 0.7% and 0.4% H 2 additions. The balance of all gas mixtures was made up with N 2 . Cell density in liquid samples was monitored by measuring turbidity at 600 nm using an Ultrospec 10 cell density meter (Amersham Bioscience, UK). One unit of OD 600 was found to be equivalent to 0.43 g l −1 cell dry weight for Methylacidiphilum sp. RTK17.1. After achieving a steady-state condition as determined by OD 600 , influent and effluent gas concentrations were monitored over several days using a 490 MicroGC (Agilent Technologies). Biomass cell dry weight was used to calculate growth rate and specific gas consumption rate. Whole-cell biochemical assays Hydrogenase activity of Methylacidiphilum sp. RTK17.1 was measured in stationary-phase cultures harvested from the bioreactor. For amperometric measurements, whole cells were concentrated 5-, 10-, 20- and 30-fold by centrifugation followed by resuspension in V4 mineral medium (pH 3.0). Rate of H 2 oxidation was measured at 50 °C using an H 2 -MR microsensor (Unisense, Denmark) as previously described ( Berney et al., 2014b ; Greening et al., 2015 ). For colourimetric assays, 500 ml culture was harvested by centrifugation (15 min, 5000 × g , 4 °C) and treated as previously described ( Greening et al., 2014a , 2015 ) to prepare crude, cytosolic and membrane fractions for analysis. 
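The chemostat bookkeeping described above (OD 600 converted to cell dry weight via the 0.43 g l −1 factor, μ = D at steady state, and specific gas consumption from influent/effluent mixing ratios) reduces to a few lines. A sketch under stated assumptions: the conversion factor, 10 ml h −1 feed, 0.5 l working volume and 10 ml min −1 gas flow are the values given in the Methods; ideal-gas behaviour at ~1 atm is assumed and the example mixing ratios are illustrative.

```python
# Chemostat bookkeeping: dilution rate, biomass from OD600 and specific gas
# consumption. Constants (0.43 g/l per OD unit, 10 ml/h feed, 0.5 l volume,
# 10 ml/min gas flow) follow the Methods; the example mixing ratios are
# illustrative. Ideal-gas behaviour at ~1 atm is assumed.

R = 0.082057  # gas constant, l atm K^-1 mol^-1

def dilution_rate(feed_ml_per_h, volume_l):
    """At steady state, the specific growth rate mu equals D = F/V (h^-1)."""
    return (feed_ml_per_h / 1000.0) / volume_l

def biomass_g(od600, volume_l, factor_g_per_l_od=0.43):
    """Cell dry weight in the vessel, via the OD600 conversion factor."""
    return od600 * factor_g_per_l_od * volume_l

def specific_uptake(frac_in, frac_out, gas_flow_ml_min, biomass, temp_c=50.0):
    """Gas consumed, mmol h^-1 per g cell dry weight."""
    mol_per_min = (frac_in - frac_out) * (gas_flow_ml_min / 1000.0) \
                  / (R * (temp_c + 273.15))
    return mol_per_min * 60.0 * 1000.0 / biomass

D = dilution_rate(10, 0.5)  # 0.02 h^-1, matching the stated value
q_ch4 = specific_uptake(0.03, 0.02, 10, biomass_g(1.0, 0.5))  # illustrative
```

Note that μ = D only holds once the culture has reached steady state, which is why gas balances were taken after OD 600 had stabilised.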
To test for hydrogenase activity, samples (20 μg protein) from each cell fraction were incubated with 1 ml of 50 mM potassium phosphate buffer (pH 7.0) and 50 μM benzyl viologen for 8 h in an anaerobic chamber (5% H 2 , 10% CO 2 , 85% N 2 (v/v)). Debris was removed by centrifugation (15 min, 10 000 × g , 4 °C) and the absorbance of the supernatants was read at 604 nm in a Jenway 6300 spectrophotometer (Cole Palmer, UK). O 2 consumption experiments were performed on cell suspensions of Methylacidiphilum sp. RTK17.1 to determine the influence of endogenous glycogen catabolism ( Khadem et al., 2012a ) on O 2 -dependent hydrogenase measurements. For these experiments, 2 ml cells (OD 600 1.0) were added to a Clarke-type oxygen electrode and incubated at 50 °C for up to 12 min without the addition of exogenous energy sources. Cell suspensions were treated with 1 μM of the protonophore carbonyl cyanide m-chlorophenyl hydrazine (CCCP), 1 mM iodoacetamide (an inhibitor of glycolysis) and 1 mM potassium cyanide (KCN) to determine whether observed rates of O 2 consumption were a consequence of glycogen catabolism. An oxygen solubility value of 220 nmol ml −1 was used for calculations. Values were expressed as nmol O 2 min −1 (mg protein) −1 . Protein was determined from lysed cell pellets using the BCA assay (Thermo Fisher Scientific, Waltham, MA, USA), with bovine serum albumin as a standard. CO 2 fixation was measured by incubating cultures with 14 C-labelled HCO 3 − (as CO 2 at medium pH of 2.3). Triplicate cultures were initially grown at 50 °C with a headspace of CH 4 (20%), CO 2 (10%), H 2 (10%) in air. At late exponential stage growth, the headspaces of cultures (and heat-killed controls) were replaced by sparging with N 2 (10 min) and then amended with H 2 (8%), CO 2 (8%) and O 2 (1%). 15 μCi of 14 C-HCO 3 − (51 mCi mmol −1 ; American Radiolabeled Chemical, St Louis, MO, USA) were added to each culture. 
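Two unit conversions underpin the whole-cell assays just described: turning an electrode trace into a protein-normalised respiration rate (via the 220 nmol ml −1 O 2 solubility value) and turning scintillation counts into CO 2 fixed (via the 51 mCi mmol −1 specific activity of the added bicarbonate). A sketch with illustrative inputs; the chamber volume, protein amount and DPM values are made up, 1 mCi = 2.22 × 10 9 DPM, and the isotope conversion is simplified in that it ignores dilution of the label by the unlabelled CO 2 pool (the study used the published approach of Urschel et al., 2015).

```python
# (1) Clarke-type electrode: O2 consumption rate, nmol O2 min^-1 (mg protein)^-1,
#     using the solubility value from the Methods (220 nmol ml^-1 at air
#     saturation).
# (2) 14C assay: disintegrations per minute (DPM) -> nmol CO2 fixed, via the
#     stock specific activity (51 mCi mmol^-1). Simplified: ignores isotope
#     dilution by the unlabelled CO2 pool.
# Chamber volume, protein amount and DPM inputs below are illustrative.

O2_SOLUBILITY = 220.0   # nmol O2 per ml at air saturation
DPM_PER_MCI = 2.22e9    # disintegrations min^-1 per millicurie

def o2_rate(sat_start, sat_end, minutes, chamber_ml, protein_mg):
    """Respiration rate from a linear segment of the electrode trace."""
    nmol = (sat_start - sat_end) * O2_SOLUBILITY * chamber_ml
    return nmol / minutes / protein_mg

def nmol_co2_fixed(dpm, specific_activity_mci_per_mmol=51.0):
    """Counted DPM in washed biomass -> nmol CO2, at the stock specific activity."""
    mmol = dpm / (specific_activity_mci_per_mmol * DPM_PER_MCI)
    return mmol * 1e6  # mmol -> nmol

rate = o2_rate(1.0, 0.7, 5.0, 2.0, 0.5)  # air saturation 100% -> 70% over 5 min
fixed = nmol_co2_fixed(1.0e6)            # ~8.8 nmol CO2 for 1e6 DPM
```

The same solubility constant is what allows saturation percentages from the polarographic sensor to be expressed in absolute molar terms.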
Given that the p K a for HCO 3 − at 50 °C is ~6.8 ( Amend and Shock, 2001 ), it was assumed that all added radiolabelled substrate equilibrated with unlabelled CO 2 pools immediately. Subsamples (1 ml) were harvested by centrifugation at regular time points, washed with sterile medium and subjected to liquid scintillation counting using Cytoscint cocktail as previously described ( Urschel et al., 2015 ). Disintegrations per minute were converted to rates of CO 2 fixed using previously described approaches ( Urschel et al., 2015 ). Details of nucleic acid extraction, amplification, genome sequencing, environmental quantitative PCR and soil microbial community composition determination methodologies are presented in the Supplementary Information . Results and discussion Verrucomicrobia-dominated surface soils serve as a sink of geothermally derived H 2 and CH 4 in Rotokawa geothermal field We performed a geochemical, molecular and biochemical survey of CH 4 and H 2 metabolism in an acidic geothermal soil in Rotokawa, New Zealand. The Rotokawa geothermal field is a predominately steam-driven system dominated by acidic and sulphurous springs and heated soil features. We selected a geothermally heated and acidic soil where previous studies have indicated methanotrophic activity (Sharp et al. , 2014). Substantial vertical gradients in temperature, pH and mixing ratios of CH 4 , H 2 and O 2 were observed in the soil profile ( Figure 1a ). Consistent with the geothermal activity at the site, high soil mixing ratios of CH 4 (47 000 ppmv) and H 2 (280 ppmv) were detectable at the deepest soil depths sampled. The levels of both gases decreased in the upper 30 cm of soil and, in the case of H 2 , dropped towards atmospheric levels by 10 cm depth ( Figure 1b ). These sharp decreases suggested that there were active methanotrophs and hydrogenotrophs in the oxic zone of the soil that consume most geothermally derived gas before it is emitted into the atmosphere. 
Indeed, microcosm incubations containing surface soils and associated communities rapidly consumed H 2 and CH 4 introduced into ambient air headspaces. Rates of H 2 oxidation exceeded those of CH 4 , suggesting that H 2 serves as a major energy source for this geothermal soil community ( Figures 1c and d ). Figure 1 Geochemical, biochemical and molecular profile of CH 4 and H 2 oxidation at a geothermal field in Rotokawa, New Zealand. ( a ) Temperature and pH of the soils at different depths. ( b ) Soil mixing ratios of CH 4 , H 2 and O 2 at different depths. ( c ) Oxidation of CH 4 by surface soils. ( d ) Oxidation of H 2 by surface soils. In both ( c ) and ( d ), soil samples of 1 g were collected from the first 10 cm of soil from the profile and incubated in serum vials containing CH 4 - or H 2 -supplemented ambient air headspaces. The average and standard deviation of triplicate samples are shown. ( e ) Community structure of the study site at different soil depths. Illumina 16S rRNA gene sequencing was performed on total genomic DNA extracted from samples taken at 10–50 cm soil depth. Non-rarefied abundance results (%) are shown for all OTUs (>100 reads) from 130 289 total sequence reads (with an average of 26 058 per sample depth). Consistent with a methanotrophic lifestyle, all verrucomicrobial OTUs were further classified into the family Methylacidiphilaceae. ( f ) Abundance of genes encoding Verrucomicrobia-type particulate methane monooxygenase ( pmoA ) and aerobic uptake hydrogenase ( hyaB ) plotted as a function of soil depth. Error bars represent the standard deviation of triplicate measurements on each extract. Differences in the copy number between pmoA and hyaB are attributable to the multiple isoforms of pmoA encoded in Methylacidiphilum spp. genomes. To infer the microorganisms responsible for CH 4 and H 2 uptake, we determined the microbial community structure of the soil profile. 
Consistent with findings in other acidic soil ecosystems ( Golyshina, 2011 ; Sharp et al., 2014 ; Lee et al., 2016 ), the euryarchaeotal order Thermoplasmatales was dominant at all depths. Methanotrophic verrucomicrobial genera, specifically Methylacidiphilum spp. and Methylacidimicrobium spp., were the dominant bacterial OTUs in surface soils and accounted for 47% of all bacterial 16S rRNA gene sequences in the soil profile ( Figure 1e ). Bacteria from these genera have been previously isolated in acidic geothermal soils in New Zealand ( Dunfield et al., 2007 ; Sharp et al., 2014 ), Kamchatka ( Islam et al., 2008 ) and Italy ( Pol et al., 2007 ; van Teeseling et al., 2014 ). As the only known acidophilic methanotrophs ( Op den Camp et al., 2009 ), the verrucomicrobial phylotypes were probably solely responsible for CH 4 consumption in this ecosystem. Moreover, given putative uptake [NiFe]-hydrogenases have been detected in the genomes of Methylacidiphilales but not in Thermoplasmatales ( Hou et al., 2008 ; Khadem et al., 2012b ; Greening et al., 2016 ), it is likely that the Verrucomicrobia detected in these soils serve as major sinks of H 2 in this ecosystem. To test this possibility, we designed PCR primers to detect the presence of genes encoding the large subunits of the three particulate methane monooxygenases ( pmoA ) and a single oxygen-tolerant uptake hydrogenase ( hyaB ) encoded in the genome of Methylacidiphilum infernorum V4 ( Hou et al., 2008 ) ( Supplementary Table S1 ). These primers were applied in qPCRs on DNA extracts from the Rotokawa soil profile. Both the hydrogenase and methane monooxygenase genes were detected at all depths, with the most abundant templates detected in the top 10 cm of soil ( Figure 1f ), corresponding to the zones with the highest relative abundance of Verrucomicrobia-affiliated sequences and where the lowest CH 4 and H 2 soil gas concentrations were detected ( Figure 1b ). 
A verrucomicrobial strain isolated from Rotokawa constitutively oxidises CH 4 and H 2 gas To gain insight into the metabolic strategies that Verrucomicrobia use to dominate bacterial assemblages in geothermally heated acidic soil ecosystems, we isolated a thermotolerant methanotroph from surface soils. The strain, Methylacidiphilum sp. RTK17.1 ( Supplementary Table S2 ), grew optimally at pH 2.5, 50 °C ( T max 60 °C) and shared 99% 16S rRNA gene sequence identity with Methylacidiphilum infernorum V4 ( Dunfield et al., 2007 ). Bacteriological characterisation confirmed that the strain, in common with other verrucomicrobial methanotrophs ( Khadem et al., 2012a , 2011 ), oxidised CH 4 , fixed CO 2 and accumulated glycogen. In addition, cultures rapidly consumed exogenous H 2 ( Supplementary Figures S2 and S3 ). Real-time amperometric measurements confirmed that the strain oxidised H 2 under oxic conditions at rates proportional to increases in cell density ( Figure 2a ). H 2 oxidation occurred in all batch culture conditions tested, including when CH 4 was absent ( Supplementary Figure S1A ), when CH 4 was in excess ( Supplementary Figure S1C ), and following inhibition of CH 4 oxidation with acetylene ( Supplementary Figure S2 ). This suggests that H 2 and CH 4 are oxidised independently serving to energise the respiratory chain through the reduction of the quinone pool. Moreover, this expands the role of hydrogenases in aerobic methanotrophs beyond their previously suggested role of providing reductant for pMMO ( Shah et al., 1995 ; Hanczár et al., 2002 ). The observation that RTK17.1 can constitutively oxidise H 2 and CH 4 parallels results from the soil study showing that both H 2 and CH 4 are simultaneously oxidised ( Figure 1 ). 
Considering that Verrucomicrobia are dominant among taxa putatively capable of oxidising H 2 or CH 4 , this provides further evidence that verrucomicrobial methanotrophs adopt a mixotrophic lifestyle with respect to their energy metabolism. Figure 2 H 2 oxidation drives aerobic respiration and CO 2 fixation in Methylacidiphilum sp. RTK17.1. ( a ) Real-time oxidation of H 2 by bioreactor-cultivated whole cells. Rates of H 2 uptake were measured amperometrically using a H 2 microsensor. Density dependence and heat sensitivity (HK) of the process are shown. ( b ) Localisation of hydrogenase activity in cell membranes. Activity was measured colourimetrically by incubating cell fractions in an anaerobic chamber in the presence of H 2 and the artificial electron acceptor/redox dye benzyl viologen. The protein concentration-normalised absorbance of activity in cell lysates (L), cytosols (C) and membranes (M) are shown. ( c ) Aerobic respiratory dependence of H 2 uptake in whole cells. Real-time traces in untreated cells and nigericin-treated cells are shown. The relative amounts of H 2 and O 2 added at specific time points are shown. ( d ) Rates of hydrogen oxidation of untreated, nigericin-treated and valinomycin-treated cells. For the uncoupler-treated cultures, the initial ( x ), O 2 -limiting ( y ) and O 2 -restored ( z ) rates of H 2 oxidation are shown, which correspond to the rates highlighted in panel ( c ). Endogenous glycogen catabolism likely contributed to oxygen limitation ( y ) observed in nigericin-treated cells ( Supplementary Figure S5B ). ( e ) CO 2 fixation by batch-cultivated whole cells cultivated under microoxic growth conditions with H 2 and O 2 as the sole reductant and oxidant ( Supplementary Figure S2A ). 14 C-labelled CO 2 is incorporated into biomass in live but not heat-killed (HK) cultures. CO 2 fixed per mol of biomass in live and heat-killed cells is presented as a function of time. 
We sequenced the genome of Methylacidiphilum sp. RTK17.1 to obtain further insights into the potential functionality of this taxon ( Supplementary Table S2 ). Genes encoding key enzymes and pathways for CH 4 oxidation to CO 2 , CO 2 fixation through the Calvin–Benson–Bassham pathway, and aerobic respiration ( Figure 3 ) were highly conserved with those identified in other Methylacidiphilum strains ( Hou et al., 2008 ; Khadem et al., 2012b ; Erikstad and Birkeland, 2015 ). We also detected two [NiFe]-hydrogenase-encoding gene clusters in the genome ( Supplementary Figure S3 ) and confirmed their expression during aerobic growth with CH 4 and H 2 by RT-PCR ( Supplementary Figure S4 ). The gene clusters were classified as groups 1d ( hyaABC ) and 3b ( hyhBGSL ) [NiFe]-hydrogenases based on phylogenetic affiliation with biochemically characterised enzymes ( Supplementary Figure S3 ; Greening et al., 2016 ; Søndergaard et al., 2016 ). Biochemically characterised group 1d [NiFe]-hydrogenases are H 2 -uptake multimeric proteins that are membrane-bound via their cytochrome b subunit ( hyaC ) and function by transferring electrons into the respiratory chain via the quinone pool ( Fritsch et al., 2011 ). Consistent with the observed activity of the [NiFe]-hydrogenase in the presence of O 2 ( Figure 2a ), enzymes of this class are predicted to be O 2 -tolerant due to the presence of a novel [4Fe3S] cluster that protects the O 2 -sensitive active site from oxidative damage ( Fritsch et al., 2011 ; Shomura et al., 2011 ). Indeed, the six cysteine residues required to ligate such a cluster were conserved in the deduced RTK17.1 HyaB protein sequence. 
In comparison, biochemically characterised group 3b hydrogenases are reversible cytosolic enzymes that are relatively O 2 -sensitive ( Kwan et al., 2015 ); they directly couple NAD(P)H oxidation to H 2 formation during fermentation ( Berney et al., 2014a ) and, in some cases, H 2 oxidation by these enzymes supports CO 2 fixation through the production of reduced electron carriers ( Yoon et al., 1996 ). The RTK17.1 [NiFe]-hydrogenase combination differs from Methylacidiphilum fumariolicum SolV, where group 1h/5 ( hhyLH ; a putative high-affinity H 2 uptake hydrogenase) and group 1d [NiFe]-hydrogenases (annotated as hupSLZ ) were reported ( Mohammadi et al., 2016 ), with our previous survey ( Greening et al., 2016 ) also showing SolV encodes a group 3b enzyme ( Supplementary Figure S3 ). Figure 3 Proposed model of methane (CH 4 ) and hydrogen (H 2 ) oxidation in Methylacidiphilum sp. RTK17.1. During mixotrophic growth, the oxidation of both H 2 and CH 4 yields reducing equivalents in the form of reduced quinones (QH 2 ). A large proton-motive force is generated and sufficient ATP is produced for growth via an H + -translocating F 1 F o -ATP synthase. Some of the quinol generated through H 2 oxidation provides the electrons necessary for pMMO catalysis. Following CH 4 oxidation by pMMO, ensuing reactions catalysed by an XoxF-type methanol dehydrogenase (MeDH) and formate dehydrogenase (FDH) contribute additional reductant (cyt c and NADH) into the respiratory chain for ATP production and growth ( Keltjens et al., 2014 ). NADH reduced through the actions of the formate dehydrogenase and H 2 -dependent group 3b [NiFe]-hydrogenase is used to support CO 2 fixation through the Calvin–Benson–Bassham cycle. Respiratory complexes I and II are not shown but are encoded in the genome of Methylacidiphilum sp. RTK17.1 ( Supplementary Table S1 ). 
H 2 oxidation supports aerobic respiration and CO 2 fixation in the verrucomicrobial isolate The observation that RTK17.1 encodes and utilises [NiFe]-hydrogenases prompted biochemical studies to investigate the role of H 2 in the metabolism of this bacterium. Biochemical assays targeting the group 1d [NiFe]-hydrogenase demonstrated that it is a membrane-bound uptake hydrogenase linked to the aerobic respiratory chain, consistent with our genome-based predictions. Fractionation experiments confirmed the activity was membrane-localised, as shown by the 31-fold increase in activity in membranes when compared to the cytosolic fraction ( Figure 2b ). We next tested the effect of the ionophores nigericin and valinomycin on rates of H 2 oxidation in whole cells (in the absence of CH 4 ). These compounds dissipate components of the electrochemical gradient used for ATP synthesis (pH and charge gradient, respectively) and the cellular response is to increase respiration to replenish the electrochemical gradient ( Cook et al., 2014 ) in a phenomenon known as uncoupling. H 2 oxidation increased upon treatment with these ionophores ( Figures 2c and d ), showing hydrogenase activity behaves as expected from a component of the energy-conserving respiratory chain. This uncoupled activity rapidly ceased, due to O 2 consumption by the cells suspended in the sealed chamber, but could be restored by further supplementation with O 2 . These results show hydrogenase is a bona fide component of this microorganism’s respiratory chain and is coupled to the activity of terminal cytochrome oxidases. Under these conditions, the onset of O 2 -limitation was likely exacerbated by the catabolism of endogenous glycogen reserves ( Supplementary Figure S5 ). 
Collectively, these findings demonstrate that this group 1d [NiFe]-hydrogenase is a membrane-bound, respiratory-linked, O 2 -tolerant/dependent enzyme that drives ATP synthesis as has been observed in other aerobic hydrogenotrophs. To test whether H 2 oxidation coupled to O 2 reduction could support CO 2 fixation in RTK17.1, we transferred mixotrophically grown, log phase cultures into a new microoxic headspace (O 2 1% v/v) in which H 2 (8% v/v of the headspace) was present as the sole exogenous electron donor and CO 2 (8% v/v) as the sole carbon source. Trace 14 CO 2 (0.1% of total CO 2 supplied) was added and the amount fixed into biomass sampled over time was determined by measuring disintegrations per minute (DPM) via liquid scintillation counting. We observed systematic increases in DPMs associated with cells sampled over a 20 h period relative to controls, indicating that CO 2 was rapidly incorporated into biomass in a time-dependent manner. The number of DPMs associated with cells after 20 h of incubation was 200-fold greater in live than heat-killed cells ( Figure 2e ), showing that biological CO 2 fixation occurs in cultures supplied with H 2 as the sole reductant and O 2 as the sole oxidant. This activity was not observed in the absence of exogenous H 2 , indicating that H 2 serves as the source of reductant for CO 2 fixation under these conditions. H 2 oxidation supports adaptation of verrucomicrobial methanotrophs to CH 4 and O 2 limitation The finding that Methylacidiphilum sp. RTK17.1 couples H 2 oxidation to aerobic respiration and carbon fixation suggests that it can grow chemolithoautotrophically. Consistent with this hypothesis, we observed a small but significant increase in biomass in cultures grown under microoxic conditions when H 2 , CO 2 and O 2 (1% headspace concentration) were supplied as the sole electron donor, carbon source and electron acceptor, respectively ( Supplementary Figure S2A ). 
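The 14 C assay described above converts scintillation counts into fixed carbon by assuming the tracer equilibrates fully with the unlabelled CO 2 pool. A minimal sketch of that bookkeeping, with purely illustrative numbers (none are measurements from the study):

```python
def co2_fixed_umol(dpm_in_biomass, dpm_added, co2_pool_umol):
    """Scale the fraction of 14C label recovered in biomass to the total CO2 pool.

    Assumes the 14CO2 tracer (here, 0.1% of the total CO2 supplied) mixes
    completely with the bulk CO2 and that no label is lost to degassing.
    """
    return dpm_in_biomass / dpm_added * co2_pool_umol

# Illustrative: 5,000 DPM recovered out of 1,000,000 DPM added, 100 umol CO2 pool
fixed = co2_fixed_umol(5_000, 1_000_000, 100.0)
print(fixed)  # 0.5 umol CO2 fixed into biomass
```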
This biomass increase was concomitant with increased amounts of CO 2 fixed ( Figure 2e ), and the amount of carbon fixed per cell over this time period (40 to 80 fmol cell −1 ) was consistent with biomass yields from other studies ( Maestrini et al., 2000 ). However, the observed growth rates (0.005 h −1 ) were substantially lower than those observed when RTK17.1 was supplied with CH 4 , CO 2 and H 2 (0.037 h −1 ) and when Methylacidiphilum fumariolicum SolV was grown autotrophically under similar conditions (0.047 h −1 ) ( Mohammadi et al., 2016 ). We also observed that autotrophic growth was only sustained when RTK17.1 was incubated under microoxic conditions (1% O 2 ) rather than in an oxic (20% O 2 ) headspace. We speculate that, under such microoxic conditions, sufficient O 2 is available to drive hydrogenotrophic aerobic respiration through activity of the group 1d [NiFe]-hydrogenase. Simultaneously, it is likely that the O 2 -sensitive group 3b [NiFe]-hydrogenase can remain active and is able to supply reducing equivalents required for CO 2 fixation. In support of this notion, H 2 oxidation sustained energy-conservation via the group 1d hydrogenase of non-growing CH 4 -limited cultures in the presence of ambient O 2 ( Supplementary Figure S2C ) and enhanced growth yields in CH 4 -replete cultures ( Supplementary Figure S2D ). To better understand the role of H 2 and CH 4 oxidation during mixotrophic growth, we compared growth and gas consumption kinetics of the cells cultivated in a chemostat under six different conditions ( Table 1 ). We observed that H 2 addition into the feedgas of Methylacidiphilum sp. RTK17.1 increased growth yields under O 2 -replete and O 2 -limiting conditions. Whereas CH 4 oxidation predominated under O 2 -replete conditions, the specific consumption rate of H 2 increased 80-fold and exceeded rates of CH 4 oxidation under O 2 -limiting conditions. In combination, these results show that Methylacidiphilum sp. 
RTK17.1 grows mixotrophically and modulates rates of H 2 and CH 4 consumption in response to the availability of O 2 in order to balance energy-generation and carbon fixation. Table 1 H 2 oxidation by Methylacidiphilum sp. RTK17.1 during chemostat cultivation Full size table H 2 oxidation may be a general ecological strategy for verrucomicrobial and proteobacterial methanotrophs In this work, we demonstrated that a verrucomicrobial methanotroph adopts a mixotrophic lifestyle both in situ and ex situ . The environmental isolate Methylacidiphilum sp. RTK17.1 sustains aerobic respiration and carbon fixation by using organic (CH 4 ) and inorganic (H 2 ) electron donors either in concert or separately depending on substrate availability. Through the dual use of both electron donors, the bacterium is able to more flexibly adjust its metabolism to meet energy and carbon demands in response to simulated environmental change. A model of how CH 4 and H 2 metabolism is integrated into the physiology of this microorganism, based in part on genomic information ( Supplementary Table S2 ), is shown in Figure 3 . Integrating our genomic, physiological and biochemical findings, we conclude that the group 1d [NiFe]-hydrogenase is a membrane-bound uptake hydrogenase that is directly linked to the aerobic respiratory chain and supplements the quinone pool to power the methane monooxygenase reaction or to feed directly into Complex III. The organism is also capable of using H 2 as a reductant to support CO 2 fixation through the Calvin–Benson–Bassham pathway, likely through the cytosolic NAD(P)-coupled group 3b [NiFe]-hydrogenase. The [NiFe]-hydrogenase combination (group 1d and group 3b) in RTK17.1 only supports weak autotrophic growth, but provides multiple layers of support for a mixotrophic lifestyle. In addition to supporting growth and survival during periods of CH 4 limitation, our data show that H 2 is the preferred electron donor under O 2 -limiting conditions. 
Under these conditions, rates of H 2 consumption increased by 77-fold and exceeded observed rates of CH 4 oxidation ( Table 1 ). This is likely to be a consequence of two factors. Firstly, some hydrogenases such as the group 3b [NiFe]-hydrogenase are inhibited at high O 2 concentrations ( Kwan et al., 2015 ). Secondly, methanotrophy is more resource-intensive than canonical aerobic hydrogenotrophy, given it requires O 2 both as a substrate for methane monooxygenase and as the terminal electron acceptor for respiration ( Hakemian and Rosenzweig, 2007 ). Comparison of our independent findings with those made by Mohammadi et al. (2016) suggests that verrucomicrobial methanotrophs have evolved a range of strategies to integrate H 2 metabolism into their physiology. Both Methylacidiphilum sp. RTK17.1 and Methylacidiphilum fumariolicum SolV are capable of sustaining chemolithoautotrophic growth on H 2 /CO 2 under microoxic conditions. However, the strains grow at drastically different rates of 0.005 and 0.047 h −1 ( Mohammadi et al., 2016 ), respectively, under optimal conditions. These differences may reflect that, while both strains possess group 1d ( hyaABC/hupLSZ ) and group 3b ( hyhBGSL ) hydrogenases, SolV has also acquired a group 1 h enzyme ( hhyLH ) ( Greening et al., 2016 ) with surprisingly fast whole-cell kinetics ( Mohammadi et al., 2016 ). It is possible that, with this enhanced hydrogenase suite, SolV may be able to more efficiently partition electrons derived from H 2 oxidation between respiration and carbon fixation. In addition to supporting hydrogenotrophic growth, both organisms also modulate hydrogenase expression and H 2 oxidation rates in response to simulated environmental change, such as CH 4 and O 2 availability ( Mohammadi et al., 2016 ). 
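The specific growth rates quoted above translate directly into doubling times via t_d = ln 2 / µ, which makes the gap between autotrophic and mixotrophic growth concrete:

```python
import math

def doubling_time_h(mu_per_h):
    """Doubling time (h) from a specific growth rate mu (h^-1)."""
    return math.log(2) / mu_per_h

# Specific growth rates reported in the text
rates = {
    "RTK17.1, H2/CO2 autotrophic": 0.005,
    "RTK17.1, CH4/CO2/H2 mixotrophic": 0.037,
    "SolV, autotrophic (Mohammadi et al., 2016)": 0.047,
}
for label, mu in rates.items():
    print(f"{label}: {doubling_time_h(mu):.1f} h")
```

At µ = 0.005 h −1 the population doubles only every ~139 h, versus ~19 h for mixotrophic RTK17.1 and ~15 h for autotrophic SolV, underscoring how weak purely hydrogenotrophic growth is in this strain.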
While the physiological significance of this regulation was not explored in SolV, our studies inferred that H 2 co-oxidation with CH 4 enhanced yields during CH 4 surplus and sustained survival during CH 4 limitation in RTK17.1. Further differences between the strains are reflected in the regulatory profile, with the group 1d enzyme constitutively expressed in RTK17.1, but repressed in favour of the group 1 h enzyme under oxic conditions in SolV ( Mohammadi et al., 2016 ). Overall, our findings suggest that SolV may fulfil a similar ecological niche to classical Knallgas bacteria (e.g. Ralstonia eutropha ), switching between efficient heterotrophic and autotrophic growth dependent on energy availability. In contrast, H 2 metabolism appears to be more important for optimising growth and survival of RTK17.1 in response to energy and O 2 availability. In this regard, this organism’s metabolism more closely resembles the mixotrophic strategy employed by Mycobacterium smegmatis ( Berney et al., 2014a ; Greening et al., 2014b ). Future studies would benefit from side-by-side comparisons of these strains under equivalent conditions and further exploration of the physiological role and biochemical features of the group 3b and group 1 h [NiFe]-hydrogenase enzymes. More generally, we predict that H 2 oxidation is likely to support the majority of aerobic methanotrophic bacteria. Whereas only a few methanotrophic genera appear to be capable of heterotrophic generalism ( Crombie and Murrell, 2014 ), genomic surveys ( Peters et al., 2015 ; Greening et al., 2016 ) show that all 31 publicly available aerobic methanotrophic genomes harbour the capacity to metabolise H 2 ( Supplementary Figure S3 ). As with SolV ( Mohammadi et al., 2016 ) and RTK17.1, most of the surveyed methanotroph genomes that were found to encode for [NiFe]-hydrogenases have been shown to support aerobic respiration (groups 1d, 1h, 2a) and carbon fixation (groups 3b, 3d, 1 h) ( Greening et al., 2016 ). 
Reports showing H 2 oxidation by several proteobacterial strains further support the classification of these enzymes as uptake hydrogenases ( Chen and Yoch, 1987 ; Shah et al., 1995 ; Hanczár et al., 2002 ). H 2 is likely to be a particularly attractive energy source for methanotrophs because of its relative ubiquity when compared to C1 compounds. H 2 is biologically produced by diverse organisms across the three domains of life as a result of fermentation, photobiological processes and nitrogen fixation ( Peters et al., 2013 ; Schwartz et al., 2013 ; Poudel et al., 2016 ). Moreover, verrucomicrobial and proteobacterial methanotrophs harbouring the recently described group 1 h [NiFe]-hydrogenases ( Greening et al., 2014a , 2015 ) may be capable of scavenging atmospheric H 2 to survive CH 4 starvation. Considering these observations in concert, it seems likely that hydrogenases in aerobic methanotrophs function to supplement the energetic and reductant requirements in environments where CH 4 and O 2 gases are limiting or variable. Thus, while methanotrophic bacteria are often viewed as C1 specialists, we propose that, via the utilisation of hydrogenases as part of a mixotrophic strategy, the niche space of methanotrophs is much broader than previously recognised. Combining heterotrophic and lithotrophic electron donors allows for a more flexible growth/survival strategy, with clear ecological benefits ( Semrau et al., 2011 ). We therefore predict that most methanotrophs will be able to use H 2 to support either autotrophic growth, mixotrophic growth or long-term persistence/maintenance. Finally, our geochemical and microbial community diversity investigation of the Rotokawa geothermal field provides ecological support for our assertion that the metabolic flexibility of methanotrophs enhances niche expansion in situ . 
We provide genetic and biochemical evidence that methanotrophic Verrucomicrobia inhabiting the near-surface soils co-metabolised CH 4 and H 2 gas ( Figure 1 ) and in doing so adopted a clear mixotrophic strategy. In acidic geothermal soils, we demonstrated that verrucomicrobial methanotrophs have grown to be the dominant bacterial taxon by simultaneously consuming gases primarily of geothermal and atmospheric origin, that is, CH 4 and H 2 as energy sources, respectively, CO 2 as a carbon source and O 2 as oxidant. Their metabolic flexibility also ensures resilience to temporal and spatial variations in the availability of the key substrates required for CH 4 oxidation via the monooxygenase reaction. More generally, the prevalent narrative that methanotrophic bacteria are methylotrophic specialists is based on studies under optimal growth conditions and ignores the requirement of these organisms to adapt to environmental variations, which demands a certain level of metabolic versatility. Intimate evolutionary and ecological interactions are likely to have selected for a spectrum of different lifestyles across methanotrophic lineages, ranging from strict C1 specialism to broad substrate generalism, depending on the environment. However, based on the presence of [NiFe]-hydrogenase in numerous methanotroph genomes ( Supplementary Figure S3 ) and the data presented here, we contend it is likely that most methanotrophs depend on H 2 oxidation to some extent to support either growth and/or survival. This finding has broad implications for future investigations on the ecology of methanotrophs as well as the biogeochemical cycles of H 2 and CH 4 .
An international research team co-led by a Monash biologist has shown that methane-oxidising bacteria – key organisms responsible for greenhouse gas mitigation – are more flexible and resilient than previously thought. Soil bacteria that oxidise methane (methanotrophs) are globally important in capturing methane before it enters the atmosphere, and we now know that they can consume hydrogen gas to enhance their growth and survival. This new research, published in the prestigious International Society for Microbial Ecology Journal, has major implications for greenhouse gas mitigation. Industrial companies are using methanotrophs to convert methane gas emissions into useful products, for example liquid fuels and protein feeds. "The findings of this research explain why methanotrophs are abundant in soil ecosystems," said Dr Chris Greening from the Centre for Geometric Biology at Monash University. "Methane is a challenging energy source to assimilate. "By being able to use hydrogen as well, methanotrophs can grow better in a range of conditions." Methanotrophs can survive in environments when methane or oxygen are no longer available. "It was their very existence in such environments that led us to investigate the possibilities that these organisms might also use other energy-yielding strategies," Dr Greening said. Dr Greening's lab focuses on the metabolic strategies that microorganisms use to persist in unfavourable environments and he studies this in relation to the core areas of global change, disease and biodiversity. In this latest study, Dr Greening and collaborators isolated and characterised a methanotroph from a New Zealand volcanic field. The strain could grow on methane or hydrogen separately, but performed best when both gases were available. "This study is significant because it shows that key consumers of methane emissions are also able to grow on inorganic compounds such as hydrogen," Dr Greening said. 
"This new knowledge helps us to reduce emissions of greenhouse gases." Industrial processes such as petroleum production and waste treatment release large amounts of methane, carbon dioxide and hydrogen into the atmosphere. "By using these gas-guzzling bacteria, it's possible to convert these gases into useful liquid fuels and feeds instead," Dr Greening said.
10.1038/ismej.2017.112
Nano
Development of a novel carbon nanomaterial 'pot'
Hiroyuki Yokoi et al, Novel pot-shaped carbon nanomaterial synthesized in a submarine-style substrate heating CVD method, Journal of Materials Research (2016). DOI: 10.1557/jmr.2015.389
http://dx.doi.org/10.1557/jmr.2015.389
https://phys.org/news/2016-08-carbon-nanomaterial-pot.html
Abstract We have developed a new synthesis method that includes a chemical vapor deposition process in a chamber settled in organic liquid, and applied its nonequilibrium reaction field to the development of novel carbon nanomaterials. In the synthesis at 1110–1120 K, using graphene oxide as a catalyst support, iron acetate and cobalt acetate as catalyst precursors, and 2-propanol as a carbon source as well as the organic liquid, we succeeded in creating carbon nanofibers composed of novel pot-shaped units, named carbon nanopots. A carbon nanopot has a complex and regular nanostructure consisting of several parts made of different layer numbers of graphene and a deep hollow space. Dense graphene edges, presumably hydroxylated, are localized around its closed end. The typical size of a carbon nanopot was 20–40 nm in outer diameter, 5–30 nm in inner diameter, and 100–200 nm in length. A growth model of carbon nanopot and its applications are proposed. I. INTRODUCTION Carbon is capable of forming various hexagonal networks incorporating partially pentagonal or heptangular structures with sp 2 hybrids. 1 A variety of carbon nanomaterials with unique forms, including fullerenes, 2 carbon nanotubes (CNTs), 3 graphene 4 and so on, have been generated owing to this versatility. Even as for CNTs or nanofibers, bamboo-type, 5 cup-stacked-type, 6 beaded 7 and necklace-type 8 tubes, nanobells, 9 and nanocoils 10 as well as cylindrical ones have been reported. Applications of these materials have been intensively examined so as to take advantage of their structural features or physical properties. As the properties of carbon nanomaterials could be superior or peculiar depending on their structures, 1 creating novel carbon nanomaterials has been one of the significant subjects in materials research. 
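The depth-to-inner-diameter aspect ratio, which the discussion later contrasts with the nanobell's value of 1–3, can be bounded from the typical dimensions quoted in the abstract (treating the 100–200 nm unit length as an upper bound on pot depth, an assumption for this sketch):

```python
inner_diameter_nm = (5, 30)   # typical inner diameter range from the abstract
unit_length_nm = (100, 200)   # typical unit length range (upper bound on pot depth)

# Bound the aspect ratio with the extreme combinations:
aspect_min = unit_length_nm[0] / inner_diameter_nm[1]  # shallowest, widest pot
aspect_max = unit_length_nm[1] / inner_diameter_nm[0]  # deepest, narrowest pot
print(f"nanopot aspect ratio ~{aspect_min:.1f} to {aspect_max:.0f}, vs 1-3 for a nanobell")
```

Even the lower bound (~3.3) sits at the top of the nanobell range, and the upper bound (40) is an order of magnitude larger, consistent with the paper's claim that the nanopot's hollow space is much deeper relative to its opening.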
One of the effective approaches to produce novel structures is the application of nonequilibrium conditions such as high temperature gradients or rapid cooling in the synthesis. It is known that inhibition of a reverse reaction or control of precipitation through rapid cooling is a key point in the arc discharge synthesis of fullerene 11 or chemical vapor deposition (CVD) of graphene, 12 respectively. The high growth rate of CNTs is achieved in the liquid phase deposition, and is attributed to the high gradient of temperature around catalysts immersed in an organic liquid. 13 A flaw of this technique is the difficulty in applying catalysts that are soluble in organic liquids or that peel from substrates due to violent bubbling during the deposition. To resolve the flaw, we have been developing a synthesis technique, named submarine-style substrate heating CVD, in which catalysts do not contact organic liquid despite the fact that the synthesis chamber is settled in the liquid. In the present study, we have adopted graphene oxide 14 (GO) as a catalyst support in applying this CVD method to the synthesis of carbon nanomaterials, and succeeded in synthesizing nanofibers composed of novel pot-shaped carbon nanomaterials, named carbon nanopot in this study. GO is well-known as an excellent support of metal nanoparticles, and its potential applications in catalysis, light energy conversion, fuel cells, and sensors have been examined. 15 On the other hand, CNTs were produced in a conventional CVD with GO-supported Ni nanoparticles. 16 No further investigation, however, into the synthesis of carbon nanomaterials with GO-supported catalysts has been reported to the best of our knowledge. Catalyst supporting materials made of sp 2 carbon could not only disperse metal nanoparticles as well as zeolite 17 does but also affect the structure of produced materials made of the same sp 2 carbon. 
The nanobell is already known as a container-shaped carbon nanomaterial that is synthesized in fibrous form. This material was separated into pieces through intense ultrasonication for the application as drug-delivery vehicle in a recent study. 18 The properties of carbon nanopot, such as the capacity and the controlled release of medicine, are expected to be clearly superior to those of the nanobell, as the aspect ratio of a nanopot is much larger than that of a nanobell. In addition, a nanopot has a complex structure in which graphene edges are distributed unevenly at its outer surface and are exposed densely around its closed end, which would enable one to develop new nanomaterial processing unavailable with conventional nanomaterials. The growth mechanism of this uniquely structured nanomaterial should also be discussed. Several formation mechanisms have been proposed for CNT. 19 Two general modes, tip-growth mode 20 and base-growth mode, 21 are accepted widely in the catalyzed CVD of CNTs on substrates. Depending on the strength of the catalyst-substrate (or -support) interaction, catalyst particles separate from the substrate during the precipitation of CNT across the particle bottom in the former mode, and are anchored to the substrate surface while CNT precipitates out around the particle’s apex in the latter mode. The state of the catalyst particles during CNT growth, and the diffusion of carbon after the carbon source, such as a hydrocarbon, is decomposed on the surface of the catalyst particles, have also been debated: whether the catalyst is in the liquid 22 , 23 or solid state, 24 and whether the carbon diffusion is a volume one 24 , 25 or a surface one, 26 , 27 respectively. Several in situ atomic-scale electron microscopy studies were conducted to approach the heart of these issues. 
Those studies have revealed that catalyst particles exhibit successive elongation and contraction during the formation of the bamboo-type CNTs in the tip-growth mode while they remain crystalline. 28 , 29 This dynamic deformation of the catalyst particle could also take place in the formation of carbon nanopots. However, one should note that these two nanomaterials are distinct from each other in their nanostructures as described above, which suggests that some aspects of the growth mechanism of carbon nanopot could be unique. As for the carbon diffusion, both a surface one 28 and a volume one 29 , 30 have been confirmed or supported. In this paper, we report the development of the submarine-style substrate heating CVD method, and synthesis, structure, and discussion on the growth mechanism of carbon nanopots. II. EXPERIMENTAL A. Development of the submarine-style substrate-heating CVD method A schematic of an apparatus of the submarine-style substrate-heating CVD method is shown in Fig. 1 . A 1-L vessel with a water jacket was half-filled with organic liquid and covered with a five-necked lid. A Dimroth condenser (VIDREX, Fukuoka, Japan) was attached to one of the lid necks, and air remaining inside the vessel was replaced with nitrogen gas at a flow rate of 5 L/min. We assembled a synthesis chamber by covering the top and the sides of the space between two electrodes with 1 mm thick borosilicate glass plates as shown in Fig. 2 and settled the chamber in the organic liquid. The inside of the chamber was kept free of liquid by maintaining the internal pressure against the hydraulic pressure, and its uncovered bottom served as a gas port. Thus, we have fulfilled the following requirements simultaneously and simply: the catalyst does not contact organic liquid directly, and it is possible to control the temperature gradient around the catalyst to be almost comparable to that in the liquid phase deposition. FIG. 
1 Schematic of the submarine-style substrate heating CVD apparatus. Two of five necks are not shown. Full size image FIG. 2 Schematic of the synthesis chamber part of the submarine-style substrate heating CVD apparatus. The front glass plate of the chamber is detached for ease of observing the inside. (a) Electrode, (b) borosilicate glass plates, (c) catalyst-loaded silicon substrate, (d) substrate support, and (e) carbon plate heater. Full size image In this study, the catalyst was mounted on thermally-oxidized silicon substrates of 14 mm in length, 9 mm in width, and 0.5 mm in thickness. The thickness of the silicon oxide layer was 300 nm. The substrate was settled 10 mm above the bottom of the inner space of the synthesis chamber, and its catalyst-mounted side was turned upward. The size of the synthesis chamber was 20 mm in length, 25 mm in width, and 20 mm in height. One can also mount the substrate much closer to the surface of organic liquid at the bottom of the chamber by turning the catalyst-mounted side downward, which would realize almost the same condition of the temperature gradient around the catalyst as that in the liquid phase deposition. The substrate was heated with a carbon plate of 5 mm in width and 0.5 mm in thickness attached to the bottom side. The carbon plate was charged with a DC power supply [PU12.5-60, 750 W (12.5 V, 60 A), KENWOOD TMI, Kanagawa, Japan] to heat the silicon substrate to 1273 K. The temperature of the substrate was measured with a calibrated radiation thermometer (THI-900DX16, Sensor: Si, TASCO, Osaka, Japan). Argon gas was blown into the chamber to remove air from the chamber and also to prevent the product from being immersed with organic liquid while cooling the substrate down after the deposition. Carbon source was supplied into the chamber through the vaporization of the organic liquid at the bottom of the chamber due to the radiant heat from the carbon plate heater. 
Using the thus designed apparatus, we could heat the catalyst to 1273 K and supply it with the carbon source throughout the synthesis process, without immersing it in the organic liquid even though the chamber sat below the liquid level. In addition, we could quench the product from 1273 K to a temperature below 773 K in 2 s by turning off the power supply. B. Synthesis and analysis of carbon nanopot GO used as the catalyst support was prepared with a modified Hummers method. 31 The concentration of GO in the aqueous dispersion was diluted to 0.092 g/L approximately. Iron acetate (99.995%, Aldrich, St. Louis, Missouri) and cobalt acetate tetrahydrate (99.998%, Aldrich) were dissolved in the dispersion with a concentration of 0.01 mol/kg for both, in which metal acetates are decomposed and reduced to form catalyst nanoparticles at synthesis, following the procedure developed for catalyzed CVD of CNT. 17 , 32 After the solution was ultrasonicated for 10 min and centrifuged at 15,000 g for 30 min, supernatant liquid was replaced with the same amount of deionized water and settled GO was dispersed again. A 1.5 µL drop of the suspension was applied to each of the silicon substrates for synthesis. After the drops dried, the catalyst-supporting GO coats were illuminated by an ultra-high pressure mercury lamp (UI-501C, 500 W, Ushio, Tokyo, Japan). X-ray photoelectron spectroscopy (XPS) was performed for the GO coats mounted on naturally-oxidized silicon substrates in a vacuum better than 10 −7 Pa to analyze the amount of iron and cobalt loaded on GO. The XPS system (Sigma Probe, Thermo Scientific, Waltham, Massachusetts) was equipped with a monochromatized x-ray source (Al K α , h ν = 1486.6 eV). Electrons emitted from the samples were detected by a hemispherical energy analyzer equipped with six channeltrons. The overall energy resolution for XPS was below 0.55 eV (on Ag 3 d 3/2 with a pass energy of 15 eV). 
XPS peaks were deconvoluted using Gaussian components after Shirley background subtraction. XPS was also used to analyze the termination states of graphene edges in the outer side of carbon nanopot. In the synthesis with the submarine-style substrate-heating CVD apparatus, we used 2-propanol as the organic liquid, expecting to keep the catalyst active longer. 33 The synthesis time and temperature were 10 min and 1100–1130 K, respectively. The morphological and structural features were investigated with a field-emission scanning electron microscope (FE-SEM, JSM-6320F, JEOL Ltd., Tokyo, Japan) and a transmission electron microscope (TEM, JEM-2000FX, JEOL Ltd.) at acceleration voltages of 5 and 200 kV, respectively. Raman spectroscopy was conducted using a micro Raman spectrometer (RS-RIP-2000, Nippon Roper, Tokyo, Japan) with a 532 nm excitation source for the analysis of the quality of the carbon sp 2 network. XPS analysis was performed for aggregates of carbon nanopot fibers transferred to a cleaned silicon substrate from a deposition substrate to avoid detecting photoelectrons from the GO sheets on the base. After the transfer of carbon nanopots, the thus exposed surface of the deposition substrate was analyzed using an electron probe microanalyzer (EPMA, EPMA-1720H, Shimadzu Corp., Kyoto, Japan) at an acceleration voltage of 15 kV and a sample current of 20 nA to investigate catalyst particles supported on GO. III. RESULTS In the XPS measurements of the catalyst-supporting GO coats, prominent peaks assigned to C1 s , O1 s , and Fe2 p , respectively, were observed in the energy range between 160 and 820 eV as shown in Fig. 3(a) . Though signals related to Co were detected and the Co2 p 3/2 peak was also expected to appear around 780 eV, it was too small to distinguish from the Fe Auger peaks [Fig. 3(b) ], which prevented us from determining the Co content accurately. Weak emission peaks assigned to N1 s and S2 p were also recorded as shown in Figs. 
3(c) and 3(d) , respectively. The sources of these elements are thought to be remnants of NaNO 3 and H 2 SO 4 used in the preparation of GO from graphite powder. Another emission peak was observed at 103.5 eV, and assigned to Si2 p . This emission originates from the silicon substrate. The content of the catalyst-supporting GO coats was estimated at 48.7, 38.5, 0.9, 0.3, 10.7, and 0.8 at.% for C, O, N, S, Fe, and Co, respectively, through semiquantitative analysis of these emission peaks. Though the accuracy of the Co content is limited, it was obvious that the content of Co is an order of magnitude smaller than that of Fe despite the fact that the Co/Fe molar ratio was 1:1 in the mixture of metal acetates added to GO dispersions. FIG. 3 XPS spectra of catalyst-supporting GO coats on a silicon substrate. (a) Widely scanned, (b) Co2 p 1/2 , (c) N1 s , and (d) S2 p XPS spectra are displayed. Full size image In the SEM observations of products synthesized at 1110 K, we noticed not only that a large number of winding nanofibers had formed but also that a repeated cyclic bright-and-dark pattern appeared on every nanofiber as shown in Fig. 4 . In the observations of the products under high magnification, we found that these nanofibers consisted of a cyclic combination of a rounded section and a linear section (Fig. 5 ). Investigations using TEM revealed that pot-shaped units, in which the end of the hollow straight section was open and the end of the rounded section was closed, were connected in a fibrous form (Fig. 6 ). The pot-shaped units had a complex nanostructure composed of a bottom part and tapering tube section formed of multi-layer graphene, a multi-walled-tube section with a fairly constant outer diameter, an expanding hollow neck, and a connecting section (Fig. 7 ). The distance between layers was estimated at 0.34 nm (Fig. 7 , inset), which is in line with that in multi-walled CNTs. 
3 The following features were also found: the innermost graphene layer was connected in the region from the bottom part to the multi-walled-tube section while the graphene layers at the outer surface of the tapering tube section were terminated, where graphene edges were distributed densely (Fig. 7 , inset); the graphene layers were disconnected between the pot-shaped units (Fig. 7 ); separated single pot-shaped units were also observed (not shown). The length of some fibers exceeded 100 µm (not shown). Typical geometry parameters of the products were 20–40 nm in outer diameter, 5–30 nm in inner diameter, 100–200 nm in unit length, and 20–100 µm in fiber length. Nanofibers synthesized at 1120 K had the same structure as mentioned above. Hereafter, we refer to the pot-shaped unit and the nanofiber composed of the pot-shaped units as “carbon nanopot” and “carbon nanopot fiber,” respectively, as named in the following section. FIG. 4 FE-SEM image of products synthesized at 1110 K. A repeated cyclical bright and dark pattern is recognized on every nanofiber. Full size image FIG. 5 Magnified FE-SEM image of a product synthesized at 1110 K. Full size image FIG. 6 TEM image of products synthesized at 1110 K. The arrow points to an open end of a pot-shaped unit. Full size image FIG. 7 TEM image of pot-shaped units synthesized at 1110 K. Different parts of the unit are labeled. The inset is a magnified view of the section bordered by the dashed box in the main image. The value indicates the average distance between layers. The arrows point to the graphene edges along the outside of the tapering tube part. Full size image A typical Raman spectrum of carbon nanopot fibers is shown in Fig. 8 . Two distinct peaks were observed at 1345 and 1586 cm −1 , which are assigned to D and G bands, respectively. A small D′ band at 1620 cm −1 was also recognized according to peak deconvolution with Lorentz functions. The G band corresponds to the in-plane stretching vibration in graphite. 
The D and D′ bands are associated with defects in the two-dimensional hexagonal graphitic network. The intensity ratio of the D band to the G band at their peaks ( I D / I G ) is known to be inversely proportional to the in-plane crystallite size. 34 , 35 For the carbon nanopot fibers, the ratio was 0.954, corresponding to a crystallite size of 20.2 nm. 35 FIG. 8 Typical Raman spectrum of carbon nanopot fibers. Fitted Lorentzian curves are shown by thin lines. The D, G, and D′ bands are labeled. In the XPS measurements of aggregates of carbon nanopot fibers, a tail was observed on the higher-energy side of the C 1s peak for sp2 C=C bonds at 284.6 eV (Fig. 9). This spectral structure is attributed to chemical shifts corresponding to C–H bonds, defects, sp3 C–C, and oxygenated functional groups modifying carbon atoms in the outer layers of the carbon nanopot. 36 Detailed peak analysis showed that 9.5 at.% of the carbon atoms are modified with hydroxyl groups. We note that silicon in an amount comparable to that of carbon was detected in the measurements, which means that the fibrous material did not cover the silicon substrate completely. Although the XPS analysis showed that the substrate surface was contaminated with carbon materials, the amount of carbon there was less than 10 at.%, the rest being the silicon and oxygen of the natively oxidized silicon surface. Consequently, most of the detected hydroxyl groups can be attributed to the carbon nanopot fibers. FIG. 9 A typical C 1s XPS spectrum of aggregates of carbon nanopot fibers transferred from a deposition substrate to a silicon substrate. Deconvolution of the spectrum into Gaussian curves corresponding to sp2 C=C, CH (defect), sp3 C–C, C–OH, C–O–C, C=O, and O=C–O is shown.
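The quoted crystallite size can be reproduced from the I D / I G ratio with the general relation L_a (nm) = (2.4 × 10⁻¹⁰) · λ⁴ · (I_D/I_G)⁻¹, where λ is the Raman excitation wavelength in nm. A caveat: the excitation wavelength is not stated in this excerpt; a 532 nm laser is assumed below, an assumption that does reproduce the reported 20.2 nm value.

```python
def crystallite_size_nm(id_ig_ratio, wavelength_nm=532.0):
    """In-plane crystallite size L_a estimated from the Raman D/G
    intensity ratio; L_a is inversely proportional to I_D / I_G.
    The 532 nm excitation wavelength is an assumption, not from the text."""
    return 2.4e-10 * wavelength_nm**4 / id_ig_ratio

# Ratio reported for the carbon nanopot fibers:
la = crystallite_size_nm(0.954)   # ~20.2 nm, matching the value in the text
```

Note that with this relation, halving the I D / I G ratio doubles the estimated crystallite size, consistent with the inverse proportionality cited in the text.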
We investigated the GO surface exposed after longer carbon nanopot fibers were harvested for TEM observations, and found that the round and linear sections of each carbon nanopot form on the tip and base sides, respectively, and that nanoparticles, presumed to be catalysts, remain on the base in some leftovers of the nanofibers, as shown in Fig. 10. The size of the nanoparticles was widely distributed, between 15 and 120 nm. Carbon nanopot fibers were observed to grow from catalyst nanoparticles smaller than approximately 40 nm. The EPMA study of the exposed surface showed that the distributions of iron and cobalt coincided with the positions of the nanoparticles [Figs. 11(a)–11(c)]. It was suggested qualitatively that the content of cobalt was much less than that of iron, in line with the XPS result mentioned above. FIG. 10 FE-SEM image of a GO surface exposed after longer carbon nanopot fibers were harvested. A catalyst particle remaining on the base part of a carbon nanopot fiber is indicated with an arrow. FIG. 11 EPMA images of (a) secondary electrons, (b) Fe Kα x-rays, and (c) Co Kα x-rays of a GO surface after longer carbon nanopot fibers were harvested. Some of the nanoparticles are marked with circles for ease of correspondence. We also obtained fiber-shaped products from syntheses at 1100 and 1130 K; however, no clear cyclic variation in the diameter of these fibers was observed. IV. DISCUSSION The features of the pot-shaped material synthesized in this study are quite different from those of the nanobell, which consists of a bottom part and an expanding sidewall part formed of graphene layers with a fairly constant thickness. The typical aspect ratio of depth to inner diameter in the hollow space of a nanobell is 1–3.
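As a rough plausibility check (an illustration, not a calculation from the paper), the geometry parameters reported earlier — inner diameter 5–30 nm, unit length 100–200 nm — are consistent with a hollow-space aspect ratio of order 10 if the hollow depth is assumed comparable to the unit length:

```python
# Reported geometry ranges in nm; hollow depth assumed ~ unit length
# (an assumption for illustration, not stated explicitly in the text).
inner_diameter_nm = (5.0, 30.0)
unit_length_nm = (100.0, 200.0)

def midpoint(rng):
    lo, hi = rng
    return 0.5 * (lo + hi)

# Depth-to-inner-diameter aspect ratio evaluated at the range midpoints:
aspect_ratio = midpoint(unit_length_nm) / midpoint(inner_diameter_nm)  # 150/17.5 ~ 8.6
```

The resulting value (~8.6 at the midpoints) sits near the "about 10" quoted for the nanopot and well above the 1–3 typical of nanobells.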
On the other hand, the pot-shaped material has an aspect ratio as high as about 10, as well as a more complex nanostructure consisting of several sections formed of different numbers of graphene layers. The latter feature is accompanied by another distinct one: dense graphene edges are localized on the outer side near the closed end. We have therefore judged the pot-shaped material to be a new material and named it carbon nanopot. The I D / I G ratio of the carbon nanopot in Raman spectroscopy was 0.954, slightly higher than that reported for bamboo-type CNTs (0.82 at a synthesis temperature of 850 °C). 37 We interpret this to imply that defective structures are frozen in before relaxing to the most stable structure during growth under the high temperature gradient, an extreme condition that would favor the formation of metastable phases. The XPS analysis suggested that approximately 10% of the carbon atoms in the outer layers of the carbon nanopot are hydroxylated. The most probable hydroxylation sites are the graphene edges distributed densely at the outer surface of the tapering tube section. Another merit of using an alcoholic carbon source could be that the graphene edges are terminated by hydroxyl groups rather than by hydrogen. We expect that carbon nanopots might behave like nano-surfactants: since the densely hydroxylated region is localized near the closed end, the closed-end part would be hydrophilic while the open-end part remains hydrophobic. We assume that the growth behavior of a carbon nanopot differs to some extent from those of existing carbon nanomaterials, as the carbon nanopot has a far more complex structure. The following viewpoints are taken into consideration in examining the growth mechanism of carbon nanopots: (i) GO was used as a catalyst support.
The metal catalyst with high carbon content is expected to have a similar affinity for both the surface of GO and the inner surface of the carbon nanopot. (ii) Carbon nanopots could form in the base-growth mode, as suggested by the SEM observation of the exposed GO surface (Fig. 10). (iii) Catalyst particles could undergo successive elongation and contraction during the formation of a carbon nanopot, even in the solid phase, as observed in in situ TEM studies. 28 , 29 (iv) Carbon diffusion in the catalyst particles could proceed both through the bulk and along the surface. (v) The sharp temperature gradient, a feature of the submarine-style substrate-heating CVD method, would promote the precipitation of carbon in the colder surface areas of catalyst particles. We propose the following growth model, integrating the above viewpoints (Fig. 12). At the first stage, the iron acetate and cobalt acetate supported on GO decompose at high temperatures and would be reduced to an iron–cobalt alloy by hydrogen generated through the decomposition of the alcoholic carbon source on the particle surface. These processes should be similar to those in conventional alcohol-catalytic CVD of CNTs. 33 Carbon atoms generated through the decomposition of the organic gas dissolve in the catalyst particle and diffuse [Fig. 12(a)], and are precipitated as a cap-shaped graphene sheet (graphene cap) at the apex, where the temperature is lower owing to the sharp temperature gradient [Fig. 12(b)]. The catalyst particle, with its high carbon content, protrudes while remaining in contact with the inner surface of the graphene cap as a new cap is precipitated and the cap edge is extended through surface diffusion of carbon. This behavior of catalyst particles was confirmed in the in situ TEM observations.
29 , 30 The sharp temperature gradient is expected to favor the precipitation of a new graphene cap over the extension of older caps, so that extension of the older caps terminates when a new cap is precipitated [Fig. 12(c)]. As the catalyst particle elongates, the supply of carbon to its tip decreases and growth at the cap edge becomes dominant over the precipitation of new graphene layers [Fig. 12(d)]. The multi-walled-tube section, with less disconnection of the outer graphene layers, is formed in this way [Fig. 12(e)]. As the multi-walled-tube section extends, the carbon content of the catalyst particle decreases owing to the increased diffusion length and the lower temperature around the catalyst tip. This triggers the contraction and evacuation of the catalyst particle from the hollow space [Fig. 12(f)]. On the way back, the catalyst particle forms the expanding hollow neck through growth at the graphene edge, completing one unit of a carbon nanopot [Fig. 12(g)]. Another cap-shaped graphene layer, corresponding to the bottom part of the next carbon nanopot, is precipitated after the carbon content at the apex of the catalyst particle recovers [Fig. 12(h)]. In summary, we assume that one part of the catalyst particle intrudes into and evacuates from carbon nanopots in turn while the other part sticks firmly to the GO surface, forming connected carbon nanopots. FIG. 12 Proposed growth model of the carbon nanopot. (A) Catalyst particle, (B) GO sheets, (C) carbon atom, (D) silicon substrate, (E) carbon plate heater. The arrows represent the intake of carbon atoms by the catalyst particle or the diffusion of carbon atoms. The panel labels (a)–(h) indicate the sequence of growth steps. See text for details. The XPS analysis revealed that the amount of cobalt loaded on GO was an order of magnitude smaller than that prepared.
This could be attributed to the low pH value (3.6) of the GO suspension used in this study; it was reported 38 that cobalt was successfully loaded on reduced GO after adjusting the pH of a GO suspension to 9.5. In addition, the size of the catalyst particles formed on the GO sheets was not uniform. It should also be noted that a small amount (0.3 at.%) of sulfur was detected in the XPS analysis of the catalyst-supporting GO sheets. The effect of sulfur on the formation of filamentous carbon was studied intensively in the 1990s. Kim et al. explained that sulfur enhances the filamentous growth of carbon by selectively poisoning the surface of the catalysts. 39 Tibbetts et al. explained that the role of sulfur in enhancing carbon fiber growth is to melt the iron-based catalyst particle, enabling vapor–liquid–solid growth, because the Fe–S system is eutectic at 988 °C. 40 In the present study, the ratio of the atomic percentages of sulfur and iron is 0.03, which is too small to reduce the melting point of the catalyst. Through selective poisoning of the catalyst surface, however, even a quite small amount of sulfur could affect the filamentous growth of carbon. The cobalt content of the catalyst particles, the particle size, and remnants of sulfur could all affect the morphology and growth rate of carbon nanopots; the effects and control of these conditions will be investigated further. The carbon nanopot could be a highly functional material. Structural features such as the dense exposure of presumably hydroxylated graphene edges at the outer surface of the tapering tube part, and the hollow-space aspect ratio as high as about 10, would be favorable for applications in functional composite materials and drug delivery, respectively. V.
CONCLUSION It has been confirmed through FE-SEM and TEM observations, XPS, EPMA, and micro-Raman spectroscopy that the pot-shaped carbon nanomaterial synthesized using the submarine-style substrate-heating CVD method is a novel nanomaterial with a complex yet regular nanostructure consisting of several parts made of different numbers of graphene layers and a deep hollow space (aspect ratio of ~10). This new material has been named carbon nanopot. It is suggested that the use of GO as a catalyst support and the nonequilibrium reaction field generated in the submarine-style substrate-heating CVD method could be essential to the formation of carbon nanopots. The following growth model has been proposed: one part of the catalyst particle intrudes into and evacuates from carbon nanopots in turn while the other part sticks firmly to the GO surface, forming carbon nanopots in series. The dense exposure of presumably hydroxylated graphene edges at the outer surface of the tapering tube part, as well as the deep hollow space, gives ample reason to examine its utility, for example, in the development of composite materials or drug delivery systems.
A novel pot-shaped carbon nanomaterial developed by researchers from Kumamoto University, Japan is several times deeper than any hollow carbon nanostructure previously produced. This unique characteristic enables the material to gradually release substances contained within and is expected to be beneficial in applications such as drug delivery systems. Carbon is light and abundant, forms strong bonds, and is eco-friendly. The range of carbon-based materials is expected to become more widespread in the eco-friendly society of the future. Recently, nanosized (one nanometer is one-billionth of a meter) carbon materials have been developed with lengths, widths, or heights below 100 nm. These materials take extreme forms such as tiny grains, thin sheets, and slim fibers. Examples of these new materials are fullerenes, which are hollow cage-like carbon molecules; carbon nanotubes, cylindrical nanostructures of carbon molecules; and graphene, one-atom-thick sheets of carbon. Why are these tiny substances needed? One reason is that reactions with other materials can be much stronger if a substance has an increased surface area. By using nanomaterials in place of existing materials, it is possible to significantly increase surface area without changing weight or volume, thereby improving both size and performance. The development of carbon nanomaterials has provided novel nanostructured materials with shapes and characteristics that surpass existing materials. Now, research from the laboratory of Kumamoto University's Associate Prof. Yokoi has resulted in the successful development of a container-type carbon nanomaterial with a much deeper orifice than that found in similar materials. To create the new material, the researchers used their own newly developed method of material synthesis.
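The surface-area argument above can be made concrete with a quick calculation (the numbers are illustrative, not from the article): dividing a fixed volume of material into equal spheres of diameter d gives a total surface area of 6V/d, so shrinking the particle size at constant volume multiplies the available surface.

```python
import math

def total_surface_area(volume_m3, particle_diameter_m):
    """Total surface area when a fixed volume is split into equal spheres.
    Equivalent to the closed form 6 * V / d."""
    r = particle_diameter_m / 2.0
    n = volume_m3 / ((4.0 / 3.0) * math.pi * r**3)   # number of particles
    return n * 4.0 * math.pi * r**2

# Illustrative comparison: 1 cm^3 as one ~1.24 cm sphere vs 100 nm particles.
bulk = total_surface_area(1e-6, 0.0124)   # ~5e-4 m^2
nano = total_surface_area(1e-6, 1e-7)     # 60 m^2, a ~10^5-fold increase
```

The same gram of material thus exposes roughly five orders of magnitude more surface when structured at the 100 nm scale, which is the point the article makes about nanomaterials.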
The container-shaped nanomaterial has a complex form consisting of varied numbers of stacked graphene layers at the bottom, body, and neck areas of the container, and the graphene edges along the outer surface of the body were found to be very dense. Owing to these innovative features, Associate Prof. Yokoi and colleagues named the material the "carbon nanopot." The black arrow indicates the end of the opening of the carbon nanopot. A structural schematic of the carbon nanopot showing hydroxyl groups bonded to the edges of the graphene layers near its closed end is also indicated (not to scale). Credit: Journal of Materials Research, 31(1): 117-126, 14-Jan-2016, doi:10.1557/jmr.2015.389. Copyright: Materials Research Society. The carbon nanopot has an outer diameter of 20–40 nm, an inner diameter of 5–30 nm, and a length of 100–200 nm. During growth, carbon nanopots are linked into a carbon nanofiber with a length of 20–100 μm, meaning that the material is also available in carbon-nanofiber form. At the junction between nanopots, the bottom of one pot simply sits on the opening of the next without a shared graphene sheet, so separating the nanopots is very easy. "From a detailed surface analysis, hydrophilic hydroxyl groups were found clustered along the outer surface of the carbon nanopot body," said Associate Prof. Yokoi. "Graphene is usually hydrophobic; however, if hydroxyl groups are densely packed on the outer surface of the body, that area will be hydrophilic. In other words, carbon nanopots could be a unique nanomaterial with both hydrophobic and hydrophilic characteristics. We are currently performing a more sophisticated surface analysis to confirm this." Since the new carbon nanopot has a relatively deep orifice, one of its expected uses is to improve drug delivery systems by acting as a new carrier for medicine to be transported into and absorbed by the body.
This finding was published as an Invited Feature Paper in the Journal of Materials Research on January 13th, 2016. Additionally, the paper was selected as a Key Scientific Article in Advances in Engineering (AIE) on July 9th, 2016.
10.1557/jmr.2015.389
Chemistry
Mechanical engineers develop process to 3-D print piezoelectric materials
Three-dimensional printing of piezoelectric materials with designed anisotropy and directional response, Nature Materials (2019). DOI: 10.1038/s41563-018-0268-1 , www.nature.com/articles/s41563-018-0268-1 Journal information: Nature Materials
http://dx.doi.org/10.1038/s41563-018-0268-1
https://phys.org/news/2019-01-mechanical-d-piezoelectric-materials.html
Abstract Piezoelectric coefficients are constrained by the intrinsic crystal structure of the constituent material. Here we describe design and manufacturing routes to previously inaccessible classes of piezoelectric materials that have arbitrary piezoelectric coefficient tensors. Our scheme is based on the manipulation of electric displacement maps from families of structural cell patterns. We implement our designs by additively manufacturing free-form, perovskite-based piezoelectric nanocomposites with complex three-dimensional architectures. The resulting voltage response of the activated piezoelectric metamaterials at a given mode can be selectively suppressed, reversed or enhanced with applied stress. Additionally, these electromechanical metamaterials achieve high specific piezoelectric constants and tailorable flexibility using only a fraction of their parent materials. This strategy may be applied to create the next generation of intelligent infrastructure, able to perform a variety of structural and functional tasks, including simultaneous impact absorption and monitoring, three-dimensional pressure mapping and directionality detection. Main The direct piezoelectric constant correlates the electric displacement of a material with an applied stress 1 , 2 , 3 . Owing to their ability to convert mechanical to electrical energy and vice versa, piezoelectric materials have widespread applications in pressure sensing 4 , 5 , ultrasonic sensing 6 , 7 , actuation 8 , 9 and energy harvesting 10 , 11 . The piezoelectric charge constants of bulk piezoelectric ceramics, polymer-piezoelectric composites and their respective foams are dictated by their intrinsic crystallographic structures and compositions 12 , resulting in common coupling modes of operation 13 . Additionally, their intrinsic microstructures are strongly coupled with other physical properties, including mass densities and mechanical properties 14 . 
Chemical modifications such as doping 15 , 16 have been introduced to change the piezoelectric constants in certain directions by altering the crystallographic structures, but their design space is restricted by the limited set of doping agents 17 . It also comes at the cost of other coupled physical properties such as mechanical flexibility and sensitivity 18 , 19 . Casting and templating techniques have been used to produce piezoelectric foams 20 , 21 that showcase the potential for reduced mass densities and improved hydrostatic figures of merit, but their piezoelectric coefficients, described by a square foam model 22 , are largely limited by the intrinsic crystalline orientation and occupy only a narrow area within piezoelectric anisotropy space. Here we report a set of concepts in which a wealth of direct piezoelectric responses can be generated through rationally designed piezoelectric architectural units and are realized via additive manufacturing of highly sensitive piezo-active lattice materials. Our strategy begins by designing families of three-dimensional (3D) structural node units assembled from parameterized projection patterns, which allows us to generate and manipulate a set of electric displacement maps with a given pressure, thereby achieving full control of piezoelectric constant tensor signatures. These unit cells are then tessellated in three dimensions, forming metamaterial blocks that occupy a vast piezoelectric anisotropy design space, enabling arbitrary selection of the coupled operational mode. Upon polarizing the as-fabricated piezoelectric material, we have demonstrated that piezoelectric behaviour in any direction can be selectively reversed, suppressed or enhanced, achieving distinct voltage response signatures with applied stress. To implement this concept, we prepared functionalized lead zirconate titanate (PZT) nanoparticle colloids. These nanoparticles are then covalently bonded with entrapped photo-active monomers. 
These concentrated piezoelectric colloids are subsequently sculpted into arbitrary 3D form factors through high-resolution additive manufacturing. We found that building blocks with designed piezoelectric signatures could be assembled into intelligent infrastructures to achieve a variety of functions, including force magnitude and directionality sensing, impact absorption and self-monitoring, and location mapping, without any additional sensing component. These free-form PZT nanocomposite piezoelectric metamaterials not only achieve a high piezoelectric charge constant and voltage constant at low volume fractions but also simultaneously possess high flexibility, characteristics that have not been attainable in previous piezoelectric foams or polymers. This study paves the way for a class of rationally designed electromechanical coupling materials, thus moving structural metamaterials 23 , 24 towards smart infrastructures. Design of 3D piezoelectric responses We developed a strategy to realize the full design space of piezoelectric coefficients through the spatial arrangement of piezoelectric ligaments. Our scheme involves analysing configurations of projection patterns from a 3D node unit classified by connectivity. The evolutions of projection patterns give rise to diverse electric displacement maps (Fig. 1a–h ), from which the piezoelectric coefficient tensor space d 3 M ( M = 1–3) can be designed, going beyond the limitations of the monolithic piezoelectric ceramics, polymers and their composite feedstock whose piezoelectric coefficients are located in the {−−+} quadrants 25 , 26 , 27 , 28 and {++−} quadrants 29 , 30 . Here the dimensionless piezoelectric tensor space, \({{\bar{\bf d}}}_{3M}\) , is defined by normalizing d 3 M by the length of the vector { d 31 , d 32 , d 33 }. To capture the broadest possible design space, we start with the minimum number of intersecting microstruts at a node that can be tessellated into 3D periodic lattices. 
All intersecting struts are represented as vectors originating from the node, that is, \({\bf{L}}_i\) ( i = 1– N , where N is the node unit connectivity). In building our projection patterns, we define \({\bf{l}}^j_i\) as the 2D projection of \({\bf{L}}_i\) onto three orthogonal planes through the global 1–2–3 coordinate system of the 3D piezoelectric cube (Fig. 1a , \({\bf{L}}_i = \frac{1}{2}\mathop {\sum }\limits_j^3 {\bf{l}}_i^j\) , where j = 1, 2 or 3). As an example, we use piezoelectric ceramic and its composites, which have \({{\bar{\bf d}}}_{3M}\) distributed in the {−−+} quadrants 25 , 26 , 27 , 28 , as the base material with which to construct the electric displacement maps. The white arrows pointing upwards or downwards against the 3-direction indicate the positive or negative electric displacement response of the strut along the 3-direction (that is, the poling direction). Fig. 1: Design of piezoelectric metamaterials for tailorable piezoelectric charge constants. Designing 3D node units by configuring the projection patterns. a – g , Node unit designs from 3-, 4-, 5- and 8-strut identical projection patterns, respectively. A node unit with higher nodal connectivity can be constructed by superposition of projection patterns comprising a smaller number of projected struts. h , Node unit with dissimilar projection patterns showing decoupled \(\bar d_{31}\) , \(\bar d_{32}\) . The white arrows in the projection patterns pointing towards the positive or negative 3-direction indicate the positive or negative electric displacement contribution to poling direction 3. Red arrows in a – h indicate the compression loading along the 1-, 2- or 3-direction. 
i , A dimensionless piezoelectric anisotropy design space accommodating different 3D node unit designs with distinct d 3M distributions; each d 3M is normalized by the length of the vector { d 31 , d 32 , d 33 } and thus \(\bar d_{31}\) , \(\bar d_{32}\) and \(\bar d_{33}\) form a right-handed 3D coordinate system. The dimensionless piezoelectric coefficients of their parent monolithic piezoelectric ceramics and their composites are labelled within the dashed region, {−−+} quadrant 25 – 28 . Configuring the projection patterns in these planes results in diverse electric displacement maps, allowing access to different quadrants of the d 3 M property space (Supplementary sections 1 and 2 ). A basic 3D node unit containing 3, 4 and 5 intersecting struts on the projection patterns is illustrated in Fig. 1a–f . We start with 3D node units with identical 3-strut projection patterns on the 1–3 and 2–3 planes, that is, d 31 = d 32 (Fig. 1a,b ). Configuring the projection pattern by rotating the relative orientations of two of the projected struts ( \(\theta = \angle {\bf{l}}_1^1{\bf{l}}_2^1\) ) redistributes the electric displacement contributions, as indicated by the white arrows reversing direction in the projection pattern (Fig. 1a,b ). Rotating the projection patterns allows us to inversely reorient the intersecting spatial struts with correlations as calculated in Supplementary sections 1 and 2 . This results in the \({{\bar{\bf d}}}_{3M}\) tensor shifting from the {+++} quadrant to highly anisotropic distribution near the positive \(\bar d_{33}\) axis {0 0+} and then to the {−−+} quadrant with negative d 31 and d 32 as well as positive d 33 (Fig. 1i ). Further decrease of the relative orientation reverses all values of the d 3 M to occupy the {−−−} quadrant (Supplementary Table 1 ).
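The normalization that defines the dimensionless tensor — each coefficient divided by the length of the vector { d 31 , d 32 , d 33 } — can be sketched in a few lines. The coefficient values below are generic PZT-like numbers chosen for illustration, not values from the paper:

```python
import numpy as np

def normalized_d3M(d31, d32, d33):
    """Dimensionless piezoelectric tensor: each d_3M divided by the length
    of the vector {d31, d32, d33}, so the result is a unit vector whose
    sign pattern identifies the quadrant of the anisotropy space."""
    v = np.array([d31, d32, d33], dtype=float)
    return v / np.linalg.norm(v)

# Generic PZT-like bulk coefficients (pC/N), which fall in the {- - +} quadrant:
d_bar = normalized_d3M(-274.0, -274.0, 593.0)
quadrant = tuple("+" if x > 0 else "-" for x in d_bar)   # ('-', '-', '+')
```

Because the result is always a unit vector, designs with very different absolute responses can be compared purely by their direction in the anisotropy space, which is how Fig. 1i plots them.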
Similarly, for a 4-strut or 5-strut projection pattern with two-axis symmetry, decreasing the relative orientation ( \(\theta = \angle {\bf{l}}_1^1{\bf{l}}_2^1\) of projected struts) results in a change of d 3 M distribution from the {+++} quadrant to the {−−−} quadrant (Fig. 1i ) or the {−−+} quadrant owing to the competition of the opposite electric displacement contributions within the struts (Fig. 1c–f ). Our designs can be broadened by increasing the 3D node unit connectivity through superposition (Fig. 1g ). Micro-architectures with high nodal connectivity are deformed mainly by compression or tension 31 , 32 . The d 33 increases with additional nodal connectivity as compared to lower-connectivity cases in which strain energy from strut bending does not contribute to the electric displacement in the 3-direction. Moreover, our designs are not restricted to identical projection patterns where d 31 and d 32 are coupled. 3D node unit designs with dissimilar projection patterns allow independent tuning of d 31 and d 32 (‘out of 45° plane’ distribution of d 3 M , Fig. 1i , d 31 ≠ d 32 ). We configure the dissimilar electric displacement maps by independently varying the relative orientations θ 1 and θ 2 on the 1–3 and 2–3 planes (Fig. 1h , Supplementary Table 1 ). The compression along the 1-direction and 2-direction on the 3D node unit therefore generates different electric displacement maps and results in the decoupling of d 31 and d 32 (Fig. 1h ). The d nM of designed units can be computed by collecting the electric displacement from all intersecting strut members L i at equilibrium under applied stress. 
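The per-strut summation just described (collecting the electric displacement from all intersecting strut members at equilibrium) can be sketched numerically under a strong simplification: each strut is assumed to carry only axial stress, so its contribution to the global 3-direction response reduces to a cos²-weighted mix of the base material's d33 and d31. This is an illustrative reading, not the paper's full finite-element treatment, and the base coefficients are generic PZT-like values, not the paper's:

```python
import numpy as np

# Generic PZT-like base-material coefficients (C/N); illustrative only.
D31, D33 = -274e-12, 593e-12

def effective_d33(struts):
    """struts: iterable of (axis_vector, volume, axial_stress).
    Numerator: electric displacement along the global 3-direction summed
    over struts; denominator: global 3-direction stress summed over struts.
    Assumes purely axial strut stress (a simplification of the full model)."""
    num = den = 0.0
    for axis, volume, stress in struts:
        u = np.asarray(axis, dtype=float)
        u /= np.linalg.norm(u)
        c2 = u[2] ** 2                      # cos^2 of the angle to poling axis 3
        num += volume * stress * (D33 * c2 + D31 * (1.0 - c2))
        den += volume * stress * c2
    return num / den

# A strut along the poling axis recovers the base d33; a nearly horizontal
# strut flips the sign of the effective coefficient, mirroring how strut
# orientation moves the design between quadrants.
d_vertical = effective_d33([([0.0, 0.0, 1.0], 1.0, 1.0)])
d_tilted = effective_d33([([1.0, 0.0, 0.1], 1.0, 1.0)])
```

The sign flip for steeply tilted struts is the simplified analogue of the quadrant changes produced by rotating the projection patterns in Fig. 1.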
Such models relate the configuration of the projection patterns \(\mathbf{l}^j_i\) to the piezoelectric coefficient of interest \(d_{nM}\) of the metamaterials (see derivations in Methods ): $$d_{nM} = \frac{\sum_{i=1}^{N} \int_{V_i} d_{nm}^{i}\, T_{mr}^{i}\, \sigma_{r}^{i}\, \mathrm{d}V_i}{\sum_{i=1}^{N} \int_{V_i} \delta_{Mm}\, T_{mr}^{i}\, \sigma_{r}^{i}\, \mathrm{d}V_i},$$ where \(d_{nm}^{i}\) is the piezoelectric coefficient matrix of the base material ( n = 1–3; m , M = 1–6), \(T_{mr}^{i}\) is the stress-transformation matrix from the local x – y – z coordinate system to the global 1–2–3 coordinate system, \(\sigma_{r}^{i}\) is the stress vector in the local coordinate system ( r = xx , yy , zz , xy , xz , yz ), \(V_i\) is the volume of the i th strut in the node unit and \(\delta_{Mm}\) is the Kronecker delta ( Methods , Supplementary sections 1 and 2 , Supplementary Figs. 1 – 4 ). Configuring the projection patterns generates various architecture designs that occupy different quadrants of the \(\bar{d}_{3M}\) distribution space, as shown in Fig. 1i , where M = 1–3. These families of 3D node units constitute a broad 3D piezoelectric constant selection in which the d 3 M occupy desired quadrants of the property space, in contrast to the piezoelectric coefficients obtained from piezoelectric square foam models 22 (Supplementary section 1 , Supplementary Fig. 5 ). This rich design space creates an enormous palette of novel applications, as demonstrated in later sections. Synthesis and printing of electromechanical metamaterials Our fabrication method for 3D piezoelectric architectures starts by synthesizing surface-functionalized piezoelectric nanoparticles (Fig.
2a,b , see Supplementary section 3 , Supplementary Table 2 for particle properties), dispersing them with ultraviolet-sensitive monomers into highly concentrated, uniform colloids (PZT volume loading up to 50%) that can be sculpted into 3D structures by near-ultraviolet light 33 (Fig. 2c ). While surface functionalization of approximately 4 vol% piezoelectric nanoparticles has produced an appreciable piezoelectric coefficient as compared to non-functionalized dispersion 33 , the trade-off between high piezoelectric responsiveness and processability has limited the realizations of arbitrary piezoelectric 3D micro-architectures with high piezoelectric coefficients. As shown in Fig. 2a , a functionalization agent, (trimethoxysilyl propyl methacrylate) is covalently grafted to the PZT particle surface via siloxide bonds leaving free methacrylate on the surface. The surface functionalization reaction is optimized to maximize surface coverage. These strong covalent bonds between the piezoelectric nanoparticles and the polymer matrix network improve the dispersion quality of the highly concentrated piezo-active colloidal resin (Supplementary section 4 , Supplementary Figs. 6 and 7 ) by creating a sterically hindered surface. Increasing the functionalization level elevates the piezoelectric output of the nanocomposite to reach the upper bound at a given loading concentration (Fig. 2b , Supplementary section 5 , Supplementary Fig. 8 ). Strict control of the thickness of the colloidal paste through the designed recoating system and reduction of oxygen inhibition enable the fabrication of complex 3D piezo-active architectures with fine features ( Methods , Supplementary section 6 ) from a range of concentrated colloidal particles entrapped with ultraviolet-sensitive monomers (Fig. 2c–d , Supplementary Figs. 9 – 11 ). This versatile process is not limited to PZT. 
Surface functionalization can be implemented to enhance the response of a wide range of piezoelectrics (for example, barium titanate, BTO) or other functional materials such as multiferroics (for example, bismuth ferrite). The as-fabricated nanocomposite system does not require post-heat treatment, and achieves high structural fidelity and uniformity. Configuring the photo-sensitive monomer compositions enables independent tuning of the composite stiffness, allowing us to access rigid to flexible piezo-active materials to convert mechanical stress to voltage signals (Supplementary section 7 , Supplementary Figs. 12 – 14 , Supplementary Table 3 , Supplementary Videos 1 , 2 ), as well as energy harvesting (Supplementary section 7 , Supplementary Fig. 15 ). Fig. 2: Surface functionalization of PZT with photosensitive monomers and 3D printing of piezoelectric metamaterials with complex micro-architectures. a , Schematic illustration of surface functionalization method and strong bonds between the nanoparticles and the polymer matrix after the ultraviolet curing process. b , Schematic illustration of the relationship between the surface functionalization level and the piezoelectric response. The piezoelectric response increases with the surface functionalization level as a result of increasing stress transfer. c , Schematic illustration of the high-resolution additive manufacturing system. d , Scanning electron microscope images of 3D-printed piezoelectric microlattices. Scale bars, 300 µm. Measurement of 3D piezoelectric responses To evaluate the piezoelectric responses of the designed piezoelectric metamaterials, we printed cubic lattices comprised of periodic unit cells stacked along the three principal directions and poled them under uniform electric fields (see Methods , Supplementary section 8 , Supplementary Figs. 16 and 17 for poling the samples).
A shaker with an integrated force sensor exerts cyclical loadings on the samples (details in Supplementary section 9 ). We measured the generated voltages (in the 3-direction) induced by the applied stress with a resistor (40 MΩ) connected to a data acquisition system. We found excellent agreement of the measured { d 31 , d 32 , d 33 } signatures with the designed response to force from different directions. Here, N = 5 designs (3-strut projection pattern, Fig. 1a,b ) are used to demonstrate the different voltage output patterns due to the distinct distributions of d 3 M . As identical cyclic loadings (about 0.5 N, sawtooth loading profile) are applied along three orthogonal directions, significant differences in the voltage output patterns are observed for three distinct designs (Fig. 3a–c , Supplementary Video 3 ; the voltage responses generated from square-loading and unloading profile are shown in Supplementary Fig. 18 ). The N = 5 piezoelectric metamaterial of θ = 75° (Fig. 3a ) outputs a positive voltage when loaded in the 3-direction, while the sample generates a negative voltage when loaded in the 1- or 2-direction. In contrast, Fig. 3b shows that the voltage outputs of our N = 5, θ = 90° lattice in the 3-direction are positive while voltage output in the 1- or 2- direction is suppressed, exhibiting highly anisotropic response. By further increasing θ to 120°, the voltage outputs in all 1-, 2- and 3-directions are positive when loaded in any direction, as shown in Fig. 3c , owing to its all-positive d 3 M distribution. Fig. 3: Measurement of 3D piezoelectric responses. a – c , Optical images of representative piezoelectric metamaterials comprised of N = 5 node units and their corresponding real-time voltage outputs under impact coming from the 1-, 2- and 3-directions, respectively. 
d , Experimental and finite element analysis (FEA) results of the effective piezoelectric voltage constant \(g_{33}^{\mathrm{eff}}\) versus the relative density of N = 8 and N = 12 lattice materials. e , Comparison of specific piezoelectric charge coefficients and elastic compliance between the piezoelectric metamaterials presented in this study and typical piezoelectric materials 3 , 20 , 33 , 39 , 40 , 41 , 42 , 43 , 44 , 45 . f , Drop-weight impact test on the as-fabricated piezoelectric lattice ( N = 12). g , The real-time voltage output of the lattice corresponding to various drop weights. The transient impact stress activates the electric displacement of the metamaterial in the 3-direction, shown as the trace of the voltage output against the impact time. h , Impulse pressure and transmitted pressure versus the mass of the drop weights. The significant gap (shaded area) between the detected impulse pressure and transmitted pressure reveals the simultaneous impact energy absorption and self-monitoring capability of the 3D piezoelectric metamaterial. Full size image To assess the mechanical-electrical conversion efficiency, the effective piezoelectric voltage constant g 33 , defined as the induced electric field per unit applied stress, was quantified by measuring the d 33 and permittivities of the as-fabricated metamaterials. The resistor used in the apparatus is replaced by a circuit to quantify the charge generated in response to applied stress (Supplementary Fig. 19 , Supplementary section 9 , Supplementary Table 4 ). The d 33 is then quantified as the ratio of the generated charge to the applied load, and g 33 follows from the measured d 33 and permittivity. The g 33 results of the metamaterials comprised of highly connected structures ( N = 12 and N = 8) are shown in Fig. 3d (see Supplementary section 10 , Supplementary Fig. 20 for more details). Remarkably, g 33 increases with decreasing relative density, indicating a potential application as a simultaneously light-weight and highly responsive sensor. 
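The relation between the two effective constants can be sketched numerically. This is a minimal illustration with placeholder inputs (the paper's measured values appear in Fig. 3d), assuming the standard definition g33 = d33 / (ε0·εr):

```python
EPS0 = 8.854e-12  # vacuum permittivity (F/m)

def g33_effective(d33, rel_permittivity):
    """Effective piezoelectric voltage constant g33 = d33 / (eps0 * eps_r),
    i.e. the induced electric field per unit applied stress (V*m/N)."""
    return d33 / (EPS0 * rel_permittivity)

# Placeholder inputs, not measured values: d33 = 50 pC/N, eps_r = 100
g33 = g33_effective(50e-12, 100.0)
```

Because g 33 scales inversely with permittivity, a lattice whose effective permittivity falls faster with relative density than its d 33 does will show a rising g 33 as density decreases, consistent with the trend reported in Fig. 3d.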
The measured d 33 values normalized by mass density (that is, | d 33 |/ ρ ) and the compliance are plotted against those of other piezoelectric materials (Fig. 3e , Supplementary section 11 ). We found that these low-density and flexible piezoelectric metamaterials achieve over twice the specific piezoelectric constant of piezoelectric polymers (for example, polyvinylidene fluoride, PVDF) and a variety of flexible piezoelectric composites (Fig. 3e , Supplementary Fig. 21 ). Additionally, enhancement of the hydrostatic figures of merit can be obtained via unit cell designs with all identical signs of the g 3M {+++} and d 3M {+++} coefficients (Supplementary Fig. 20 ). This enhanced piezoelectric constant along with the highly connected 3D micro-architecture makes the 3D piezoelectric metamaterial an excellent candidate for simultaneous impact absorption and self-monitoring. A series of standard weights ranging from 10 g to 100 g was sequentially dropped onto the as-fabricated 3D piezoelectric lattice ( N = 12) attached on a rigid substrate (Fig. 3f ) to impact the piezoelectric metamaterial (high-speed camera movie shown in Supplementary Video 4 ). The transient impact stress activates the electric displacement of the metamaterial in the 3-direction, shown as the trace of the voltage output against the impact time (Fig. 3g ). The impulse pressure on the piezoelectric metamaterial calculated via the measured d 33 , and the measured pressure transmitted to the rigid substrate against time are plotted in Fig. 3h . The significant gap (shaded area) between the impulse pressure and transmitted pressure reveals the impact energy absorption and protection capability of our piezoelectric 3D metamaterial as a potential smart infrastructure 34 , 35 (Supplementary sections 7 and 12 , Supplementary Fig. 22 ). 
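The impulse-pressure estimate from the voltage trace can be sketched as follows. The chain Q = C·V, F = Q/d33, P = F/A is one plausible reading of "calculated via the measured d 33"; the capacitance, d 33 and contact area below are placeholders, not the experimental values:

```python
def impulse_pressure(peak_voltage, capacitance, d33, contact_area):
    """Peak impact pressure inferred from a piezoelectric voltage trace:
    charge Q = C*V, equivalent force F = Q/d33, pressure P = F/A."""
    charge = capacitance * peak_voltage   # C
    force = charge / d33                  # N
    return force / contact_area           # Pa

# Placeholder numbers for illustration only:
p = impulse_pressure(peak_voltage=2.0, capacitance=10e-9,
                     d33=50e-12, contact_area=1e-4)  # 4 MPa
```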
Location and directionality sensing The 3D digital metamaterial building blocks can be further stacked or printed as smart infrastructures capable of time-resolved pressure self-sensing and mapping without application of an external sensor. Here, piezoelectric metamaterials of N = 12 are selected and 3D printed into a four-pier piezoelectric bridge with a non-piezoelectric bridge deck (Fig. 4a,b ). The external closed circuits with a data acquisition system are connected to the top and bottom surfaces of the piers to monitor the voltage outputs. The 3D-printed piezoelectric metamaterial bridge can directly map the magnitude as well as the location of potentially damaging deformations throughout its structure. To demonstrate, steel balls with a mass of 8 g are sequentially dropped at random onto the deck. The resulting voltage is collected at each of the four electrodes with the amplitude depending on the electrode proximity. The envelopes of the voltage outputs (plotted in Fig. 4c,d ) independently monitor the strain amplitude. The impact strain map of the deck can be plotted to determine the impact location (Fig. 4e,f , Supplementary section 13 , Supplementary Fig. 23 ). These 3D piezoelectric infrastructures allow one to obtain time-resolved self-monitoring information (for example, displacements, forces, strain mapping) throughout a structure 36 , 37 without additional external sensors. Fig. 4: Assembly of architected metamaterial blocks as intelligent infrastructures. a , b , Camera image of a self-monitoring 3D-printed piezoelectric bridge infrastructure. All four piers are poled together along the 3-direction and the electrodes are attached to the top and bottom surface of the piers. V 1 to V 4 denote the voltage output between the corresponding electrodes. The locations of the dropping steel ball are indicated with dashed lines. c , d , Real-time voltage outputs from the self-monitoring piezoelectric piers. 
e , f , The normalized strain amplitude map (colour scale) converted from the voltage map indicates the different locations of the impact. Full size image Taking advantage of the distinct directional d 3M design space, stacking multiple piezoelectric building blocks, each with a tailored directional response, allows us to program the voltage output patterns as binary codes (that is, positive or negative voltage). These stackable metamaterial blocks provide a method of determining directionality, which we leverage to sense pressure from arbitrary directions 38 . Figure 5a demonstrates the directionality sensing concept using a sensing infrastructure assembled from an array of piezoelectric metamaterial cubes with pre-configured d 3M tensor signatures { d 31 , d 32 , d 33 } distributed across different quadrants. Piezoelectric metamaterial cubes on the outer surface of the cube (surfaces 1 to 5, as labelled) are connected to voltage output channels, with intersecting faces of the cube sharing one voltage output channel (Supplementary Fig. 24 ). Pre-programming stacked d 3M combinations for each face of the cube allows the output voltage binary map to be uniquely registered with a given pressure applied on that face. The direction and magnitude of any arbitrary force can then be superposed and determined from the collected voltage maps (Fig. 5a ). Fig. 5: Force directionality sensing. a , Illustration of the force directionality sensing application using piezoelectric metamaterials stacked from four types of designed building blocks to sense arbitrary force directions. b , As-fabricated piezoelectric infrastructure comprised of stacked architectures with encoded piezoelectric constants. c , Voltage output patterns corresponding to different impact directions indicated by red arrows. The insets show the binary voltage patterns registered with different impact directions. 
The impact force in the 1-direction is registered with permutation voltage matrix [−,−,+,+], with [+,+,−,+] for the 2-direction and [−,−,−,+] for the 3-direction, respectively. Full size image As a proof-of-concept demonstration of directionality sensing, we stacked a piezoelectric metamaterial infrastructure comprised of four cubic units with their unique, designed 3D piezoelectric signatures (Fig. 5b,c ). The output voltage binary map uniquely registers the corresponding force direction. When impacted from the 1-, 2- and 3-directions (labelled I, II, III and IV in Fig. 5d ), three distinct voltage outputs are detected, each correlated with the original respective impact direction (Fig. 5d , Supplementary Video 5 ). The impact force in the 1-direction is registered with permutation voltage matrix [−,−,+,+], with [+,+,−,+] for the 2-direction and [−,−,−,+] for the 3-direction, respectively (Fig. 5d ). These digitalized, binary output voltage maps, originating from the preconfigured piezoelectric constant signatures, decode the directionality of the impact as well as its magnitude. Outlook This work presents a method of designing electrical–mechanical coupling anisotropy and orientation effects, producing them via additive manufacturing (3D printing) of highly responsive piezoelectric materials. This creates the freedom to inversely design an arbitrary piezoelectric tensor, including symmetry-conforming and symmetry-breaking properties, transcending the common coupling modes observed in piezoelectric monoliths and foams. We see this work as a step towards rationally designed 3D transducer materials in which users can design, amplify or suppress any operational modes ( d nM ) for target applications. Design and tessellation of the piezo-active units can lead to a variety of smart-material functionalities, including vector and tactile sensing, source detection, acoustic sensing and strain amplifications from a fraction of their parent materials. 
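The sign-pattern decoding described above (permutation voltage matrices [−,−,+,+], [+,+,−,+] and [−,−,−,+]) amounts to a lookup over channel polarities. A minimal sketch, assuming four voltage channels ordered as in Fig. 5 and a simple zero threshold (both are assumptions, not details given in the text):

```python
# Sign patterns taken from the text; channel ordering and threshold are assumed.
SIGNATURES = {
    ('-', '-', '+', '+'): '1-direction',
    ('+', '+', '-', '+'): '2-direction',
    ('-', '-', '-', '+'): '3-direction',
}

def decode_impact_direction(voltages, threshold=0.0):
    """Map the polarity pattern of the four channel voltages to the
    registered impact direction ('unknown' if the pattern is not encoded)."""
    pattern = tuple('+' if v > threshold else '-' for v in voltages)
    return SIGNATURES.get(pattern, 'unknown')
```

In this scheme the magnitude of the impact is carried by the voltage amplitudes, while the direction is carried entirely by the binary signature.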
Whereas most 3D printing processes are capable of processing structural materials (polymer, metal or ceramics), multifunctional materials are particularly challenging owing to the inherent trade-off between processing compatibilities and functional properties. In this work, covalent bonding of concentrated piezoelectric nanocrystals with entrapped ultraviolet-sensitive monomers allows the attainment of high piezoelectric coefficients at a given volume loading. Our fabrication methods can be extended to lead-based or lead-free piezoelectric ceramics (PZT, BTO and so on) and other functional materials, allowing high-fidelity printing of complex 3D functional architectures. These 3D-printed multi-functional materials, with simultaneously tuned structural and transduction properties throughout their micro-architectures, eliminate requirements for sensor array deployment, suggesting applications from soft, conformable transducers to rigid, energy-absorbing smart structures. Methods Designing d 3M based on unit cell patterns We developed an analytical model to establish the relationship between the piezoelectric charge constant tensor and the projection pattern parameters. The effective piezoelectric charge constant d nKL is defined to correlate the induced effective electric displacement D n of a 3D unit cell with applied stress σ KL as follows: $$D_n = d_{nKL}\sigma _{KL}$$ (1) D n , d nKL and σ KL represent the effective electric displacement field, the effective piezoelectric charge constant tensor, and externally applied stress field defined in the global 1–2–3 system, respectively (Fig. 1a , n , K , L = 1–3). 
We compute the d nKL of a node unit under applied stress by collecting and volume-averaging the electric displacement contributions \(D_n^{(i)}\) and stress in equilibrium with σ KL from all strut members L i : $$\left\{{\begin{array}{*{20}{c}} {D_n = \frac{1}{V}\mathop {\sum }\limits_{i = 1}^N {\int}_{V_i} {D_n^{(i)}{\mathrm{d}}V_i} } \\ {\sigma _{KL} = \frac{1}{V}\mathop {\sum }\limits_{i = 1}^N {\int}_{V_i} {{\bf{{\delta}}} _{Kk}{\bf{{\delta}}} _{Ll}{\bf{{\sigma}}} _{kl}^{(i)}{\mathrm{d}}V_i} } \end{array}} \right.$$ (2) where V i is the volume of the i th strut; V is the effective volume of the node unit cell; \({\bf{{\sigma}}} _{kl}^{(i)}\) is the stress state of the i th strut in the global 1–2–3 system, and k , l = 1–3; δ Kk and δ Ll represent the Kronecker delta to identify the stress components that are in equilibrium with the externally applied load. We introduce a local beam coordinate system x–y–z for struts (Supplementary Fig. 4 ), and relate the stress in the global 1–2–3 system ( \({\bf{{\sigma}}} _{kl}^{(i)}\) ) and local x–y–z system ( \({\bf{{\sigma}}} _{pq}^{(i)}\) ) by a linear transformation operator containing strut orientation information: $${\bf{{\sigma}}} _{kl}^{(i)} = {\bf{N}}_{kp}^{(i)}{\bf{{\sigma}}} _{pq}^{(i)}\left( {{\bf{N}}_{lq}^{\left( i \right)}} \right)^T$$ (3) where p , q = x , y , z and N ( i ) represents the coordinate system transformation matrix containing components with respect to the projection pattern angle ( θ j , j = 1–3) 46 and has the form: $$\begin{array}{l}{\bf{N}}^{(i)} = \left[ {\begin{array}{*{20}{c}} {\cos \theta _2} & 0 & {\sin \theta _2} \\ 0 & 1 & 0 \\ {\sin \theta _2} & 0 & { - \cos \theta _2} \end{array}} \right]\left[ {\begin{array}{*{20}{c}} 1 & 0 & 0 \\ 0 & {\cos \theta _1} & {\sin \theta _1} \\ 0 & {\sin \theta _1} & { - \cos \theta _1} \end{array}} \right] \left[ {\begin{array}{*{20}{c}} {\cos \theta _3} & {\sin \theta _3} & 0 \\ {\sin \theta _3} & { - \cos \theta _3} & 0 \\ 0 & 0 & 1 
\end{array}} \right]\end{array}$$ (4) Substituting equation (3) into equations (1) and (2) yields the expression of the effective charge constants d nKL : $$\begin{array}{l}d_{nKL} = \frac{{D_n}}{{\sigma _{KL}}} = \frac{{\mathop {\sum }\nolimits_{i = 1}^N A_i\left| {L_i} \right|{\bf{d}}_{nkl}{\bf{N}}_{kp}^{(i)}{\bf{{\sigma}}} _{pq}^{(i)}({\bf{N}}_{lq}^{\left( i \right)})^T}}{{\mathop {\sum }\nolimits_{i = 1}^N A_i\left| {L_i} \right|{\bf{{\delta}}} _{Kk}{\bf{{\delta}}} _{Ll}{\bf{N}}_{kp}^{(i)}{\bf{{\sigma}}} _{pq}^{(i)}({\bf{N}}_{lq}^{\left( i \right)})^T}} = \frac{{\mathop {\sum }\nolimits_{i = 1}^N {\bf{d}}_{nkl}{\bf{N}}_{kp}^{(i)}{\bf{{\sigma}}} _{pq}^{(i)}({\bf{N}}_{lq}^{\left( i \right)})^T}}{{\mathop {\sum }\nolimits_{i = 1}^N {\bf{{\delta}}} _{Kk}{\bf{{\delta}}} _{Ll}{\bf{N}}_{kp}^{(i)}{\bf{{\sigma}}} _{pq}^{(i)}({\bf{N}}_{lq}^{\left( i \right)})^T}}\end{array}$$ (5) where A i and | L i | are the area of the cross-section and length of the i th strut, respectively. These two variables are assumed to be the same for all struts in the node unit. This allows the design of d nKL —or equivalently in Voigt notation, d nM —according to the projection pattern configurations (by convention, KL → M : 11→1; 22→2; 33→3; 12→4; 13→5; 23→6). We demonstrate the application of the method by designing d nM according to the relative orientation θ between the projected struts (see examples in Supplementary sections 1 and 2 , and Supplementary Figs. 1 – 4 ). Here, to convert the tensor notation ( KL → M ), the coordinate system transformation matrix N ( i ) (3 × 3 dimensions) is expanded and rearranged to form the stress transformation matrix T ( i ) (6 × 6 dimensions). Surface functionalization of the piezoelectric particles All chemicals were purchased from Sigma-Aldrich and used as received. 
For functionalization, 0.6 g of PZT was ultrasonically dispersed (VWR Scientific Model 75 T Aquasonic, at about 90 W and about 40 kHz) in 50 g of deionized water with 1.049 g of glacial acetic acid for 2 h. To this, 1.049 g of 3-(trimethoxysilyl)propyl methacrylate (TMSPM) was added. The mixture was then refluxed while stirring. Particles were cleaned by centrifugation, followed by discarding the supernatant, and then dispersed in ethanol for at least two cycles. Particles were dried overnight under vacuum or gentle heat. The resulting 3D-printable functionalized PZT nanocomposites achieved a controlled volume loading from 2.5 vol% to 50 vol% (equivalent to 16 wt% to 88 wt%). High-resolution projection stereolithography The functionalized particles were sonicated in acetone and mixed with photosensitive resin for 3D printing; the acetone was then evaporated by gentle heat and stirring. High-resolution, large-area stereolithography systems were used for the piezoelectric architected material fabrication. Our 3D printing configurations for processing these colloidal piezoelectric feedstocks with a range of loading concentrations are described in Supplementary section 6 (Supplementary Figs. 9 – 11 ). The 3D models were built and sliced into 2D images using the design method described in ref. 47 and Netfabb 48 . These 2D images are used to pattern light from a near-ultraviolet light-emitting diode (LED) using a programmable dynamic photomask. A reductive lens is used to isotropically reduce the near-ultraviolet light pattern to the desired length scale. A larger-area build is generated by scanning and reflecting the light patterns within the horizontal x – y plane while maintaining the resolution (Fig. 2c ). Aligned with the optics is a substrate, which can be coated with the functionalized ultraviolet-sensitive colloidal paste to a controlled thickness. When illuminated, the colloid replicates the 2D image as a solid layer bound to the substrate or any previous layers. 
Recoating of the now-solidified layers and the substrate with the colloidal solution, followed by additional exposures from subsequent 2D image slices and synchronized movement of the precision stage, builds the 3D part with complex architectures. The as-fabricated piezoelectric lattices were measured to have strut thickness variation within 3% via X-ray microtomography using the method described in ref. 49 . Poling of the piezoelectric metamaterials The corona poling method was used to pole the as-fabricated samples (Supplementary Fig. 16 ). The samples were placed on a planar electrode connected to a high-voltage power supply (Glassman High Voltage Inc., Series EK) and poled under 32 kV at room temperature for 1 h. The experimental setup was also equipped with a digital multimeter (Agilent 34410A) for measuring the current through the sample under voltage. To avoid screening of the bottom electrode, we kept samples inside highly insulating liquids (details in Supplementary section 8 , Supplementary Figs. 16 and 17 ). Characterization of the piezoelectric metamaterials To evaluate the effective piezoelectric charge constants, a piezoelectric testing fixture was set up to record the voltage output of the samples while loads were applied. The electric charges generated from the samples were calculated by multiplying the voltage output by the capacitance of the circuit (Supplementary Fig. 19b ). The instrument was fully calibrated using two commercial PZT cylindrical samples with piezoelectric coefficients of 540 pC N −1 and 175 pC N −1 , respectively (Supplementary Fig. 19c ). The effect of triboelectrification was eliminated by comparing the measurements before and after polarization (Supplementary Fig. 19d ). To measure the signatures of the voltage output directly, a resistor with 40 MΩ resistance was connected to the sample and the voltage was measured directly across the resistor. 
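The charge-based characterization above can be sketched in two steps: Q = C·V per the description, then d 33 = Q/F. The numbers below are illustrative only, chosen so the result matches the 540 pC N −1 calibration sample mentioned in the text:

```python
def charge_from_voltage(voltage, capacitance):
    """Generated charge inferred from the measured voltage: Q = C * V."""
    return capacitance * voltage

def d33_from_charge(charge, load):
    """Effective piezoelectric charge constant: d33 = Q / F (C/N)."""
    return charge / load

# Illustrative: 0.27 V across an assumed 1 nF measurement capacitance
# under an assumed 0.5 N load
q = charge_from_voltage(0.27, 1e-9)   # 2.7e-10 C
d33 = d33_from_charge(q, 0.5)         # 5.4e-10 C/N = 540 pC/N
```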
Finite element analysis ABAQUS 6.14 50 was used to conduct the finite element analysis. The base material properties used are summarized in Supplementary section 14 and Supplementary Table 5 . A ten-node quadratic piezoelectric tetrahedron (C3D10E) element was used to mesh the unit cells. Four degrees of freedom are allowed at each node: three translational degrees of freedom and one electrical potential degree of freedom. Periodic boundary conditions were applied to the node unit to capture the complete electromechanical response of the metamaterials 51 . Stresses are applied on the surfaces perpendicular to the 1-, 2- or 3-directions individually to calculate d 3M . Low strain and low linear deformation are also ensured. Data availability All data generated during this study are included within the paper and its Supplementary Information files and/or are available from the corresponding author upon request.
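As a rough numerical illustration of the design model in the Methods (equations (1), (2) and (5)): if each strut is idealized as a purely axial load path along its unit direction t, its global stress is the rank-one tensor σ_a·t⊗t and only the normal components contribute to D 3. This simplification drops the bending and shear terms and the full transformation matrices of equation (4); the intrinsic constants below are placeholders, not the paper's material data:

```python
import math

# Placeholder intrinsic charge constants of the poled base material (C/N):
D31, D32, D33 = -60e-12, -60e-12, 130e-12

def strut_d3(t, sigma_axial=1.0):
    """D3 contribution of one axially loaded strut with unit direction t:
    D3 = sigma_a * (d31*t1^2 + d32*t2^2 + d33*t3^2)."""
    t1, t2, t3 = t
    return sigma_axial * (D31 * t1 ** 2 + D32 * t2 ** 2 + D33 * t3 ** 2)

def effective_d33(struts):
    """Volume-averaged D3 of identical struts under unit axial stress,
    normalised by the averaged stress component in the 3-direction."""
    num = sum(strut_d3(t) for t in struts)
    den = sum(t[2] ** 2 for t in struts)
    return num / den

# A vertical strut recovers the intrinsic d33; tilting mixes in the
# negative d31/d32 terms and lowers (or can flip) the effective response.
s = math.sqrt(0.5)
d_vert = effective_d33([(0.0, 0.0, 1.0)])   # 130 pC/N
d_tilt = effective_d33([(s, 0.0, s)])       # 70 pC/N
```

With these placeholder constants, a vertical strut returns 130 pC N −1 while a 45°-tilted strut returns 70 pC N −1; tilting further lets the negative d 31 / d 32 terms dominate and change the sign, which is the design freedom the projection-pattern angle θ exploits.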
The piezoelectric materials that inhabit everything from our cell phones to musical greeting cards may be getting an upgrade thanks to work discussed in the journal Nature Materials released online Jan 21. Xiaoyu 'Rayne' Zheng, assistant professor of mechanical engineering in the College of Engineering, and a member of the Macromolecules Innovation Institute, and his team have developed methods to 3-D print piezoelectric materials that can be custom-designed to convert movement, impact and stress from any direction to electrical energy. "Piezoelectric materials convert strain and stress into electric charges," Zheng explained. Piezoelectric materials come in only a few defined shapes and are made of brittle crystal and ceramic, the kind that require a clean room to manufacture. Zheng's team has developed a technique to 3-D print these materials so they are not restricted by shape or size. The material can also be activated, providing the next generation of intelligent infrastructures and smart materials for tactile sensing, impact and vibration monitoring, energy harvesting, and other applications. Unleash the freedom to design piezoelectrics Piezoelectric materials were originally discovered in the 19th century. Since then, advances in manufacturing technology have led to the requirement of clean rooms and a complex procedure that produces films and blocks, which are connected to electronics after machining. The expensive process and the inherent brittleness of the material have limited the ability to maximize the material's potential. The printed flexible sheet of piezoelectric material. Credit: Virginia Tech Zheng's team developed a model that allows them to manipulate and design arbitrary piezoelectric constants, resulting in the material generating electric charge movement in response to incoming forces and vibrations from any direction, via a set of 3-D printable topologies. 
Unlike conventional piezoelectrics, where electric charge movements are prescribed by the intrinsic crystals, the new method allows users to prescribe and program voltage responses to be magnified, reversed or suppressed in any direction. "We have developed a design method and printing platform to freely design the sensitivity and operational modes of piezoelectric materials," Zheng said. "By programming the 3-D active topology, you can achieve pretty much any combination of piezoelectric coefficients within a material, and use them as transducers and sensors that are not only flexible and strong, but also respond to pressure, vibrations and impacts via electric signals that tell the location, magnitude and direction of the impacts within any location of these materials." 3-D printing of piezoelectrics, sensors and transducers A factor in current piezoelectric fabrication is the natural crystal used. At the atomic level, the orientation of atoms is fixed. Zheng's team has produced a substitute that mimics the crystal but allows for the lattice orientation to be altered by design. "We have synthesized a class of highly sensitive piezoelectric inks that can be sculpted into complex three-dimensional features with ultraviolet light. The inks contain highly concentrated piezoelectric nanocrystals bonded with UV-sensitive gels, which form a solution—a milky mixture like melted crystal—that we print with a high-resolution digital light 3-D printer," Zheng said. The team demonstrated the 3-D printed materials at a scale measuring fractions of the diameter of a human hair. "We can tailor the architecture to make them more flexible and use them, for instance, as energy harvesting devices, wrapping them around any arbitrary curvature," Zheng said. "We can make them thick, and light, stiff or energy-absorbing." The material has sensitivities 5-fold higher than flexible piezoelectric polymers. 
The stiffness and shape of the material can be tuned and produced as a thin sheet resembling a strip of gauze, or as a stiff block. "We have a team making them into wearable devices, like rings, insoles, and fitting them into a boxing glove where we will be able to record impact forces and monitor the health of the user," said Zheng. "The ability to achieve the desired mechanical, electrical and thermal properties will significantly reduce the time and effort needed to develop practical materials," said Shashank Priya, associate VP for research at Penn State and former professor of mechanical engineering at Virginia Tech. New applications The team has printed and demonstrated smart materials wrapped around curved surfaces, worn on hands and fingers to convert motion, and harvest the mechanical energy, but the applications go well beyond wearables and consumer electronics. Zheng sees the technology as a leap into robotics, energy harvesting, tactile sensing and intelligent infrastructure, where a structure is made entirely with piezoelectric material, sensing impacts, vibrations and motions, and allowing for those to be monitored and located. The team has printed a small smart bridge to demonstrate its applicability to sensing the locations of dropping impacts, as well as its magnitude, while robust enough to absorb the impact energy. The team also demonstrated their application of a smart transducer that converts underwater vibration signals to electric voltages. "Traditionally, if you wanted to monitor the internal strength of a structure, you would need to have a lot of individual sensors placed all over the structure, each with a number of leads and connectors," said Huachen Cui, a doctoral student with Zheng and first author of the Nature Materials paper. "Here, the structure itself is the sensor—it can monitor itself."
10.1038/s41563-018-0268-1
Earth
Cracks in Arctic sea ice turn low clouds on and off
Nature Communications (2020). DOI: 10.1038/s41467-019-14074-5 Journal information: Nature Communications
http://dx.doi.org/10.1038/s41467-019-14074-5
https://phys.org/news/2020-01-arctic-sea-ice-clouds.html
Abstract Leads are a key feature of the Arctic ice pack during the winter owing to their substantial contribution to the surface energy balance. According to the present understanding, enhanced heat and moisture fluxes from high lead concentrations tend to produce more boundary layer clouds. However, described here in our composite analyses of diverse surface- and satellite-based observations, we find that abundant boundary layer clouds are associated with low lead flux periods, while fewer boundary layer clouds are observed for high lead flux periods. Motivated by these counterintuitive results, we conducted three-dimensional cloud-resolving simulations to investigate the underlying physics. We find that newly frozen leads with large sensible heat flux but low latent heat flux tend to dissipate low clouds. This finding indicates that the observed high lead fractions likely consist of mostly newly frozen leads that reduce any pre-existing low-level cloudiness, which in turn decreases downwelling infrared flux and accelerates the freezing of sea ice. Introduction Leads are quasi-linear openings within the interior of the polar ice pack, where the ocean is exposed directly to the atmosphere 1 . Leads range in width from several meters to tens of kilometers with narrow leads most abundant and in length from hundreds of meters to hundreds of kilometers (Supplementary Fig. 1 ). Due to the extreme air–water temperature contrast (20–40 °C), turbulent heat fluxes over leads can be two orders of magnitude larger than over the ice surface in winter 2 . While winter leads climatically cover 2–3% of the total surface area of the central Arctic and 6–9% of the Arctic peripheral seas (Supplementary Figs. 2 and 3 ), these large heat fluxes dominate the wintertime heat budget of the Arctic boundary layer 2 , 3 , 4 , 5 . Moreover, leads are also a source of moisture which can induce boundary layer clouds locally 6 . 
The presence of these clouds has the potential to extend the thermodynamic impact of leads over large-scale regions through their vertical and horizontal development 7 , 8 as well as the enhanced downward infrared radiative flux 9 , 10 , 11 . Boundary layer cloud cover also exerts a significant impact on the equilibrium sea ice thickness 10 , which is a key indicator of changes in Arctic heat transport 4 . Therefore, low-level clouds significantly influence how leads impact the Arctic surface energy balance. Correctly understanding the lead-modified surface energy budget is important. However, accurate determination of wintertime low-level cloud fraction over the Arctic Ocean from satellites is still a challenge owing to the presence of the underlying leads and sea ice, and the polar night. Previous observations demonstrate a replacement of thick, multi-year ice by thin, first-year ice 12 , a gradual acceleration in sea ice drift and wind stress 13 , increases in sea ice mean strain rate and the associated deformation rate 14 , and a marked widening of the marginal ice zone 15 in the Arctic. All these observational findings suggest an increasing presence of Arctic leads in the climate system, which motivates a study to understand associated changes in boundary layer clouds and thus estimate the associated large-scale heat balance in response to changes in lead fraction. As a first step in this direction, here we explore the potential influences that leads have on boundary layer clouds in the wintertime Arctic. Due to the limitation of observations in the Arctic, previous studies have mostly used model simulations to examine the clouds or convection induced by a single lead in idealized or simplified situations 8 , 11 , 16 , 17 , 18 , 19 , 20 , 21 , 22 , 23 . Few of those studies investigated the low-level clouds over a realistic ensemble of leads. 
In this study, diverse ground- and satellite-based observed records of the Arctic system near Barrow, Alaska, are combined with a three-dimensional cloud-resolving model to examine the effects of leads on boundary layer clouds. Based on the observations, we first attempt to establish a statistical relationship between the large-scale sensible heat flux due to leads and low-level cloud frequency. Furthermore, a cloud-resolving model is used to better understand the underlying physics of the observed lead-low cloud associations. We find low-level cloud occurrence frequency decreases with increasing large-scale lead flux and show that it is induced by the recently frozen leads, which constitute a majority of the satellite-detected lead fraction. Results Observed lead-low cloud associations Previous observational analysis 24 shows that low-level clouds predominate at Barrow throughout the year, especially during fall and winter. In this study, we investigate the effects of leads on low-level clouds at a local scale, offshore within 200 km of the Barrow site (as indicated by the black semicircle in Fig. 1 b, d) over the period 2008–2011 (see Methods). Profiles of the MMCR (Millimeter Wavelength Cloud Radar) derived cloud occurrence as a function of reflectivity (dBZ) for the composites of low and high large-scale lead flux are shown in Fig. 1 a, c, respectively. Based on prior research 25 , clouds are generally associated with dBZ < −10, while drizzle and light precipitation (heavy precipitation) correspond to dBZ between −10 and 10 (dBZ ≥ 10). Current understanding 16 , 20 suggests we would find more boundary layer clouds under high lead flux conditions, but we find just the opposite in our analyses. The curves in Fig. 1 c show that for the high lead flux intervals, clouds within the boundary layer (i.e., 0–1 km) with dBZ > −25 are observed to have a frequency of <10%. However, for the low lead flux periods, up to 40% of the boundary layer clouds have dBZ > −25 (Fig. 
1 a). Fig. 1: MMCR derived cloud occurrence frequency and the corresponding mean lead fraction. a The vertical frequency distribution of cloud occurrence as a function of minimum reflectivity (dBZ) for the low lead flux periods. b Mean lead fraction based on AMSR-E data for four representative low lead flux periods in February 2011 (days 14, 15, 26, 27). c The same as a but for the high lead flux periods. d The same as b but for four representative high lead flux periods in February 2010 (days 19, 22, 23, 24). The coastal region of Alaska is shown in white, Barrow is indicated by a filled black circle, and the top of the 200-km radius semicircle is directed north. Full size image To investigate the robustness of this finding, we used the combined CloudSat-CALIPSO (Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations) 26 , 27 data to obtain profiles of cloud occurrence frequency in the region offshore within 200 km of Barrow and performed a similar analysis for the same composites of low and high lead flux intervals. The low-level cloud occurrence frequency (Supplementary Fig. 4 ) from CloudSat-CALIPSO over this region also decreases as the large-scale lead flux increases. The consistency of lead and low cloud associations between the MMCR and CloudSat-CALIPSO cloud occurrence frequencies suggests that the low-level clouds detected by ground-based remote sensing at Barrow are also representative of the low-level clouds over the adjacent ocean when the wind is onshore and of moderate speed (see Methods). The reliability of this unexpected result depends on effectively separating the effects of large-scale meteorology on the clouds and leads from the effects of leads on the low-level clouds. Therefore, we performed an analysis of the synoptic environments over a Barrow-centered region (Fig. 2 ) and pan-Arctic (Supplementary Fig. 
5 ) for low and high lead flux periods, using hourly MERRA-2 (Modern-Era Retrospective analysis for Research and Applications version 2) reanalysis dataset 28 . The composite analyses for the low and high lead flux periods show similar patterns of geopotential height at each level (1000, 850, and 500 hPa). In both cases, a ridge tilts equatorward with height near Barrow and the adjacent ocean region, though it is slightly weaker in the composite of low lead flux intervals. The eastern flank of the ridge produces northerly wind for our study area. Overall, the two sets of periods have similar meteorological conditions, which minimizes the likelihood that the large-scale meteorology is significantly different between the two sets. The corresponding analysis using the NCEP-NCAR reanalysis produces results very similar to Fig. 2 (not shown). We also examined the meteorological conditions based on in-situ surface measurements at Barrow and the lead fractions that we used to calculate the large-scale lead fluxes (Supplementary Fig. 6 ). The 2-m air temperature, 10-m wind speed and wind direction have similar distributions in the high and low lead flux periods (Supplementary Fig. 6 a–c, e–g). Among all the factors that contribute to the large-scale lead flux, it is found that lead fraction dominates the variations in large-scale lead flux, with a correlation of 0.9 (Supplementary Fig. 6 d). This further demonstrates that meteorological conditions in both regimes resemble each other and cannot account for the differences between these two selected large-scale lead flux regimes. Fig. 2: Large-scale synoptic conditions. a – c Composites of geopotential height (black contours) and wind velocity (arrows) at 1000 hPa (wind is at 10 m), 850 and 500 hPa for the set of low lead flux periods. Color shading indicates 2-m air temperature, 850-hPa temperature, and vertical velocity, respectively. d – f The same as a – c but for the high lead flux periods. 
The domain is 65–85 °N, 90–180 °W, and the Barrow site is indicated by the filled magenta circle. Full size image Investigating the underlying mechanisms The surprising observational results suggest some intriguing physical mechanisms. For example, the dearth of low-level clouds in the high lead-flux case could reflect desiccation by precipitation or down-mixing of dry air by lead-induced convection. Three-dimensional cloud-resolving simulations (see Methods) were conducted in an effort to understand the underlying physics. The simulated boundary layer clouds from the four cases (i.e., OPEN, OPENOPEN, OPENFROZEN, and FROZENFROZEN, see Methods) are displayed in Figs. 3 and 4 . The FROZENFROZEN case is not shown in Fig. 3 because no radiatively significant clouds are generated in this case, as shown in Fig. 4 . The x – z hydrometeor profiles (Fig. 3 ) illustrate the cloud evolution in each case, while the time series of cloud coverage (Fig. 4 ) further quantitatively indicates the differences of cloud amount between these cases. Here a threshold of 3.0 g m −2 is chosen in Figs. 3 and 4 as downwelling LW radiation is particularly sensitive to total water path exceeding 3.0 g m −2 , resulting in an increase in downwelling LW radiation by roughly 6 W m −2 compared to the initial condition. In both OPEN and OPENOPEN cases, extensive clouds are produced downstream of the lead (Fig. 3 ), which is consistent with previous studies 16 , 20 , 22 . A comparison of these two cases demonstrates that more boundary layer clouds are generated if the lead (completely ice-free) area fraction is doubled. The corresponding 6-h averaged cloud fraction is 0.185 in OPENOPEN versus 0.080 in OPEN (Fig. 4 ). This is what we expected based on current understanding. However, if the lead fraction is doubled but the added lead area is frozen (i.e., OPENFROZEN), lead-induced low clouds decrease to 0.066 (Fig. 4 ). 
At 6 h, cloud coverage in the OPENFROZEN case is 0.042, which is 37.3% less than that in the OPEN case. These differences of cloud fraction between the OPEN and OPENFROZEN cases are consistent with our observational analyses in which higher lead flux periods are associated with lower cloud occurrence frequencies, and also indicate that newly frozen leads tend to reduce the low-level clouds generated by an upstream open lead. Fig. 3: Evolution of the simulated clouds response to leads. a The x – z profiles of the cloud condensate (cloud water plus cloud ice mixing ratio, shaded contour lines) and precipitating water (rain and snow, solid contour lines with intervals of 0.002 g kg −1 ) from the OPEN case. b – c The same as a but for the OPENOPEN and OPENFROZEN cases, respectively. For each case, four simulation times (1.5, 3.0, 4.5, and 6.0 hrs) are displayed. The purple bars near the top of the selected domain indicate the place where the clouds are radiatively significant, with a total water path greater than 3.0 g m −2 . All results are averaged along the y direction. The upstream open lead extends from x = 5 km to x = 9 km, as indicated by the two vertical solid lines in the domain, and the downstream open or frozen lead extends from x = 14 km to x = 19 km, as indicated by the two vertical dashed lines. Note that the scales are different on the x and y axis. Full size image Fig. 4: Time series of cloud coverage. Time series of the cloud coverage from the OPEN (black), OPENOPEN (purple), OPENFROZEN (orange), and FROZENFROZEN (blue) cases. The cloud coverage is defined as the fraction of grid points with radiatively significant clouds for the entire domain. Full size image Because of the different thickness of ice between recently frozen leads and open leads (5 cm versus 0 cm in our simulations), the surface fluxes may change significantly. 
We analyzed the changes of surface fluxes to understand why a recently frozen lead can have such a significant impact on the evolution of clouds. To emphasize the impacts of leads on the surface energy budget, all the energy exchange components were averaged over the lead area (Fig. 5 ). As a comparison, over the thick ice surface (i.e., –6 to 0 h), we find longwave radiative fluxes dominate the surface heat exchange, with turbulent latent heat (LH) and sensible heat (SH) fluxes being close to zero. Over the open lead area, all the energy fluxes are significantly increased to different extents (Fig. 5 a–c). Specifically, the 6-h averaged LH and SH in the OPEN case increase to 100 and 357 W m −2 , respectively. The magnitudes of the counterparts over open leads in OPEN, OPENOPEN and OPENFROZEN cases are quite similar. However, these surface fluxes change in different ways over the frozen leads in OPENFROZEN and FROZENFROZEN. Comparing the turbulent fluxes over downstream frozen lead to upstream open lead in OPENFROZEN (Fig. 5 c), we find that LH decreases sharply by 69% to 31 W m −2 due to the exponential dependence of the saturation vapor pressure upon temperature, while less reduction occurs in SH (decreased by roughly 35.8% to 228 W m −2 ). The differences in turbulent fluxes over open leads and frozen leads suggest that the turbulent fluxes (SH and LH) over the newly frozen leads play a key role in dissipating low-level clouds. Fig. 5: Time series of the surface fluxes averaged over lead area. a Time series of surface latent heat flux (LH), sensible heat flux (SH), net longwave radiative flux (LW net ), and the net total flux (NET) averaged over the lead area from the OPEN case. NET excludes conductive heat flux at the top layer, and upward flux is defined as positive. b The same as a but for the OPENOPEN case. Solid curves indicate the fluxes averaged over the upstream open lead while dashed lines indicate the fluxes averaged over the downstream open lead. 
c The same as b but for the OPENFROZEN case with an upstream open lead (solid lines) and a downstream frozen lead (dashed lines). d The same as b but for the FROZENFROZEN case. Full size image Changes of the 2D PDFs of the thermodynamic state characterized by liquid-ice static energy and total water mixing ratio from the OPEN case to the other three cases are displayed in Fig. 6 . The corresponding PDFs for each case are shown in Supplementary Fig. 7 . Results are from a 10 km by 12.8 km by 300 m domain with grid volumes of 200 m by 200 m by 12 m. We sampled the simulation results every 5 min, and calculated the PDF of each thermodynamic state bin for the last two hours. The PDF difference indicates an ~49% increase in the cloud volume fraction from the OPEN to OPENOPEN cases (Fig. 6 a), owing to the shift of a large number of grid volumes to a larger total water mixing ratio and larger liquid-ice static energy from below to above the cloudy-clear boundary (black curve). This is caused by the large vertical turbulent temperature flux (Supplementary Fig. 8 ) and water vapor flux (Supplementary Fig. 10 ) over the two open leads in the OPENOPEN case. However, the fraction of cloudy grid volumes decreases by 47% from OPEN to OPENFROZEN. While a large number of grid volumes shift to a higher liquid-ice static energy, increases in total water mixing ratio are not sufficient to maintain all of the previously cloudy grid volumes (Fig. 6 b). The reduced water vapor supply (Supplementary Figs. 10 , 11 ) as well as the relatively large temperature flux (Supplementary Figs. 8 , 9 ) over the frozen lead, which provides the necessary buoyancy for convection and entrainment of warm air, might play a central role in reducing the relative humidity and thus dissipating these low clouds. As for the difference between FROZENFROZEN and OPEN (Fig. 
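The mixing-diagram analysis of Fig. 6 amounts to differencing two normalised 2-D histograms of the thermodynamic state (liquid-ice static energy over c p versus total water mixing ratio). A sketch under assumed bin counts and ranges, since the paper does not specify its binning:

```python
import numpy as np

def thermo_state_pdf(s_li_over_cp, q_w, bins=(40, 40), extent=None):
    """Normalised 2-D PDF of liquid-ice static energy / cp versus total
    water mixing ratio. Bin counts and ranges here are illustrative."""
    h, xedges, yedges = np.histogram2d(np.ravel(s_li_over_cp), np.ravel(q_w),
                                       bins=bins, range=extent)
    return h / h.sum(), xedges, yedges

def pdf_difference(state_a, state_b, bins=(40, 40), extent=None):
    """PDF(case A) - PDF(case B) on a shared grid, as in Fig. 6.
    Each state is a (s_li_over_cp, q_w) pair of sampled grid volumes."""
    pdf_a, xedges, yedges = thermo_state_pdf(*state_a, bins=bins, extent=extent)
    pdf_b, _, _ = thermo_state_pdf(*state_b, bins=bins, extent=extent)
    return pdf_a - pdf_b, xedges, yedges
```

Because both PDFs are normalised on the same grid, the difference integrates to zero; the cloudy-volume change is then the difference summed over bins above the cloudy-clear boundary curve.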
6 c), almost all the previously cloudy grid volumes in the OPEN case are shifted downward and below the cloudy-clear border due to the reduced water vapor flux, which is consistent with the zero cloud coverage shown in Fig. 4 . Our model simulation results provide a plausible explanation for the counterintuitive observational results: instead of including a larger fraction of completely open leads, the observed high lead fraction must largely consist of newly frozen leads which tend to reduce any pre-existing low-level cloudiness. Fig. 6: Mixing diagram of the difference between high and low lead fraction cases. a Differences of PDFs of thermodynamic state characterized by liquid-ice static energy ( S l i ∕c p ) and total water mixing ratio ( q w , sum of the water vapor, cloud water, and cloud ice) between OPENOPEN and OPEN. Plotted results are from the last two simulation hours for all model grid volumes within a selected domain with x extending from 20 to 30 km and z extending from the bottom level to 300 m, where most clouds occur. The solid black line indicates the theoretical cloudy-clear boundary line, with cloud ice mixing ratio equal to 0.01 g kg −1 . b The same as a but for differences between OPENFROZEN and OPEN case PDFs. c The same as a but for differences between FROZENFROZEN and OPEN case PDFs. In the OPEN case, x extends from 10 to 20 km. The percentages on the top left are the percentage changes of cloudy volume fraction from the OPEN case to the other three cases. Full size image Discussion Our observational analyses indicate an unexpected lead-low cloud association and our numerical experiments show supporting results, that a region with an observed high lead fraction likely contains many newly frozen leads with ice thickness ranging from roughly 2.5 cm up to 30 cm, which largely suppress the latent heat flux with lesser impact on the sensible heat flux 29 and consequently tend to dissipate boundary layer clouds (Supplementary Figs. 12 and 13 ). 
This explanation is reasonable given that open leads usually freeze over after remaining open for just a few hours and thinner ice generally grows faster than thicker ice. Consider a scenario in which no low-level clouds and no leads exist initially, after which leads open and produce low-level clouds. As new leads keep opening and then freezing, the recently frozen lead area will increase, which contributes to an increasing total detected lead area (e.g., Advanced Microwave Scanning Radiometer for EOS (AMSR-E) detected lead fraction). Because the accumulated newly frozen leads tend to dissipate low-level clouds, the large detected lead fraction is accompanied by fewer low clouds. As shown in Fig. 3 , the cloudiness between an upstream open lead and a downstream open or frozen lead is decreased. Additionally, doubling the open lead fraction from OPEN to OPENOPEN does not necessarily double the cloud coverage (i.e., the increase is nonlinear, Fig. 4 ). These results indicate potentially important lead–lead interactions, which will be further examined in our future work. We further compared the surface energy fluxes, averaged over the thick ice surface and within half of the entire domain ( x = 0 to x = 51.2 km), from the OPENOPEN and OPENFROZEN cases, and find that with a lead fraction of 15.6%, freezing half the lead area and the consequent dissipation of low-level clouds could result in an increase in energy loss from the thick sea ice by ~1.6 W m −2 . The entire domain averaged surface fluxes in each case (Supplementary Fig. 14 ) indicate the impacts of the different lead scenarios on the large-scale surface energy budget, compared to the NOLEAD case.
These preliminary findings stress the importance of differentiating open water leads from recently frozen leads, though our simulated cloud coverage is less than that observed owing to the idealized configurations (e.g., invariable lead fraction, periodic lateral boundary conditions, and neglect of moisture advection). Future work will pursue such differentiation and will also examine the effects of lead modulation of low-level clouds on the lead-atmosphere feedbacks and the large-scale surface heat balance over the pan-Arctic. As mentioned above, besides the local surface-based process (i.e., lead fraction change), the large-scale meteorology may also have impacts on the low-level clouds, such as the temperature and humidity advection by the large-scale synoptic flow 30 , 31 , 32 , 33 . An analysis of 11 years of observations at NSA (North Slope of Alaska) 34 showed that distinct synoptic-scale and mesoscale meteorology regimes produce distinct cloud states. For example, high pressure over the Beaufort Sea and the Arctic Ocean with northerly cold, dry anticyclonic flow causes fairly small cloud occurrence frequencies at all levels, while cyclonic flow around Aleutian low pressure systems leads to large cloud occurrence frequencies throughout the column by advecting warm, moist air through the Bering Strait. In our study, the Barrow radiosonde data and surface measurements were used to select two categories with similar atmospheric states, so that the influences of leads on the low-level clouds dominate. However, the reader should bear in mind that a role of the large-scale synoptic conditions in cloudiness cannot be entirely excluded; the slight difference between the cloud occurrence frequencies at levels above the boundary layer (Fig. 1 a, c) may in part reflect this.
The relative roles of large-scale meteorology and large-scale lead fluxes in determining the distribution of low-level clouds will be addressed in our future studies. The modeling study demonstrates that recently frozen leads with a certain range of ice thickness tend to dissipate low clouds produced by open leads. It is hypothesized that both the suppressed water vapor supply and the strong convection induced by the large sensible heat flux provide favorable conditions for the dissipation of the clouds. Details of the processes that determine how low clouds are reduced by frozen leads, such as entrainment of warm air owing to the lead-induced convective plumes, will be further examined in future work. Methods Data and algorithm In this study, we focus on an Arctic region offshore within 200 km of Barrow because of the relatively long record of Arctic clouds collected at Barrow, the NSA’s main research site, by the Atmospheric Radiation Measurement (ARM) 35 program. The datasets used for the observational analyses are summarized in Supplementary Table 1 . Observations for the months of January–April and November–December for the years 2008–2011 were chosen because of the availability of reliable data. For the present analyses, control for meteorological conditions, including large-scale flow regime, is a potential challenge, which we address by using a conditional sampling algorithm (described below) as well as examining the large-scale synoptic pattern from reanalyses. Using the twelve-hourly radiosondes and hourly surface measurements 36 at Barrow along with the daily lead fractions from the AMSR-E 37 , we first selected two subsets of the 12-h time intervals (centered at hour 6 or 18) that have similar atmospheric states at Barrow and over the adjacent ocean except for the large-scale turbulent sensible heat flux due to the leads. Then we compared the cloud occurrence frequency profiles for these two subsets using the NSA MMCR 38 data at Barrow.
Specifically, we first calculated three quantities for all 12-h periods for which the needed data were available: dry fraction is the fraction of the atmosphere below 2000 m above ground level with relative humidity <50% at Barrow; turbulent surface flux from leads is the turbulent surface sensible heat flux per unit area over leads calculated from surface temperature of water (i.e., –2 °C), air temperature, surface pressure, relative humidity, and wind speed at Barrow following the previous work 39 , 40 ; large-scale turbulent surface flux due to leads is the turbulent surface sensible heat flux from leads multiplied by the lead fraction (i.e., "large-scale turbulent sensible heat flux”). The lead fraction was derived from the AMSR-E lead area fraction for the Arctic dataset 37 by averaging values within the semicircle north of Barrow (Fig. 1 b). The AMSR-E lead detection method 37 produces a lead area fraction which is the sum of open leads and those with thin ice cover by using the ratio of brightness temperatures observed at 89–19 GHz. Therefore, both open water leads and thin-ice-covered leads are included in the lead fraction. All 12-h intervals were then conditionally sampled based on the following three criteria: a dry fraction >0.42 to avoid deep cloud layers that could make lead modulation of cloudiness difficult to detect, a wind direction between 240° and 110° (clockwise) to select trajectories that are from the ocean and over leads when present, and a wind speed >2.7 m s −1 (i.e., 117 km 12 h −1 ) to select trajectories that have crossed a substantial proportion of the lead-fraction-analysis region (which extends 200 km from Barrow) during the 12-h analysis period. 
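The conditional-sampling criteria above can be expressed compactly. The function and variable names are mine; only the thresholds (dry fraction > 0.42, wind direction in the 240°–110° clockwise sector, wind speed > 2.7 m s −1) come from the text:

```python
import numpy as np

# Sampling thresholds from the Methods (variable names are mine).
DRY_FRACTION_MIN = 0.42
WIND_DIR_START, WIND_DIR_END = 240.0, 110.0  # clockwise sector crossing north
WIND_SPEED_MIN = 2.7  # m s^-1, i.e. ~117 km per 12 h

def dry_fraction(rh_profile, heights, top=2000.0):
    """Fraction of levels below `top` m above ground with RH < 50%."""
    below = np.asarray(heights) < top
    return float(np.mean(np.asarray(rh_profile)[below] < 50.0))

def in_onshore_sector(wind_dir):
    """True if the direction lies in the 240° -> 110° (clockwise) sector."""
    d = wind_dir % 360.0
    return d >= WIND_DIR_START or d <= WIND_DIR_END

def passes_criteria(rh_profile, heights, wind_dir, wind_speed):
    """Apply all three 12-h-interval selection criteria."""
    return (dry_fraction(rh_profile, heights) > DRY_FRACTION_MIN
            and in_onshore_sector(wind_dir)
            and wind_speed > WIND_SPEED_MIN)
```

Note the wind-direction sector crosses north, so the test is a disjunction rather than a simple range check.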
From the set of all available 12-h intervals that met these three criteria, two subsets were chosen: one with the component of large-scale turbulent sensible heat flux due to leads in the upper third of all such values ("high lead flux regime,” with large-scale flux >8 W m −2 ) and the other with this flux in the lower third of all such values ("low lead flux regime,” with large-scale flux <3.8 W m −2 ). A total of 91 samples (8.0%) were included for high lead flux intervals and 31 (2.6%) for the low lead flux intervals. Finally, the cloud occurrence frequency profiles were calculated for the two subsets using NSA MMCR data, processed with quality control and masking following the previous study 41 , to determine the statistical associations between large-scale lead flux and low-level cloud occurrence. The lead-low cloud associations were further examined using satellite data (offshore within 200 km of Barrow) from the CloudSat Cloud Profiling Radar (CPR) 26 combined with CALIPSO lidar 27 . To characterize the large-scale synoptic environment for the two subsets, we used both the hourly MERRA-2 28 and the six-hourly NCEP (National Centers for Environmental Prediction)–NCAR (National Center for Atmospheric Research) atmospheric reanalysis 42 . Model and experiment design The model we used is the System for Atmospheric Modeling, version 6.11 (SAM) 43 , which is a three-dimensional, nonhydrostatic, cloud-resolving model. SAM employs appropriate physical parameterizations for subgrid-scale processes and is well-suited for simulating boundary layer clouds 44 , 45 , 46 . The infrared radiative fluxes interact with clouds and the surface as well as the atmosphere and provide radiative heating rates. 
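The large-scale lead flux and its tercile-based split into high and low regimes, as described above, can be sketched as follows. The bulk-transfer constants (air density, heat capacity, transfer coefficient) are illustrative stand-ins, not the coefficients of the cited flux scheme:

```python
import numpy as np

def bulk_sensible_heat_flux(t_surf, t_air, wind_speed,
                            rho=1.3, cp=1004.0, ch=1.5e-3):
    """Bulk-aerodynamic sensible heat flux over open water (W m^-2).
    rho, cp and ch are illustrative constants, not the paper's scheme."""
    return rho * cp * ch * wind_speed * (t_surf - t_air)

def large_scale_lead_flux(sh_over_leads, lead_fraction):
    """Per-lead-area flux scaled by the areal lead fraction."""
    return sh_over_leads * lead_fraction

def split_flux_regimes(fluxes):
    """Upper/lower terciles of large-scale lead flux -> (high, low) masks."""
    f = np.asarray(fluxes, dtype=float)
    lo, hi = np.quantile(f, [1.0 / 3.0, 2.0 / 3.0])
    return f > hi, f < lo
```

Applied to the observed record, the tercile boundaries correspond to the > 8 W m −2 (high) and < 3.8 W m −2 (low) thresholds quoted in the text.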
The subgrid-scale turbulence closure employs a first-order Smagorinsky closure scheme in which the stability is accounted for in the levels above the surface layer; the surface turbulent fluxes are estimated using Monin-Obukhov similarity 47 and more details about the integral forms of stability functions used in the Monin-Obukhov similarity are shown in the Supplementary Information. We used a two-moment ice-phase microphysics parameterization 48 . The horizontal domain size in our study is 102.4 km by 12.8 km with a horizontal grid spacing of 200 m. In addition, the model has 81 levels in the vertical direction with the model top at about 1.5 km, which is high enough to simulate the Arctic boundary layer. The vertical grid size is variable, with a minimum size of 12 m at the surface and an average size of 18 m. SAM uses periodic lateral boundaries, and a rigid lid at the top of the domain. The Simplified Land Model (SLM) 49 is coupled to SAM version 6.11 and is used to represent land-atmosphere interactions. SLM currently has 17 land surface types, including water and snow/ice. To simulate a recently frozen lead, we added a new land type, which is a 5-cm layer of ice covering open water. A land type index in each surface grid point automatically defines the parameters of the vegetation and soil. We set up a horizontally inhomogeneous surface with different land types to simulate the Arctic sea ice with leads. SLM also has an interactive soil model, currently with nine layers. We therefore used nine layers in the sea ice. For simplification, no snow is included in the present study. The vertical ice layer thickness is prescribed for each surface type, and the initial temperature profile is the equilibrium profile for the initial atmospheric conditions. In the SLM, the surface temperature of the ice evolves in response to the surface energy balance. 
Because the AMSR-E lead area fraction includes both open leads and newly frozen leads, we designed a series of simulations to investigate the possible impacts of frozen leads on the boundary layer clouds. We began with a "NOLEAD" case (simulation) where the entire domain was covered by a single sea ice land type with a thickness of 1.995 m. This simulation was run for 6 h (–6 to 0 h) to ensure that the atmosphere and sea ice reached an equilibrium state. Then, corresponding to our observational analyses with different lead fractions, we performed simulations for four different lead configurations, where all leads are 4 km wide: an "OPEN" case in which a single lead was opened in the domain to investigate the effects caused by the presence of the open lead, an "OPENOPEN" case which modifies the OPEN case by adding another identical open lead 5 km downstream, an "OPENFROZEN" case which modifies the OPEN case by adding an identical but frozen lead 5 km downstream, and a "FROZENFROZEN" case which modifies the OPENFROZEN case by having both leads be frozen. The lead area fraction doubled from the OPEN case to the other three cases, but with different frozen lead proportions (i.e., 0%, 50%, and 100% in OPENOPEN, OPENFROZEN, and FROZENFROZEN, respectively), which allows us to examine the potential impacts of frozen leads as well as lead fraction on the low clouds. All four cases were run for 6 h (0–6 h). Initial conditions were based on the Surface Heat Budget of the Arctic (SHEBA) project following previous work 8 , with the large-scale wind direction approximately perpendicular to the lead orientation. Here ensemble simulations with slightly different initial conditions are not considered because the simulated flow is strongly forced by fluxes from leads rather than by variability associated with initial conditions.
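The four lead configurations reduce to a one-dimensional surface-type mask along the cross-lead direction. A sketch using the lead positions quoted for Fig. 3 (upstream lead at x = 5–9 km, downstream lead at x = 14–19 km); the type indices here are arbitrary, not SLM's actual land-type indices:

```python
import numpy as np

# Illustrative surface-type indices (SLM's real indices differ).
THICK_ICE, OPEN_WATER, FROZEN_LEAD = 0, 1, 2

def surface_mask(case, dx=0.2, length=102.4):
    """1-D surface-type mask along x (km) for the four lead configurations.
    Lead positions follow the paper's Fig. 3: upstream lead at x = 5-9 km,
    downstream lead at x = 14-19 km."""
    x = np.arange(0.0, length, dx)
    mask = np.full(x.shape, THICK_ICE)
    upstream = (x >= 5.0) & (x < 9.0)
    downstream = (x >= 14.0) & (x < 19.0)
    if case == "OPEN":
        mask[upstream] = OPEN_WATER
    elif case == "OPENOPEN":
        mask[upstream] = OPEN_WATER
        mask[downstream] = OPEN_WATER
    elif case == "OPENFROZEN":
        mask[upstream] = OPEN_WATER
        mask[downstream] = FROZEN_LEAD
    elif case == "FROZENFROZEN":
        mask[upstream] = FROZEN_LEAD
        mask[downstream] = FROZEN_LEAD
    return x, mask
```

Each non-ice segment of the mask would then be assigned the corresponding SLM land type (open water, or the added 5-cm-ice-over-water type) at the model's 200-m grid spacing.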
Data availability The authors declare that the observational and reanalysis data supporting the findings of this study are available within the paper and its supplementary information file. Data from our model simulations are available upon request. Code availability The codes used to generate these results are available upon request.
In the wintertime Arctic, cracks in the ice called "leads" expose the warm ocean directly to the cold air, with some leads only a few meters wide and some kilometers wide. They play a critical role in the Arctic surface energy balance. If we want to know how much the ice is going to grow in winter, we need to understand the impacts of leads. The extreme contrast in temperature between the warm ocean and the cold air creates a flow of heat and moisture from the ocean to the atmosphere. This flow provides a lead with its own weather system which creates low-level clouds. The prevailing view has been that more leads are associated with more low-level clouds during winter. But University of Utah atmospheric scientists noticed something strange in their study of these leads: when lead occurrence was greater, there were fewer, not more clouds. In a paper published in Nature Communications, they explain why: wintertime leads rapidly freeze after opening, so most leads have newly frozen ice that shuts off the moisture supply but only some of the heat flow from the ocean, thus causing any low-level clouds to dissipate and accelerating the freezing of sea ice compared to unfrozen leads. Understanding this dynamic, the authors say, will help more accurately represent the impact of winter-time leads on low-level clouds and on the surface energy budget in the Arctic—especially as the Arctic sea ice is declining.
10.1038/s41467-019-14074-5
Physics
Exotic material exhibits an optical response in enormous disproportion to the stimulus
Liang Wu et al. Giant anisotropic nonlinear optical response in transition metal monopnictide Weyl semimetals, Nature Physics (2016). DOI: 10.1038/nphys3969 Journal information: Nature Physics
http://dx.doi.org/10.1038/nphys3969
https://phys.org/news/2018-03-exotic-material-optical-response-enormous.html
Abstract Although Weyl fermions have proven elusive in high-energy physics, their existence as emergent quasiparticles has been predicted in certain crystalline solids in which either inversion or time-reversal symmetry is broken 1 , 2 , 3 , 4 . Recently they have been observed in transition metal monopnictides (TMMPs) such as TaAs, a class of noncentrosymmetric materials that heretofore received only limited attention 5 , 6 , 7 . The question that arises now is whether these materials will exhibit novel, enhanced, or technologically applicable electronic properties. The TMMPs are polar metals, a rare subset of inversion-breaking crystals that would allow spontaneous polarization, were it not screened by conduction electrons 8 , 9 , 10 . Despite the absence of spontaneous polarization, polar metals can exhibit other signatures of inversion-symmetry breaking, most notably second-order nonlinear optical polarizability, χ (2) , leading to phenomena such as optical rectification and second-harmonic generation (SHG). Here we report measurements of SHG that reveal a giant, anisotropic χ (2) in the TMMPs TaAs, TaP and NbAs. With the fundamental and second-harmonic fields oriented parallel to the polar axis, the value of χ (2) is larger by almost one order of magnitude than its value in the archetypal electro-optic materials GaAs 11 and ZnTe 12 , and in fact larger than reported in any crystal to date. Main The past decade has witnessed an explosion of research investigating the role of band-structure topology, as characterized for example by the Berry curvature in momentum space, in the electronic response functions of crystalline solids 13 . While the best established example is the intrinsic anomalous Hall effect in time-reversal breaking systems 14 , several nonlocal 15 , 16 and nonlinear effects related to Berry curvature generally 17 , 18 and in Weyl semimetals (WSMs) specifically 19 , 20 have been predicted in crystals that break inversion symmetry. 
Of these, the most relevant to this work is a theoretical formulation 21 of SHG in terms of the shift vector, which is a quantity related to the difference in Berry connection between two bands that participate in an optical transition. Figure 1a and its caption provide a schematic and description of the optical set-up for measurement of SHG in TMMP crystals. Figure 1b, c shows results from a (112) surface of TaAs. The SH intensity from this surface is very strong, allowing for polarization rotation scans with signal-to-noise ratio above 10 6 . In contrast, SHG from a TaAs (001) surface is barely detectable (at least six orders of magnitude lower than the (112) surface). Below, we describe the use of the set-up shown in Fig. 1a to characterize the second-order optical susceptibility tensor, χ ijk , defined by the relation, P i (2 ω ) = ε 0 χ ijk (2 ω ) E j ( ω ) E k ( ω ). Figure 1: Second-harmonic generation versus angle as TaAs is effectively rotated about the axis perpendicular to the (112) surface. a , Schematic of the SHG experimental set-up. To stimulate second-harmonic light, pulses of 800 nm wavelength were focused at near-normal incidence to a 10-μm-diameter spot on the sample. Polarizers and waveplates mounted on rotating stages allowed for continuous and independent control of the polarization of the generating and second-harmonic light that reached the detector. θ 1 and θ 2 are the angles of the polarization plane after the generator and the analyser, respectively, with respect to the [1,1,−1] crystal axis. b , c , SHG intensity as a function of angle of incident polarization at 20 K. In b , c , the analyser is, respectively, parallel and perpendicular to the generator. In both plots the amplitude is normalized by the peak value in b . The solid lines are fits that include all the allowed tensor elements in 4 mm symmetry, whereas the dashed lines include only d 33 . The insets are polar plots of measured SHG intensity versus incident polarization angle. 
Full size image As a first step, we determined the orientation of the high-symmetry axes in the (112) surface, which are the [1,−1,0] and [1,1,−1] directions. To do so, we simultaneously rotated the linear polarization of the generating light (the generator) and the polarizer placed before the detector (the analyser), with their relative angle set at either 0° or 90°. Rotating the generator and analyser together produces scans, shown in Fig. 1b, c , which are equivalent to rotation of the sample about the surface normal. The angles at which we observe the peak and the null in Fig. 1b and the null in Fig. 1c allow us to identify the principal axes in the (112) plane of the surface. Having determined the high-symmetry directions, we characterize χ ijk by performing three types of scans, the results of which are shown in Fig. 2 . In scans shown in Fig. 2a, b , we oriented the analyser along one of the two high-symmetry directions and rotated the plane of linear polarization of the generator through 360°. Figure 2c shows the circular dichroism of the SHG response—that is, the difference in SH generated by left and right circularly polarized light. For all three scans the SHG intensity as a function of angle is consistent with the second-order optical susceptibility tensor expected for the 4 mm point group of TaAs, as indicated by the high accuracy of the fits in Fig. 1b, c and Fig. 2a–c . Figure 2: Second-harmonic intensity with fixed analysers, circular dichroism and temperature dependence on a TaAs (112) sample. a , b , SHG as a function of generator polarization with analysers fixed along 0° ( a ) and 90° ( b ), which are the [1,1,−1] and [1,−1,0] crystal axes, respectively. Solid lines are fits considering all three non-zero tensor elements. The insets are polar plots of measured SHG intensity. c , Circular dichroism (SHG intensity difference between right- and left-hand circularly polarized generation light) normalized by the peak value in Fig.
1b versus polarization angle of the analyser at 300 K. The solid line is a fit. d , Temperature dependence of | d 15 / d 33 | and | d 31 / d 33 | of a TaAs (112) sample. The error bars are determined by setting d 15 and d 31 in phase and out of phase with respect to d 33 in the fits. Full size image In the 4 mm structure xz and yz are mirror planes but reflection through the xy plane is not a symmetry; therefore, TaAs is an acentric crystal with a unique polar ( z ) axis. In crystals with 4 mm symmetry there are three independent nonvanishing elements of χ ijk : χ zzz , χ zxx = χ zyy and χ xzx = χ yzy = χ xxz = χ yyz . Note that each has at least one z component, implying null electric dipole SHG when all fields are in the xy plane. This is consistent with the observation of nearly zero SHG for light incident on the (001) plane. Below, we follow the convention of using the 3 × 6 second-rank tensor d ij , rather than χ ijk , to express the SHG response, where the relation between the two tensors for TaAs is: χ zzz = 2 d 33 , χ zxx = 2 d 31 and χ xzx = 2 d 15 (ref. 22 ) (see Methods and Supplementary Section A ). Starting with the symmetry-constrained d tensor, we derive expressions, specific to the (112) surface, for the angular scans with fixed analyser shown in Fig. 2a, b ( Methods and Supplementary Section A ). We obtain | d eff cos 2 θ 1 + 3 d 31 sin 2 θ 1 | 2 /27 and | d 15 | 2 sin 2 (2 θ 1 )/3 for analyser parallel to [1,1,−1] and [1,−1,0], respectively, where d eff ≡ d 33 + 2 d 31 + 4 d 15 . Fits to these expressions yield two ratios: | d eff / d 15 | and | d eff / d 31 | . Although we do not determine | d 33 / d 15 | and | d 33 / d 31 | directly, it is clear from the extreme anisotropy of the angular scans that d 33 , which gives the SHG response when both generator and analyser are parallel to the polar axis, is much larger than the other two components.
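The two fixed-analyser expressions can be evaluated numerically. This is a sketch with invented, purely real d values (the measured coefficients are complex, with relative phases discussed below), showing how the peak intensities of the two scans encode the ratio | d eff / d 15 |:

```python
import numpy as np

# Illustrative sketch (not the authors' fitting code) of the two
# fixed-analyser angular dependences for the (112) surface. The d
# values are invented and taken as real; phases are ignored here.
d33, d31, d15 = 100.0, 2.0, 3.5
d_eff = d33 + 2*d31 + 4*d15

theta = np.linspace(0, 2*np.pi, 361)

# Analyser parallel to [1,1,-1]:
I_par = np.abs(d_eff*np.cos(theta)**2 + 3*d31*np.sin(theta)**2)**2 / 27
# Analyser parallel to [1,-1,0]:
I_perp = np.abs(d15)**2 * np.sin(2*theta)**2 / 3

# The peak intensities of the two scans give the ratio |d_eff/d_15|:
ratio = np.sqrt(I_par.max()*27 / (I_perp.max()*3))
```

With d 33 dominant, I_par is sharply peaked along the polar-axis projection while I_perp is weak, mirroring the extreme anisotropy seen in the data.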
We can place bounds on | d 33 / d 15 | and | d 33 / d 31 | by setting d 15 and d 31 in and out of phase with d 33 . We note that the observation of circular dichroism in SHG, shown in Fig. 2c , indicates that the relative phase between d 15 and d 33 is neither 0° nor 180°, but rather closer to 30° ( Supplementary Section A ). The results of this analysis are plotted in Fig. 2d , where it is shown that | d 33 / d 15 | falls in the range ∼ 25–33 for all temperatures, and | d 33 / d 31 | increases from ∼ 30 to ∼ 100 with increasing temperature. Perhaps because of its polar metal nature, the anisotropy of the second-order susceptibility in TaAs is exceptionally large compared with what has been observed previously in crystals with the same set of non-zero d ij . For example, α -ZnS, CdS and KNiO 3 have | d 31 | ≅ | d 15 | ≅ d 33 /2 (ref. 23 ), while in BaTiO 3 the relative sizes are reversed, with | d 31 | ≅ | d 15 | ≈ 2 | d 33 | (ref. 24 ). Even more striking than the extreme anisotropy of χ ijk is the absolute size of the SHG response in TaAs. The search for materials with large second-harmonic optical susceptibility has been of continual interest since the early years of nonlinear optics 25 . To determine the absolute magnitude of the d coefficients in TaAs, we used GaAs and ZnTe as benchmark materials. Both crystals have large and well-characterized second-order optical response functions 11 , 12 , with GaAs regarded as having a SH susceptibility among the largest of any known crystal. GaAs and ZnTe are also ideal as benchmarks, because their response tensors have only one nonvanishing coefficient, d 14 ≡ 1/2 χ xyz . Figure 3a–c shows polar plots of SHG intensity as TaAs (112), ZnTe (110), and GaAs (111) are (effectively) rotated about the optic axis with the generator and analyser set at 0° and 90°.
Also shown (as solid lines) are fits to the polar patterns obtained by rotating the χ (2) tensor to a set of axes that includes the surface normal, which is (110) and (111) for our ZnTe and GaAs crystals, respectively ( Methods and Supplementary Section A ). Even prior to analysis to extract the ratio of d coefficients between the various crystals, it is clear that the SHG response of TaAs (112) is large, as the peak intensity in this geometry exceeds ZnTe (110) by a factor of 4.0(±0.1) and GaAs (111) by a factor of 6.6(±0.1). Figure 3d compares the parallel polarization data for TaAs shown in Fig. 3 a with SHG measured under the same conditions in the (112) facets of two other TMMPs: TaP and NbAs. The strength of SHG from the three crystals, which share the same 4 mm point group, is clearly very similar, with TaP and NbAs intensities relative to TaAs of 0.90(±0.02) and 0.76(±0.04), respectively. The SHG response in these compounds is also dominated by the d 33 coefficient. Finally, we found that the SHG intensity of all three compounds does not decrease after exposure to atmosphere for several months. Figure 3: Benchmark SHG experiments on TaAs (112), TaP (112), NbAs (112), ZnTe (110) and GaAs (111). a – c , SHG polar plots in the same scale for TaAs (112) ( a ), ZnTe (110) ( b ) and GaAs (111) ( c ) in both the parallel and perpendicular generator/analyser configurations at room temperature. For TaAs, data in the perpendicular configuration are magnified by a factor of 4 for clarity. The SHG intensities of ZnTe and GaAs are multiplied by factors of 4 and 6.6, respectively, to match the peak value of TaAs. d , SHG polar plots for TaAs (112), TaP (112) and NbAs (112) in the parallel configurations at room temperature, with plots of TaP and NbAs rotated by 60° and 120° for clarity.
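Since SH intensity scales as the square of the effective d coefficient, the measured peak-intensity ratios translate directly into relative coefficients. A back-of-envelope sketch (before the sub-20% Bloembergen–Pershan correction applied in the full analysis):

```python
import math

# Relative effective d coefficients from the quoted peak-intensity
# ratios, using I(2w) ~ |d_eff|^2. This ignores the <20% reflection
# (Bloembergen-Pershan) correction applied in the paper's analysis.
intensity_ratio_vs_TaAs = {
    'ZnTe(110)': 1/4.0,   # TaAs exceeds ZnTe by a factor of 4.0
    'GaAs(111)': 1/6.6,   # TaAs exceeds GaAs by a factor of 6.6
    'TaP(112)': 0.90,
    'NbAs(112)': 0.76,
}
relative_d = {k: math.sqrt(v) for k, v in intensity_ratio_vs_TaAs.items()}
```

So the effective d of ZnTe (110) in this geometry is about half that of TaAs (112), and TaP and NbAs come within ~5–13% of TaAs.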
Full size image To obtain the response of the TMMPs relative to the two benchmark materials we used the Bloembergen–Pershan formula 25 to correct for the variation in specular reflection of SH light that results from the small differences in the index of refraction of the three materials at the fundamental and SH frequency. (See Methods . Details concerning this correction, which is less than 20%, can be found in Supplementary Section B .) Table 1 presents the results of this analysis, showing that | d 33 | ≅ 3,600 pm V −1 at the fundamental wavelength 800 nm in TaAs exceeds the values in the benchmark materials GaAs 11 and ZnTe 12 by approximately one order of magnitude, even when measured at wavelengths where their response is largest. The d coefficient in TaAs at 800 nm exceeds the corresponding values in the ferroelectric materials BiFeO 3 (ref. 26 ), BaTiO 3 (ref. 24 ) and LiNbO 3 (ref. 23 ) by two orders of magnitude. In the case of the ferroelectric materials, SHG measurements have not been performed in their spectral regions of strong absorption, typically 3–7 eV. However, ab initio calculations consistently predict that the resonance-enhanced d values in this region do not exceed roughly 500 pm V −1 (refs 27 , 28 ). Table 1 Second-harmonic generation coefficients of different materials at room temperature. Full size table The results described above raise the question of why χ zzz in the TMMPs is so large. Answering this question quantitatively will require further work in which measurements of χ (2) as a function of frequency are compared with theory based on ab initio band structure and wavefunctions. For the present, we describe a calculation of χ (2) using a minimal model of a WSM that is based on the approach to nonlinear optics proposed by Morimoto and Nagaosa (MN) 21 . This theory clarifies the connection between band-structure topology and SHG, and provides a concise expression with clear geometrical meaning for χ (2) . 
Hopefully this calculation will motivate the ab initio theory that is needed to quantitatively account for the large SH response of the TMMPs and its possible relation to the existence of Weyl nodes. The MN result for the dominant ( zzz ) response function is given by equation (1), in which the nonlinear response is expressed as a second-order conductivity, σ zzz ( ω , 2 ω ), relating the current induced at 2 ω to the square of the applied electric field at ω , that is, J z (2 ω ) = σ zzz E z 2 ( ω ). (The SH susceptibility is related to the conductivity through the relation χ (2) = σ (2) /2 iωε 0 ). The indices 1 and 2 refer to the valence and conduction bands, respectively, ε 21 is the transition energy, and v i , 12 is the matrix element of the velocity operator v i = (1/ ℏ ) ∂H / ∂k i . Band-structure topology appears in the form of the ‘shift vector,’ R zz ≡ ∂ k z φ z , 12 + a z , 1 − a z , 2 , which is a gauge-invariant length formed from the k derivative of the phase of the velocity matrix element, φ 12 = Im{log v 12 }, and the difference in Berry connection, a i = − i 〈 u n | ∂ k i | u n 〉, between bands 1 and 2. Physically, the shift vector is the k -resolved shift of the intracell wavefunction for the two bands connected by the optical transition. We consider a minimal model for a time-reversal symmetric WSM that supports four Weyl nodes, given in equation (2). Here, σ i and s i are Pauli matrices acting on the orbital and spin degrees of freedom, respectively, t is a measure of the bandwidth, a is the lattice constant, m y and m z are parameters that introduce anisotropy, and inversion breaking is introduced by Δ . The Hamiltonian defined in equation (2) preserves two-fold rotation symmetry about the z -axis and the mirror symmetries M x and M y . These symmetries form a subset of the 4 mm point group which is relevant to the optical properties of TMMPs. Figure 4 illustrates the energy levels, topological structure, and SHG spectra that emerge from this model. As shown in Fig.
4a , pairs of Weyl nodes with opposite chirality overlap at two points, k = (±π/2 a , 0,0), in the inversion-symmetric case with Δ = 0. With increasing Δ the nodes displace in opposite directions along the k y axis, with Δk y ≅ Δ / a . The energy of electronic states in the k z = 0 plane, illustrating the linear dispersion near the four Weyl points, is shown in Fig. 4b . Figure 4c shows the corresponding variation of | v 12 | 2 R zz ( k ) for the s x = +1 bands whose Weyl points are located at k y < 0 (the variation of | v 12 | 2 R zz ( k ) for the s x = −1 bands is obtained from the transformation k → − k ). The magnitude of σ (2) derived from this model vanishes as Δ → 0, and is also sensitive to the anisotropy parameters m y and m z . Figure 4d shows that spectra corresponding to parameters t = 0.8 eV, Δ = 0.5, m z = 5, and m y = 1 can qualitatively reproduce the observed amplitude and large anisotropy of χ (2) ( ω , 2 ω ). Figure 4: Numerical results for the second-harmonic response of a WSM. a , Location of Weyl points in the k z = 0 plane for Δ = 0.5. For Δ = 0, Weyl points with opposite chiralities are located at (±π/2 a ,0,0) (denoted by black dots). b , The band structure for Δ = 0.5 and k z = 0. c , Colour plot of | v 12 | 2 R zz , which appears as an integrand in the formula for σ zzz (2) . For clarity we plot only | v 12 | 2 R zz for s x = +1 bands with Weyl points located at k y < 0. | v 12 | 2 R zz for s x = −1 bands is obtained by setting k → − k . | v 12 | 2 R zz shows structures at Weyl points. d , | χ zzz | and | χ xzx | plotted as a function of incident photon energy for Δ = 0.5. We adopted parameters t = 0.8 eV, m y = 1, m z = 5. Full size image As discussed above, our minimal model of an inversion-breaking WSM is intended mainly to motivate further research into the mechanism for enhanced SHG in the TMMPs. 
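The node-counting constraint behind the four-Weyl-node model can be checked with a toy calculation (this is not the paper's lattice model): for a linearized node H(q) = Σ ij v[i,j] q j σ i , the chirality is sign(det v); time reversal maps a node at k 0 to −k 0 with the same chirality, while chiralities must sum to zero over the Brillouin zone, so a time-reversal-symmetric, inversion-breaking WSM needs at least four nodes.

```python
import numpy as np

# Toy check of Weyl-node chirality counting. For a linearized node
# H(q) = sum_ij v[i,j] q_j sigma_i, the chirality is sign(det v).
def chirality(v):
    return int(np.sign(np.linalg.det(v)))

v_plus = np.eye(3)                    # right-handed node
v_minus = np.diag([1.0, 1.0, -1.0])   # left-handed node

# A chirality-opposite pair, plus its time-reversed copies at -k0
# (time reversal preserves chirality): four nodes in total, with
# chiralities summing to zero as required in any lattice model.
nodes = [v_plus, v_minus, v_plus, v_minus]
total = sum(chirality(v) for v in nodes)
```

This is the same counting that forces the pairs of nodes in Fig. 4a to appear at ±k along the k x axis and split along k y as Δ grows.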
However, the model does suggest universal properties of χ (2) that arise from transitions near Weyl nodes between bands with nearly linear dispersion. According to bulk band-structure measurements 7 , such transitions are expected at energies below approximately 100 meV in the TaAs family, corresponding to the far-infrared and terahertz regimes. In these regimes, where the interband excitation is within the Weyl cones, the momentum-averaged | v 12 | 2 R zz ( k ) tends to a non-zero value, 〈 v 2 R 〉, leading to the prediction that σ (2) → g ( ω )〈 v 2 R 〉/ ω 2 as ω → 0. Because g ( ω ), the joint density of states for Weyl fermions, is proportional to ω 2 , we predict that σ (2) approaches a constant (or alternatively χ (2) diverges as 1/ ω ) as ω → 0, even as the linear optical conductivity vanishes in proportion to ω (ref. 29 ). The 1/ ω scaling of SHG and optical rectification is a unique signature of a WSM in low-energy electrodynamics, as it requires the existence of both inversion breaking and point nodes. In real materials, this divergence will be cut off by disorder and non-zero Fermi energy. Disorder-induced broadening, estimated from transport scattering rates 30 , and Pauli blocking from non-zero Fermi energy, estimated from optical conductivity 30 and band calculation 3 , each suggest a low-energy cutoff in the range of a few meV. We conclude by observing that the search for inversion-breaking WSMs has led, fortuitously, to a new class of polar metals with unusually large second-order optical susceptibility. Although WSMs are not optimal for frequency-doubling applications in the visible regime because of their strong absorption, they are promising materials for terahertz generation and optoelectronic devices such as far-infrared detectors because of their unique scaling in the ω → 0 limit. 
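The low-frequency scaling argument can be made concrete in a few lines. For an isotropic Weyl cone with E ± = ± v | k |, a vertical transition of energy ω connects states at | k | = ω /2 v , giving a joint density of states g ( ω ) ∝ ω 2 ; dividing by the explicit 1/ ω 2 then leaves a constant σ (2) . A sketch in arbitrary units:

```python
import numpy as np

# Low-frequency scaling sketch for a single isotropic Weyl cone,
# E_pm = +/- v|k|. Vertical transitions at photon energy w occur on
# the sphere |k| = w/(2v), so the joint density of states is
# g(w) = w^2 / (8 pi^2 v^3) (per cone, arbitrary overall units).
v = 1.0

def jdos(w):
    return w**2 / (8 * np.pi**2 * v**3)

w = np.array([0.01, 0.02, 0.04, 0.08])
# sigma^(2) ~ g(w) * <v^2 R> / w^2: the w^2 factors cancel, so the
# proxy below is independent of w (chi^(2) then diverges as 1/w).
sigma2_proxy = jdos(w) / w**2
```

In a real sample this flat σ (2) is cut off at a few meV by disorder broadening and Pauli blocking, as noted in the text.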
Looking forward, we hope that our findings will stimulate further investigation of nonlinear optical spectra in inversion-breaking WSMs for technological applications and in order to identify the defining response functions of Weyl fermions in crystals. Methods Crystal growth and structure characterization. Single crystals of TaAs, TaP and NbAs were grown by vapour transport with iodine as the transport agent. First, polycrystalline TaAs/TaP/NbAs was produced by mixing stoichiometric amounts of Ta/Nb and As/P and heating the mixture to 1,100/800/700 °C in an evacuated quartz ampule for two days. 500 mg of the resulting powder was then resealed in a quartz ampoule with 100 mg of iodine and loaded into a horizontal two-zone furnace. The temperatures of the hot and cold ends were held at 1,000 °C and 850 °C, respectively, for TaAs and 950 °C and 850 °C for TaP and NbAs. After four days, well-faceted crystals up to several millimetres in size were obtained. Crystal structure was confirmed using single-crystal X-ray micro-Laue diffraction at room temperature at beamline 12.3.2 at the Advanced Light Source. Optics set-up for second-harmonic generation. The optical set-up for measuring SHG is illustrated in Fig. 1a . Generator pulses of 100 fs duration and centre wavelength 800 nm pass through a mechanical chopper that provides amplitude modulation at 1 kHz and are focused at near-normal incidence onto the sample. Polarizers and waveplates in the beam path are used to vary the direction of linear polarization and to generate circular polarization. Both the specularly reflected fundamental and the second-harmonic beam are collected by a pickoff mirror and directed to a short-pass, band-pass filter combination that allows only the second-harmonic light to reach the photomultiplier tube (PMT) photodetector. Another wire-grid polarizer placed before the PMT allows for analysis of the polarization of the second-harmonic beam. 
Temperature-dependence measurements were performed by mounting the TaAs sample in a cold-finger cryostat on an xyz -micrometer stage. Benchmark measurements on TaAs, TaP, NbAs, ZnTe and GaAs were performed at room temperature in atmosphere with the samples mounted on an xyz -micrometer stage to maximize the signal. Calculation and fitting procedure for SHG. In noncentrosymmetric materials the second-order term, P i (2) = ε 0 χ ijk E j E k , which gives rise to SHG and optical rectification, is allowed 31 , 32 . These two phenomena arise from excitation with a single frequency; therefore, there is an automatic symmetry with respect to permutation of the second and third indices in χ ijk . This motivates the use of a 3 × 6 second-rank tensor d ij instead of χ ijk , and the former is more often used in SHG. The relation between d ij and χ ijk is as follows: the first index i = 1,2,3 in d ij corresponds to i ′ = x , y , z , respectively, in χ i ′ j ′ k ′ and the second index j = 1,2,3,4,5,6 in d ij corresponds to j ′ k ′ = xx , yy , zz , yz / zy , zx / xz , xy / yx in χ i ′ j ′ k ′ . Further discussion is provided in the Supplementary Information . To fit the SHG polar pattern of TMMPs, we first fit data obtained with a fixed analyser at 90°, which is the [1,−1,0] crystal axis, because there is only one free parameter (| d 15 |) in this configuration, where the intensity varies as | d 15 | 2 sin 2 (2 θ 1 )/3. With this value for | d 15 |, we then fit data in the three other types of scans discussed in the text, with d 15 and d 31 set in and out of phase with d 33 . The angular dependences in the parallel and perpendicular configurations are given in Supplementary Section A . When the analyser is fixed along 0°, the angular dependence is | d eff cos 2 θ 1 + 3 d 31 sin 2 θ 1 | 2 /27, where d eff ≡ d 33 + 2 d 31 + 4 d 15 . This procedure yields upper and lower bounds on the anisotropy ratios | d 15 / d 33 | and | d 31 / d 33 |, which are shown with error bars in Fig. 2d .
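The first fitting step is linear in the single free parameter | d 15 | 2 , so it reduces to one least-squares projection. A sketch on synthetic data (illustrative, not the authors' code; the "true" d 15 and noise level are invented):

```python
import numpy as np

# Sketch of the first fitting step: with the analyser fixed along
# [1,-1,0] the model is I(theta) = |d15|^2 sin^2(2 theta)/3, linear
# in |d15|^2, so least squares is a single projection. Synthetic data
# with an invented d15 and noise level stand in for a measurement.
rng = np.random.default_rng(1)
d15_true = 3.5
theta = np.deg2rad(np.arange(0, 360, 5))
basis = np.sin(2*theta)**2 / 3
I_meas = d15_true**2 * basis + rng.normal(0, 1e-3, theta.size)

# Least-squares estimate of |d15|^2, then |d15|:
d15_sq_fit = np.dot(basis, I_meas) / np.dot(basis, basis)
d15_fit = np.sqrt(d15_sq_fit)
```

The subsequent scans, which mix d 33 , d 31 and d 15 with unknown relative phases, require the in-phase/out-of-phase bounding described above rather than a single projection.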
In the circular dichroism experiment, to lowest order in d ij , the angular dependence is derived in Supplementary Section A . In the case of GaAs and ZnTe, all scans are fitted accurately by the only symmetry-allowed free parameter, | d 14 | . The angular dependences for ZnTe (110) and GaAs (111) in the parallel and perpendicular configurations are obtained by rotating the zinc-blende d tensor to axes that include the respective surface normals. See Supplementary Information for a full derivation. Bloembergen–Pershan correction. When measuring in the reflection geometry, one needs to consider the boundary conditions to calculate χ (2) from the directly measured χ R (2) , where ‘R’ stands for reflection geometry. The correction was worked out by Bloembergen and Pershan (BP) 33 in terms of the relative dielectric constant ε and the Fresnel coefficient T ( ω ) = 2/( n ( ω ) + 1) of the fundamental light. In the current experiment, performed at 800 nm, the BP correction is fairly small (less than 20%). See Supplementary Information for more details. Data availability. The data that support the plots within this paper and other findings of this study are available from the corresponding author upon reasonable request.
No earlier theory had envisioned that the responses would be so large! Scientists "poked" three crystals with pulses of light. Unexpectedly, the crystals exhibited the largest nonlinear optical response of any known crystal. The response was a huge amount of light of a different color, at twice the frequency of the pulse. These crystals are members of a new class of materials known as Weyl semimetals. This work challenges our thinking on optical responses in Weyl semimetals. We expected light pulses to produce frequency-doubled light, but not such a huge response. The origin of this gigantic response is unknown! This study is already stimulating the development of fundamentally new theories about these responses. These materials are promising for package-penetrating sensors, night-vision goggles, and other devices. Weyl fermions are novel particles that were predicted in high-energy physics but have never been observed as fundamental particles. However, scientists recently observed these particles as an emergent property of electrons in a unique set of semimetal materials. Recent studies have shown that Weyl semimetals exhibit an unexpectedly large nonlinear optical response. This response gives rise to the largest optical second harmonic generation effect of any known crystal. These observations were made in the transition metal systems tantalum arsenide (TaAs), tantalum phosphide (TaP), and niobium arsenide (NbAs). Although it was clear from symmetry considerations that there would be a nonlinear response in these systems, there was no theoretical prediction of the large magnitude of the observed response. By comparison with other well-known nonlinear crystals, the nonlinear optical response in these systems is larger by factors of 10 to 100. Given the unexpected response, scientists anticipate these findings will stimulate the development of advanced ab initio methods to calculate nonlinear optical response functions in these materials.
Also, Weyl semimetals are expected to have a wide range of optoelectronic applications as materials to be used for the development of terahertz generators and far-infrared radiation detectors.
10.1038/nphys3969
Biology
Later, gator? One-fifth of all reptile species could go extinct, new study says
Neil Cox et al, A global reptile assessment highlights shared conservation needs of tetrapods, Nature (2022). DOI: 10.1038/s41586-022-04664-7 Journal information: Nature
https://dx.doi.org/10.1038/s41586-022-04664-7
https://phys.org/news/2022-04-gator-one-fifth-reptile-species-extinct.html
Abstract Comprehensive assessments of species’ extinction risks have documented the extinction crisis 1 and underpinned strategies for reducing those risks 2 . Global assessments reveal that, among tetrapods, 40.7% of amphibians, 25.4% of mammals and 13.6% of birds are threatened with extinction 3 . Because global assessments have been lacking, reptiles have been omitted from conservation-prioritization analyses that encompass other tetrapods 4 , 5 , 6 , 7 . Reptiles are unusually diverse in arid regions, suggesting that they may have different conservation needs 6 . Here we provide a comprehensive extinction-risk assessment of reptiles and show that at least 1,829 out of 10,196 species (21.1%) are threatened—confirming a previous extrapolation 8 and representing 15.6 billion years of phylogenetic diversity. Reptiles are threatened by the same major factors that threaten other tetrapods—agriculture, logging, urban development and invasive species—although the threat posed by climate change remains uncertain. Reptiles inhabiting forests, where these threats are strongest, are more threatened than those in arid habitats, contrary to our prediction. Birds, mammals and amphibians are unexpectedly good surrogates for the conservation of reptiles, although threatened reptiles with the smallest ranges tend to be isolated from other threatened tetrapods. Although some reptiles—including most species of crocodiles and turtles—require urgent, targeted action to prevent extinctions, efforts to protect other tetrapods, such as habitat preservation and control of trade and invasive species, will probably also benefit many reptiles. Main Although comprehensive extinction-risk assessments have been available for birds, mammals and amphibians for well over a decade 3 , reptiles have, until now, not been comprehensively assessed. 
Therefore, conservation science and practice has typically relied on the International Union for Conservation of Nature (IUCN) Red List categories and distributions of the other three tetrapod classes to inform policy and guide priorities for investments 2 , despite differing expectations as to how effective common strategies will be across classes 9 , 10 . With a high diversity in arid regions and some islands and archipelagos (for example, Antilles, New Caledonia and New Zealand) compared with other tetrapods, reptiles were thought to require different conservation strategies and geographical priorities 6 . In the absence of Red List assessments, researchers have resorted to indirect measures of extinction risk such as range size and human pressure 6 , 11 . Here we examine the results of a comprehensive Red List assessment of reptiles and outline their implications for the conservation needs of reptiles. Comprising the turtles (Testudines), crocodiles (Crocodylia), squamates (Squamata: lizards, snakes and amphisbaenians) and tuatara (Rhynchocephalia), reptiles are a paraphyletic class representing diverse body forms, habitat affinities and functional roles in their respective ecosystems 12 . The largely terrestrial squamates are by far the most speciose group (9,820 species in this assessment), whereas the primarily aquatic turtles and crocodiles are often larger bodied but include only 351 and 24 species, respectively. Rhynchocephalians diverged from the snake and lizard lineage in the Triassic period and include one extant species 13 . Given this diversity of reptiles, threats to their persistence are likely to be equally varied, and so these need to be specified to guide effective conservation action. Extinction risk and threats We assessed reptiles globally using the IUCN Red List criteria with input from 961 scientists (Supplementary Note 1 ) achieved primarily through 48 workshops (Supplementary Table 1 ). 
Across all 10,196 species assessed, 21.1% are threatened with extinction (categorized as vulnerable, endangered or critically endangered; Supplementary Table 2 ). As a group, a greater number of reptile species are threatened than birds or mammals, but fewer than amphibians. Proportionately more mammals and amphibians are threatened than reptiles (Fig. 1a ). The reptile threat prevalence falls within a previous estimate of 15–36% threatened (best estimate 19%) from a random sample of 1,500 reptile species 8 . To our knowledge, this study represents the first global test of a sampled Red List extrapolation. The proportion of turtles and crocodiles that are threatened (57.9% and 50.0%, respectively) is much higher than those of squamates (19.6%) and tuatara (0%), and comparable to the most-threatened tetrapod groups, salamanders (57.0%) and monotremes (60.0%) (Fig. 1b ). Within squamates, iguanid (73.8%) and xenosaurid (60.0%) lizards and uropeltid (61.1%) and tropidophiid (60.0%) snakes are highly threatened. Since 1500, 31 reptile species (0.3%) have been driven extinct, including 24 squamates and 7 turtles, with 2 squamate species from Christmas Island categorized as extinct in the wild (persisting only as captive populations); 40 critically endangered species are ‘possibly extinct’ (that is, species that are likely to be extinct, but that have a small chance that they may be extant; Extended Data Fig. 1 ). Additional species probably became extinct before being documented by science 14 . Fig. 1: Taxonomic patterns of extinction risk in tetrapods. a , Taxonomic patterns organized by class. The numbers above each column refer to the numbers and percentages of species threatened (that is, those categorized as critically endangered, endangered or vulnerable). b , Extinction risk by major taxonomic groups. Blue lines indicate the best estimate of the percentage of species threatened. 
CR, critically endangered; DD, data deficient; EN, endangered; EW, extinct in the wild; EX, extinct; LC, least concern; NT, near threatened; VU, vulnerable. Source data Full size image Limited information resulted in 1,507 species (14.8%) being categorized as data deficient, similar to the number in mammals (15.1%) and lower than for amphibians (20.4%), but much higher than for birds (0.5%). Taxonomic groups with fossorial or other secretive habits and/or restricted to poorly studied areas (such as blindsnakes (Gerrhopilidae and Typhlopidae)) had greater proportions of species categorized as data deficient (Supplementary Table 2 ). The greatest numbers of data-deficient species occur in Asia (585), South America (284) and Africa (271), with fewer data-deficient species in North and Middle America (163), Oceania (124), Australia (55), the Caribbean (34) and Europe (3). Uncertainty about the status of data-deficient species suggests that the proportion of reptiles threatened with extinction ranges from 18.0% (assuming no data-deficient species are threatened) to 32.8% (assuming all data-deficient species are threatened) with a best estimate of 21.1%. Concentrations of threatened reptiles are mostly in regions in which other tetrapods are also threatened (Extended Data Fig. 2 ). Threatened reptiles are concentrated in southeastern Asia, West Africa, northern Madagascar, the northern Andes and the Caribbean, but largely absent from Australian drylands; the Kalahari, Karoo and Sahara deserts; northern Eurasia; and the Rocky Mountains and northern North America (Fig. 2a ). In remarkably few regions, however, are reptiles disproportionately threatened relative to other tetrapods (that is, have at least twice the number of species in a threatened category): parts of southern Asia and northeastern USA (Fig. 2b and Extended Data Table 1 ). 
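The uncertainty range quoted above can be recomputed from the assessment totals. This sketch assumes (consistently with the rounded figures) that the 31 extinct and 2 extinct-in-the-wild species are excluded from the denominator of extant species:

```python
# Recomputing the threatened-proportion bounds from the assessment
# totals quoted in the text: 10,196 species assessed, 1,829 threatened,
# 1,507 data deficient (DD). Assumption (matching the rounded figures):
# the 31 extinct and 2 extinct-in-the-wild species are excluded from
# the denominator of extant species.
assessed, threatened, dd = 10196, 1829, 1507
extant = assessed - 31 - 2            # 10,163 extant species

lower = threatened / extant           # if no DD species is threatened
upper = (threatened + dd) / extant    # if every DD species is threatened
best = threatened / (extant - dd)     # DD threatened at the observed rate
print(round(100*lower, 1), round(100*best, 1), round(100*upper, 1))
# → 18.0 21.1 32.8
```

The "best estimate" convention simply assumes data-deficient species are threatened at the same rate as assessed ones.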
Moreover, for most (87%) terrestrial regions in which tetrapods occur, no tetrapod class is disproportionately threatened compared with the other classes. Fig. 2: Geographical patterns of threat in reptiles and other tetrapods in terrestrial regions. a , Distribution of reptile species that are threatened (critically endangered, endangered or vulnerable). b , Regions with disproportionate numbers of threatened species for each tetrapod class (areas for each class where the proportional threat in species diversity is at least twice the loss for the next-most threatened class). c , Loss of reptile phylogenetic diversity (PD) if all threatened species became extinct. d , Regions with disproportionate phylogenetic diversity loss for each tetrapod class (calculated as in b ). Grey, areas with no threatened species ( a , c ) or regions in which no class is disproportionately threatened ( b , d ). Data are shown at a resolution of 50 km. Full size image With deep phylogenetic lineages and high species diversity, reptiles stand to lose a large amount of phylogenetic diversity (a measure of difference within an evolutionary tree 15 ) if the current extinction crisis continues apace. Assuming all threatened species (and only these species) become extinct, the combined loss of reptile phylogenetic diversity (calculated using existing phylogenetic trees 16 , 17 ) will be approximately 15.6 billion years. Southeastern Asia, India, West Africa and the Caribbean 8 (Fig. 2c ) comprise the top 15% areas of phylogenetic diversity loss, with high concentrations of threatened and evolutionarily distinct species (Extended Data Fig. 3 ). Comparing the distributions of threatened phylogenetic diversity across all four tetrapod groups reveals relatively small geographical areas of disproportionate importance for any class (Fig. 2d and Extended Data Table 2 ).
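Phylogenetic-diversity loss is scored as the total branch length that disappears from the tree when threatened tips are removed. A toy illustration on a made-up four-tip tree (not the study's data or its tree software):

```python
# Toy illustration of phylogenetic-diversity (PD) loss: the PD lost is
# the summed length of branches whose descendant tips are ALL
# threatened. The tree and branch lengths below are invented.
# Encoding: node -> (parent, branch_length_to_parent).
tree = {'root': (None, 0.0),
        'A': ('root', 10.0), 'B': ('root', 4.0),
        'B1': ('B', 3.0), 'B2': ('B', 6.0)}
tips = {'A', 'B1', 'B2'}
threatened = {'B1', 'B2'}

def tips_below(node):
    """Set of tip species descending from (or equal to) a node."""
    kids = [c for c, (p, _) in tree.items() if p == node]
    if not kids:
        return {node} & tips
    return set().union(*(tips_below(c) for c in kids))

# A branch is lost when every tip below it is threatened.
pd_lost = sum(length for n, (parent, length) in tree.items()
              if parent is not None and tips_below(n)
              and tips_below(n) <= threatened)
```

Here the B1 and B2 branches and their shared stem are lost (3 + 6 + 4 = 13 units), while A's long branch survives; the paper applies the same logic to published reptile trees to obtain the ~15.6-billion-year figure.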
The anthropogenic factors increasing extinction risk in reptiles are mainly habitat destruction from agricultural expansion, urban development and logging (Fig. 3a). Other important threats are invasive species and hunting, which includes commercial harvest and trade (Fig. 3a). Among reptile groups, crocodiles and turtles are most frequently affected by hunting and less so by agriculture, whereas squamates are most frequently threatened by agriculture (Fig. 3a). The major threats are broadly similar across tetrapods (Fig. 3b). For all tetrapod groups, agriculture threatens the most species, logging is the second or third most prevalent threat, and invasive species and disease are the fourth or fifth most prevalent threats. Threats causing habitat destruction (complete removal of habitat) affect proportionately more species than those causing habitat change (degradation of habitat). The largest differences in relative threat prevalence are for hunting, which threatens mammals much more than the other tetrapods, and urban development, which affects amphibians, reptiles and mammals more than birds.

Fig. 3: Threats to reptiles and other tetrapods. a, Crocodiles, lizards (including amphisbaenians), snakes and turtles. b, All tetrapods. Only threats to species categorized as critically endangered, endangered or vulnerable were included. Some species are subject to more than one threat (mean = 2.4; s.d. = 1.3 threats per species).

Climate change is a looming threat to reptiles, for example, by reducing thermally viable windows for foraging 18, skewing offspring sex ratios in species that have temperature-dependent sex determination 19 and contracting ranges 20. Given the Red List three-generation horizon for assessments, the lack of long-term studies limits the documentation of climate change as a near-future threat to reptiles 21, in contrast to, for example, birds (Fig. 3b).
Disease is documented as a threat for only 11 species of reptiles (<1% of extant, non-data-deficient species), although pathogens such as Ophidiomyces ophiodiicola (which causes snake fungal disease 22) pose a potential threat and are little studied outside North America. Intentional use of reptiles (local consumption and trade) is an important threat 23, found to affect 329 species (3.2%), especially turtles (30.8% of all turtle species).

More than half of all reptile species occur in forested habitats (Fig. 4c). Although some reptiles, particularly lizards, are speciose in arid or seasonally dry habitats such as deserts, grasslands, shrublands and savannahs 6,24, these species are less threatened than those occupying forest habitats (13.7% of species restricted to arid habitats versus 26.6% of species restricted to forests; Fisher's exact test, P = 0.00001; Fig. 4b). The top threats to reptiles (agriculture, urban development and logging) are also the top threats to species inhabiting forested habitats, affecting 65.9%, 34.8% and 27.9% of forest-dwelling threatened reptiles, respectively, helping to explain the higher extinction risk of forest species. Agriculture and logging are significantly more likely to threaten forest-dwelling than non-forest-dwelling reptiles (Fisher's exact tests, P = 0.00001), whereas urbanization threatens forest-dwelling and non-forest-dwelling reptiles similarly (Fisher's exact test, P = 0.25). Turtles and crocodiles are much more frequently associated with wetlands than other reptiles (Fig. 4a).

Fig. 4: Habitat use by reptiles and other tetrapods. a, Habitats used by crocodiles, lizards (includes amphisbaenians), snakes and turtles. b, Percentage of reptiles using each habitat that are threatened. c, Habitats used by tetrapods. d, Percentage of threatened tetrapod species using each habitat. See Supplementary Table 3 for additional, rarely used habitats not shown here.
Artificial habitats are not shown.

Like reptiles, more than twice as many bird and mammal species occur in forests compared with any other habitat type (Fig. 4c). Forests are also the most common habitat for amphibians, although wetlands are important for many species, especially for breeding (Fig. 4c). Also similar to reptiles, the proportions of forest-inhabiting bird, mammal and amphibian species that are threatened are higher than for species that do not inhabit forests (16.7% versus 13.0%, 27.5% versus 20%, and 42.4% versus 34.4%, respectively; Fisher's exact tests, P = 0.00001). Threat levels for each tetrapod class tend to be lower in arid habitats (less than 23% of the species occurring in such regions are threatened). Across tetrapods, forests support high diversity and are also subject to widespread threats.

Surrogacy of other tetrapods for reptiles

With numerous threatened tetrapod species (227 birds, 194 mammals, 607 amphibians and 474 reptiles) ranging completely outside formally protected areas, assessing surrogacy is important to gauge the magnitude of efforts needed to conserve these species. We addressed surrogacy using a complementarity representation approach for threatened species, which better addresses the extent to which areas selected for surrogates capture target features than, for example, spatial congruence 25. When combined, threatened birds, mammals and amphibians (the tetrapod groups for which comprehensive Red List data were previously available) are good surrogates for the conservation of threatened reptile diversity when prioritizing the richness of rarity-weighted threatened species at both 50-km and 100-km resolution (median species accumulation indices of 0.66 and 0.76, respectively; Extended Data Fig. 4a). Using this same prioritization strategy, birds and mammals individually are reasonable surrogates for reptiles, whereas amphibians are poor surrogates (Extended Data Fig. 4a).
By contrast, for a complementarity representation strategy that prioritizes individual threatened species with the smallest ranges, birds, mammals and amphibians are individually not good surrogates for reptiles (species accumulation indices < 0.40), although combined they are reasonable surrogates at both 50-km and 100-km resolutions (median species accumulation indices of 0.44 and 0.64, respectively; Extended Data Fig. 4b). These results indicate that the smallest-ranged threatened reptiles tend to be isolated from threatened birds, mammals and amphibians. In addition, priority areas for threatened birds and mammals independently, and for birds, mammals and amphibians combined, showed high spatial congruence with priority sites for threatened reptiles for both strategies (prioritizing either rarity-weighted threatened species or complementarity representation), although correlations among global portfolios of priority areas were lower (Extended Data Tables 3, 4). Although our results for the smallest-ranged threatened species are consistent with previous expectations of low surrogacy 9, overall we found reasonably high surrogacy 10.

Discussion

Our discovery of broad similarities in the geography and nature of threats between reptiles and other tetrapods was unexpected, given previous arguments that reptiles are exceptional in being particularly diverse in arid habitats 6. The implication for tetrapod conservation is that geographical prioritizations previously performed for birds, mammals and amphibians overlap broadly with prioritizations for all except the most range-restricted threatened reptiles. The absence of reptiles from many global conservation prioritization analyses to date is therefore unlikely to have left the class less represented than others. Nevertheless, the low surrogacy value of other tetrapods for reptiles with the most restricted ranges suggests that a case-by-case focus is required for these microendemics.
Indeed, the ranges of 31 threatened reptiles do not overlap with the ranges of any other threatened tetrapod (among threatened species, 84 birds, 11 mammals and 7 amphibians are similarly isolated from other threatened tetrapods) (Supplementary Table 3). Researchers have predicted that reptiles are particularly vulnerable to climate change in tropical biomes 26 as well as in freshwater 27 and arid 18 habitats, although so far no clear geographical signal in reptile declines due to climate change has been detected 28. If such vulnerabilities are found, then, as climate change continues to alter the distributions and extinction risk of species, the surrogacy across tetrapods could unravel, with, for example, reptiles in specific habitat types declining swiftly and disproportionately relative to other tetrapods.

Among the conservation strategies needed to prevent reptile extinction, land protection is critically important to buffer many threatened species from the dual threats of agricultural activities and urban development. The hundreds of threatened reptiles that currently occur completely outside protected areas underscore the need for targeted safeguards of important sites. Beyond place-based strategies, conservation policy and practice must halt unsustainable harvest and stem the spread of invasive disease to prevent many more species from becoming threatened 23. Furthermore, mammals introduced to islands threaten 257 reptile species (2.8% of all reptiles), calling for continued campaigns to eradicate introduced mammals in those places.

With a comprehensive, global assessment of the extinction risk of reptile species now available, these data can be incorporated into the toolbox of conservation practice and policy. At the species level, they can serve as the starting point for 'green status' (formerly 'green list') assessments that define, measure and incentivize species recovery 29.
More generally, they can be integrated into the calculation of species threat abatement and restoration metrics 2, the identification of key biodiversity areas 30 and resource allocation using systematic conservation planning 31, all of which, among animals, have to date relied primarily on data from birds, mammals and amphibians. Future reassessments will allow reptile data to be included in the Red List Index 32, a widely used indicator of biodiversity trends 1. Although efforts aimed at protecting other threatened tetrapods probably benefit many of the 1,829 threatened reptiles, especially forest-dwelling species, conservation investments targeted at uniquely occurring reptiles, or those requiring tailored policies, must also be implemented to prevent extinction. Encouragingly, the First Draft of the Post-2020 Global Biodiversity Framework 33, to be agreed by governments in 2022, explicitly targets safeguarding important sites (target 3), complemented by emergency action for individual threatened species (target 4). This political determination to reverse the slide of species toward extinction bodes well for reptiles.

Methods

We used the IUCN Red List criteria 34,35 and methods developed in other global status-assessment efforts 36,37 to assess 10,078 reptile species for extinction risk. We additionally include recommended Red List categories for 118 turtle species 38, for a total of 10,196 species covered, representing 89% of the 11,341 described reptile species as of August 2020 39.

Data compilation

We compiled assessment data primarily through regional in-person and remote (that is, phone and email) workshops with species experts (9,536 species) and through consultation with IUCN Species Survival Commission Specialist Groups and stand-alone Red List Authorities (442 species, primarily marine turtles, terrestrial and freshwater turtles, iguanas, sea snakes, mainland African chameleons and crocodiles).
We conducted 48 workshops between 2004 and 2019 (Supplementary Table 1). Workshop participants provided the information needed to complete the required species assessment fields (geographical distribution, population abundance and trends, habitat and ecological requirements, threats, use and trade, literature) and to draw a distribution map. We then applied the Red List criteria 34 to this information to assign a Red List category: extinct, extinct in the wild, critically endangered, endangered, vulnerable, near threatened, least concern or data deficient. Threatened species are those categorized as critically endangered, endangered or vulnerable.

Taxonomy

We used The Reptile Database 39 as a taxonomic standard, diverging only to follow well-justified taxonomic standards from the IUCN Species Survival Commission 40. We could not revisit new descriptions for most regions after the end of the original assessment, so the final species list is not fully consistent with any single release of The Reptile Database.

Distribution maps

Where data allowed, we developed distribution maps in Esri shapefile format using the IUCN mapping guidelines 41 (1,003 species). These maps are typically broad polygons that encompass all known localities, with provisions made to show obvious discontinuities in areas of unsuitable habitat. Each polygon is coded according to the species' presence (extant, possibly extant or extinct) and origin (native, introduced or reintroduced) 41. For some regions covered in workshops (the Caucasus, Southeast Asia, much of Africa, Australia and western South America), we collaborated with the Global Assessment of Reptile Distributions (GARD) to provide contributing experts with a baseline species distribution map for review. Although refined maps were returned to the GARD team, not all of these maps have been incorporated into the GARD.

Habitat preferences

Where known, species habitats were coded using the IUCN Habitat Classification Scheme (v.3.1).
Species were assigned to all habitat classes in which they are known to occur. Where possible, habitat suitability (suitable, marginal or unknown) and major importance (yes or no) were recorded. Habitat data were available for 9,484 reptile species.

Threats

All known historical, current and projected (within 10 years or 3 generations, whichever is longer; generation time estimated, when not available, from related species for which it is known; generation time recorded for 76.3% of the 186 species categorized as threatened under Red List criteria A and C1, the only criteria using generation length) threats were coded using the IUCN Threats Classification Scheme v.3.2, which follows a previously published study 42. Where possible, the scope (whole (>90%), majority (50–90%) or minority (<50%) of the population; or unknown) and severity (causing very rapid (>30%), rapid (>20%) or slow but notable (<20%) declines over 10 years or 3 generations, whichever is longer; negligible declines; or unknown) of each threat were recorded. Threat data were available for 1,756 of the 1,829 threatened reptile species.

Assessment review

Each assessment underwent two reviews. First, a scientist familiar with the species but not involved in the assessment reviewed the account for biological accuracy and accurate application of the Red List criteria. Once the assessors had revised the assessment satisfactorily, staff from the IUCN Red List Unit reviewed the assessment, primarily for accurate application of the Red List criteria. The assessors revised the assessment again, if necessary, to satisfy any concerns of the IUCN Red List Unit before the assessment was finalized.

Data limitations

Although we made an extensive effort to complete assessments for all reptiles, some data gaps remain.
Missing species

As of December 2020, 1,145 reptile species, primarily snakes and lizards, were omitted from the present study (including the phylogenetic diversity analyses) because they were described recently, after the previous comprehensive assessments for their region. Geographically, they are primarily from tropical regions (as are assessed reptiles), with an underrepresentation of African species (distribution of omitted species: Asia, 41%; Africa, 8%; Australia, 7%; Europe, 3%; North/Central America, 20%; South America, 19%; Caribbean, 5%; Oceania, 4%; percentages sum to more than 100% because some species occur in two regions). Because they are recently described, many are poorly known, may be rare or occur in a very restricted area, or occur in poorly surveyed areas that are often subject to high levels of human impact. As such, recently described species are more likely to receive a data-deficient or threatened Red List category than to be assessed as least concern 41. The net effect on our analyses is a slight underestimate of the number of threatened snakes and lizards, and plausibly a slight overestimate of least concern species. With tetrapod species described in the future likely to be small-ranged, threatened lizards and amphibians 43,44, surrogacy levels may decline from those reported here.

Geographical coverage

Although we made extensive efforts to map the current known distribution of each species, this information is incomplete for some species. Where appropriate, and following expert guidance, we interpolated between known localities if the ecological conditions appeared appropriate. In addition, species occurrence is unlikely to be spread evenly or entirely throughout the area depicted in range maps, with gaps expected, for example, in patches of unsuitable habitat.
Data-deficient species

For species assessed as data deficient (1,507 reptiles, 14.8%), there was inadequate information on the distribution, population status or threats (historical, current or projected) of the species (from both published sources and expert knowledge) to make a direct, or indirect, assessment of the risk of extinction. All species were assessed according to their recognized taxonomic circumscription at the time of assessment. Taxonomic uncertainty therefore did not in itself result in a data-deficient assignment, although some species were listed as data deficient because they are morphologically indistinguishable from another species, making estimates of distribution and abundance infeasible.

Time span of assessments

The assessments were completed between 1996 and 2020, with 1,503 assessments completed before 2011. The IUCN Rules of Procedure recommend reassessment every 10 years; thus, as of 2020, 15% of the assessments can be considered outdated. Of the species assessed in 1996–2010, slightly more were threatened (23.0%) than of the species assessed more recently (20.7%). This difference is largely explained by the greater percentages of crocodiles and turtles with outdated assessments (29% and 35%, respectively) compared with tuatara, lizards and snakes (0%, 12% and 17%, respectively), together with the highly threatened nature of crocodiles and turtles (Supplementary Table 2). The continuing deterioration of biodiversity globally 1 suggests that species with outdated assessments are more likely to be in higher threat categories today than when they were last assessed, causing an underestimation of current reptile threat status.
Analyses

Percentage of species threatened with extinction

To estimate the percentage of species threatened with extinction (categories critically endangered, endangered and vulnerable), we used the following formula, which assumes that data-deficient species contain the same proportion of threatened species as species that are not data deficient:

$${{\rm{Prop}}}_{{\rm{threat}}}=\frac{{\rm{CR}}+{\rm{EN}}+{\rm{VU}}}{N-{\rm{DD}}}$$

where Prop_threat is the best estimate of the proportion of species that are threatened; CR, EN, VU and DD are the numbers of species in each corresponding Red List category; and N is the number of species assessed (excluding extinct and extinct in the wild species).

Data for amphibians, birds and mammals

For all analyses that included data for amphibians, birds and mammals, we used version 2020-1 of the tabular and spatial data 3, downloaded from the IUCN Red List website in May 2020.

Threats

Threat calculations were restricted to species in threatened Red List categories (critically endangered, endangered and vulnerable). Multiple threats can affect a single species. Summaries of threats are for the first level of the IUCN classification scheme. Threats thought to affect only a minority of the global population (<50%; coded as 'minority') were not included. In addition, we removed from the analysis threats that were assessed to cause 'no declines' or 'negligible declines' (as indicated by the severity coding). We considered all threats without a scope or severity score to be major threats and retained them in the analysis.

Habitat

Analyses of habitat use were restricted to the first level of the IUCN habitat-classification scheme. We excluded habitats for which the major importance to the species was scored 'no' and suitability was scored 'marginal', and we considered all habitats without major importance or suitability scores to be suitable and of major importance and included them in the analyses.
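As a sketch, the formula above and the reported 18.0–32.8% bounds can be reproduced from counts given in this paper (1,829 threatened and 1,507 data-deficient species); the total of 10,175 assessed extant species is inferred from the reported percentages, not stated directly, and is therefore an assumption:

```python
def prop_threat(cr_en_vu: int, dd: int, n: int) -> float:
    """Best estimate, assuming data-deficient species are threatened at the
    same rate as data-sufficient species: (CR + EN + VU) / (N - DD)."""
    return cr_en_vu / (n - dd)

threatened = 1829       # CR + EN + VU (reported in the text)
data_deficient = 1507   # DD (reported in the text)
assessed = 10175        # N excluding EX and EW (inferred; an assumption)

best = prop_threat(threatened, data_deficient, assessed)
lower = threatened / assessed                      # no DD species threatened
upper = (threatened + data_deficient) / assessed   # all DD species threatened

print(f"{lower:.1%} - {upper:.1%}, best estimate {best:.1%}")
```

With these counts the script recovers the values in the text: 18.0%, 32.8% and a best estimate of 21.1%.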
We did not consider artificial habitats in the analyses. Only a small number of reptile species inhabit 'caves/subterranean' and 'marine coastal' habitats, so these were not included in Fig. 4; their threat prevalence is summarized in Supplementary Table 4.

Statistics

Statistical tests were designed to avoid the inclusion of multiple observations from the same species (because species can occur in multiple habitats and be threatened by multiple threats). To assess whether arid-habitat or forest species were more likely to be threatened, we included only species that were restricted to one of these habitat types. For the threats analyses, we compared species that occur in forests (including those that occur in forests and other habitats) with those that do not occur in forests. All tests were two-tailed Fisher's exact tests.

Geographical patterns

The geographical patterns of threat and phylogenetic diversity shown in Fig. 2 are for terrestrial species only (for reptiles, this excludes 87 species of marine turtles and sea snakes). Tetrapod classes vary widely in the numbers of pelagic marine species and in the methods used to map distributions. Restricting the analyses to terrestrial species ensured more-consistent analyses and avoided wide variation in summary values caused by small numbers of species. Analyses of the distribution maps used polygons either with the following IUCN map code designations or with no codes indicated:

Presence = extant (code 1) and probably extant (code 2)
Origin = native (code 1), reintroduced (code 2) and introduced (code 6)
Seasonality = resident (code 1), breeding season (code 2), non-breeding season (code 3) and passage (code 4)

Ranges for species categorized as critically endangered (possibly extinct) are coded as possibly extinct (code 4) and were excluded from the spatial analyses. All spatial analyses were conducted on a global 0.5° by 0.5° latitude–longitude grid (approximately 50 km at the Equator).
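The two-tailed Fisher's exact tests used throughout the habitat and threat comparisons operate on 2×2 contingency tables (for example, threatened versus not threatened by forest versus non-forest). A self-contained standard-library sketch, with made-up counts for illustration (the study's own tests were run in Excel):

```python
from math import comb

def fisher_exact_two_tailed(a: int, b: int, c: int, d: int) -> float:
    """Two-tailed Fisher's exact test for the 2x2 table [[a, b], [c, d]].

    Sums the hypergeometric probabilities of all tables with the same
    margins whose probability does not exceed that of the observed table.
    """
    row1, row2 = a + b, c + d
    col1 = a + c
    total = comb(row1 + row2, col1)

    def p_table(x: int) -> float:
        # probability of x in the top-left cell, given fixed margins
        return comb(row1, x) * comb(row2, col1 - x) / total

    p_obs = p_table(a)
    lo, hi = max(0, col1 - row2), min(col1, row1)
    return sum(p for x in range(lo, hi + 1)
               if (p := p_table(x)) <= p_obs + 1e-12)

# Hypothetical counts, not the study's data
p = fisher_exact_two_tailed(3, 1, 1, 3)
print(f"p = {p:.4f}")  # 0.4857 for this classic 4-versus-4 example
```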
To explore the influence of spatial resolution, we repeated the surrogacy and phylogenetic diversity analyses at a 100-km resolution. We converted polygon range maps (tagged with the appropriate codes as described above) to these grids. We used a global equal-area pseudocylindrical projection, Goode homolosine. We mapped the distribution of threatened species as a count of the number of species with ranges overlapping each grid cell.

Estimating the spatial distribution of disproportionate threat and phylogenetic diversity loss

We identified global areas in which each tetrapod class is disproportionately threatened compared with all other classes by comparing the species-richness-adjusted level of threat among the four tetrapod classes. First, for each grid cell, we identified the proportional threat level of each class by dividing the number of species in threatened Red List categories (vulnerable, endangered and critically endangered) by the total number of species of that class found in the cell. Second, for all grid cells in which at least five tetrapod species are present, we compared proportional threat values across the four classes and identified a grid cell as having a disproportionate threat level for a given class if: (1) the grid cell had a proportional threat level of 10% or higher for the class; and (2) the grid cell had a proportional threat level for the class at least twice as high as that for the next class. We assessed the sensitivity of the disproportionate threat patterns to our definition by varying the required degree of difference in proportional threat level between the highest and second-highest class.
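The per-cell rule just described can be sketched as a small function; the species counts below are hypothetical, while the thresholds follow the text (at least five tetrapod species present, proportional threat of 10% or higher, and at least twice the next-highest class):

```python
def disproportionate_class(counts):
    """counts: {class_name: (n_threatened, n_total)} for one grid cell.

    Returns the class that is disproportionately threatened in the cell,
    or None, applying the two conditions from the text.
    """
    if sum(total for _, total in counts.values()) < 5:
        return None  # too few species for a meaningful comparison
    props = {cls: thr / tot for cls, (thr, tot) in counts.items() if tot > 0}
    if len(props) < 2:
        return None
    ranked = sorted(props, key=props.get, reverse=True)
    top, runner_up = ranked[0], ranked[1]
    if props[top] >= 0.10 and props[top] >= 2 * props[runner_up]:
        return top
    return None

# Hypothetical cell: 4 of 10 reptiles threatened, lower proportions elsewhere
cell = {"reptiles": (4, 10), "birds": (2, 20),
        "mammals": (1, 15), "amphibians": (0, 5)}
print(disproportionate_class(cell))  # reptiles: 0.40 >= 2 * 0.10
```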
We identified the number of grid cells with disproportionate threat for each class when the class had a proportional threat level: (1) higher than any other class; (2) 25% or more higher than any other class; (3) 50% or more higher than any other class; (4) 100% or more higher than any other class; and (5) 200% or more higher than any other class. In the main text, we report results for the 100%-or-more threshold. Results for all thresholds are included in Extended Data Tables 1, 2.

Conservation strategies

We identified global conservation priorities for each tetrapod class using two alternative strategies: strategy 1 prioritized areas containing many threatened species with relatively highly restricted ranges, whereas strategy 2 prioritized areas core to the most range-restricted threatened species globally. We implemented both conservation strategies within the spatial conservation-planning software Zonation 45 and the R package zonator 46, using the additive benefit function and core-area Zonation algorithms for strategies 1 and 2, respectively, at 50-km and 100-km resolutions for threatened reptiles. The additive benefit function algorithm prioritizes areas by the sum, across all species, of the proportion of each species' global range included in a given grid cell, a quantity similar to weighted species endemism 47 and endemism richness 48. On the basis of this algorithm, cells containing many species that occur only in that cell or in few other cells receive the highest priority. The core-area Zonation algorithm prioritizes areas by the maximum, across all species, of the proportion of each species' global range included in a given grid cell: cells including the highest proportions of the ranges of the most range-restricted species are given the highest priority.
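The contrast between the two scoring rules can be illustrated with a toy example. This is not the Zonation implementation itself (which also removes cells iteratively and recomputes remaining range proportions); species ranges here are hypothetical sets of grid-cell ids, and each cell is assumed to hold a species' whole range share equally:

```python
def cell_scores(ranges, rule):
    """ranges: {species: set_of_cells}. rule: 'abf' or 'core'.

    'abf' (additive-benefit-function-like): sum over species present of
    1/range_size, so cells with many narrow-ranged species score highest.
    'core' (core-area-like): max over species of 1/range_size, so cells
    holding any extremely range-restricted species score highest.
    """
    cells = set().union(*ranges.values())
    scores = {}
    for cell in cells:
        fractions = [1 / len(r) for r in ranges.values() if cell in r]
        scores[cell] = sum(fractions) if rule == "abf" else max(fractions)
    return scores

# Hypothetical ranges: sp3 is a single-cell endemic in cell 'C'
ranges = {"sp1": {"A", "B"}, "sp2": {"A", "B", "C"}, "sp3": {"C"}}
abf = cell_scores(ranges, "abf")    # C wins via many summed fractions
core = cell_scores(ranges, "core")  # C wins outright via the endemic sp3
```

Under 'abf', cell C scores 1/3 + 1 = 1.33 versus 0.83 for A and B; under 'core', C scores 1.0 because the endemic sp3 has its entire range there.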
Therefore, comparing the two strategies, strategy 1 gives more importance to the number of species within grid cells (that is, more species means a higher summed proportion), potentially at the expense of the single most range-restricted species globally, which are instead prioritized directly by strategy 2. Because complementary representation problems such as these spatial prioritizations often have multiple solutions, we ran five iterations of each algorithm and summarized the variation across them.

Estimating surrogacy

To assess the degree to which conserving the diversity of threatened species of birds, mammals and amphibians (individually or combined) serves as a surrogate for conserving threatened reptile diversity, we calculated a species accumulation index (SAI) of surrogate effectiveness. The SAI is derived from the comparison of three curves: (1) the 'optimal curve' represents the accumulation of threatened reptile species diversity when conservation is planned using data for threatened reptiles directly; (2) the 'surrogacy curve' represents the accumulation of threatened reptile species diversity when conservation is planned using the threatened species diversity of a different class as a surrogate; and (3) the 'random curve' represents the accumulation of threatened reptile species diversity when conservation areas are selected at random. We estimated optimal, surrogate and random curves for each reptile–surrogate combination (birds, mammals and amphibians, individually and combined). Using 100 sets of randomly ordered terrestrial grid-cell sequences allowed us to generate 95% confidence intervals around a median 'random curve'. In addition, because we ran five iterations of each spatial prioritization algorithm for each tetrapod class, the optimal and surrogate curves were also summarized using the median and 95% confidence intervals across the five iterations.
We then derived the quantitative measure of surrogacy as SAI = (s − r)/(o − r), where s is the area under the surrogate curve, r is the area under the random curve and o is the area under the optimal curve. SAI = 1 when the optimal and surrogate curves are the same (perfect surrogacy). It lies between 0 and 1 when the surrogate curve is above the random curve (positive surrogacy), is zero when the surrogate and random curves coincide (no surrogacy), and is negative when the surrogate curve lies below the random curve (negative surrogacy). We calculated the SAI using R code modified from a previous study 49. For each reptile–surrogate combination, we report the median and 95% confidence intervals across all combinations of optimal, surrogate and random curves (5 target and surrogate curve iterations and 100 random curve iterations). Although not strictly a measure of surrogacy 25, we also calculated the spatial congruence (Spearman's rank correlation, analogous to a previously published approach 9) of Zonation priorities for each conservation strategy and spatial resolution.

Coverage by protected areas

We overlaid protected areas (polygons categorized as IUCN I–VI in the World Database of Protected Areas 50) on the ranges of all threatened tetrapods and classified species with ranges completely outside any protected area as unprotected.

Phylogenetic diversity

To calculate phylogenetic diversity 15, we used published time trees of mammals 51, birds 52 and amphibians 53. For reptiles, we combined two time trees: a comprehensive time tree containing 9,755 squamate species and the tuatara (Sphenodon punctatus) 16, and a turtle and crocodilian tree containing 384 species 17. The time trees contain some species lacking genetic data, added by taxonomic interpolation 54 to maximize taxonomic coverage. In total, we analysed 32,722 tetrapod species including 10,139 reptiles, 5,364 mammals, 9,879 birds and 7,239 amphibians.
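The SAI formula above is a ratio of areas under three accumulation curves; a minimal sketch using trapezoidal areas over curves sampled at equal steps (the curves here are toy values, not the study's outputs, which used the R code cited in the text):

```python
def auc(values):
    """Trapezoidal area under a curve sampled at unit intervals."""
    return sum((a + b) / 2 for a, b in zip(values, values[1:]))

def sai(surrogate, random_, optimal):
    """Species accumulation index: SAI = (s - r) / (o - r)."""
    s, r, o = auc(surrogate), auc(random_), auc(optimal)
    return (s - r) / (o - r)

# Toy accumulation curves (fraction of reptile diversity captured)
optimal = [0.0, 0.6, 0.9, 1.0]    # planning with reptile data directly
random_ = [0.0, 0.25, 0.5, 1.0]   # cells selected at random
surrogate = [0.0, 0.5, 0.8, 1.0]  # planning with another class's data

print(round(sai(surrogate, random_, optimal), 2))  # 0.73: positive surrogacy
```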
For squamates, and for turtles and crocodiles, 10,000 fully resolved trees were available. For each group, we randomly sampled 100 trees and combined them to obtain 100 fully resolved reptile time trees, to accommodate phylogenetic uncertainty. Similarly, we randomly sampled 100 amphibian and 100 mammal time trees from the 10,000 available. We thoroughly compared species name mismatches between the geographical and phylogenetic data to match synonyms and correct misspelled names. We also imputed species for which the genus (but not the species) was already present in the tree, for example newly described species (262 amphibian, 1,694 bird, 236 mammal and 777 reptile species). Imputed species were randomly attached to a node within the genus subtree. Because polytomies can result in an overestimation of phylogenetic diversity, we randomly resolved all polytomies using a previously published method 54 implemented in R code. This procedure was performed 100 times for birds, and once for each of the 100 amphibian, 100 mammal and 100 reptile time trees. We included 30,778 tetrapod species, each with geographical and phylogenetic data, in the phylogenetic diversity analyses: 6,641 amphibians, 8,758 birds, 5,550 mammals and 9,829 reptiles. For each class, we estimated phylogenetic diversity 14 for all species and after the removal of threatened species, at 50-km and 100-km resolution. To account for phylogenetic uncertainty (that is, the placement of interpolated species) in the phylogenetic diversity calculation for each of the 100 fully resolved trees per class, we conducted a sensitivity analysis using a previously described method 55. This method calculates an evolutionary distinctiveness score that (1) increases the total phylogenetic diversity of the clade when interpolated species are included and (2) corrects the evolutionary distinctiveness scores of species in genera with interpolated species (missing relatives).
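The link between evolutionary distinctiveness and phylogenetic diversity can be illustrated on a toy tree. This sketch assumes fair-proportion evolutionary distinctiveness, in which each branch length is split equally among its descendant tips, so the tip scores sum to the tree's total branch length, which is the phylogenetic diversity measure completed in the next paragraph (the tree and species names are hypothetical):

```python
# Toy rooted tree ((A:1, B:1):1, C:2); each branch is (length, tips_below)
branches = [
    (1.0, {"A"}),       # terminal branch to A
    (1.0, {"B"}),       # terminal branch to B
    (1.0, {"A", "B"}),  # internal branch above the (A, B) clade
    (2.0, {"C"}),       # terminal branch to C
]

def fair_proportion(branches, tips):
    """ED score per tip: each branch length shared equally among its tips."""
    ed = {t: 0.0 for t in tips}
    for length, below in branches:
        for t in below:
            ed[t] += length / len(below)
    return ed

ed = fair_proportion(branches, {"A", "B", "C"})
pd_total = sum(ed.values())  # equals the summed branch lengths (root excluded)
print(ed, pd_total)          # A and B: 1.5 each, C: 2.0; PD = 5.0
```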
Following this method, we calculated evolutionary distinctiveness scores 56 for each cell from the subtree including all species present in the focal cell with the R package caper 57 . For genera with interpolated species, the mean evolutionary distinctiveness score of non-interpolated species was assigned to interpolated species of that genus. For those genera, we computed a second evolutionary distinctiveness score corresponding to the mean evolutionary distinctiveness score of the focal genera (including interpolated species). For species belonging to genera with no interpolated species, the first and second evolutionary distinctiveness scores were identical. Next, we calculated the mean of the two evolutionary distinctiveness scores and reported this value as the evolutionary distinctiveness score of each species. Finally, we computed phylogenetic diversity as the sum of evolutionary distinctiveness scores. Therefore, phylogenetic diversity corresponds to Crozier’s version of phylogenetic diversity 58 , that is, the sum of the branch lengths connecting all members of a species assemblage without the root. Next, we reported median phylogenetic diversity, computed over 100 fully resolved trees for each class. In the figures, cells with fewer than five species were excluded to avoid outliers. Reporting summary Further information on research design is available in the Nature Research Reporting Summary linked to this paper. Data availability Taxonomic data for reptiles were from the Reptile Database ( ). All spatial and tabular data for the tetrapod analyses are permanently available at . Trees used for the phylogenetic diversity analyses are available at Zenodo ( ). In addition, assessment data, including range maps, for all tetrapods are publicly available on the IUCN Red List of Threatened Species website ( ). Occasionally, where a species may be threatened because of over-collection, sensitive distribution information is not publicly available. 
Protected area boundaries were from the World Database of Protected Areas ( ). Source data are provided with this paper. Code availability Python scripts used for the spatial analyses are permanently available at . No code was used for the Fisher’s exact tests, which were performed in Excel and are available with the tabular data at . Code used for the phylogenetic diversity, areas of disproportionate threat and surrogacy analyses is available at Zenodo ( ).
Reptiles are in trouble: In fact, over 21% of reptile species are threatened with extinction worldwide, according to a first-of-its-kind global assessment of more than 10,000 species. The findings show that some reptiles, including many species of crocodiles and turtles, require urgent conservation efforts to prevent their extinction. "The results of the global reptile assessment signal the need to ramp up global efforts to conserve them," said study co-leader Neil Cox, of the International Union for the Conservation of Nature. "Because reptiles are so diverse, they face a wide range of threats across a variety of habitats. A multifaceted action plan is necessary to protect these species, with all the evolutionary history they represent." The study was published Wednesday in the peer-reviewed British journal Nature. Study authors say the top threats to reptiles come from agriculture, logging, urban development and invasive species, although they acknowledge that the risk that climate change poses is uncertain. In all, of the 10,196 species assessed, they found that at least 1,829 species (21%) were threatened with extinction (categorized as being vulnerable, endangered or critically endangered). Crocodiles and turtles are among the most at-risk species, with 57.9% and 50% of those assessed under threat, respectively. Some good news: The research revealed that efforts already underway to conserve threatened mammals, birds and amphibians are more likely than expected to also benefit many threatened reptiles. Study authors say that many of the risks that reptiles face are similar to those faced by those other animal groups and suggest that conservation efforts to protect these groups—including habitat restoration and controlling invasive species—may have also benefited reptiles. "These study results show that reptile conservation research no longer needs to be overshadowed by that of amphibians, birds and mammals.
It is concerning, though, that more than a fifth of all known reptile species are threatened," said Mark Auliya of the Zoological Research Museum in Bonn, Germany. In addition to turtles and crocodiles, reptiles in the study included lizards, snakes and tuatara, the only living member of a lineage that evolved in the Triassic period 200 million to 250 million years ago. "Many reptiles, like the tuatara or pig-nosed turtle, are like living fossils, whose loss would spell the end of not just species that play unique ecosystem roles, but also many billions of years of evolutionary history," said Mike Hoffmann, of the Zoological Society of London. "Their future survival depends on us putting nature at the heart of all we do." Another expert, Maureen Kearney, a program director at the National Science Foundation, said that "the potential loss of one-fifth of all reptile species reminds us how much of Earth's biodiversity is disappearing, a crisis that is threatening all species, including humans."
10.1038/s41586-022-04664-7
Earth
Climate change causes landfalling hurricanes to stay stronger for longer
Li et al., Slower decay of landfalling hurricanes in a warming world. Nature (2020). DOI: 10.1038/s41586-020-2867-7, www.nature.com/articles/s41586-020-2867-7 Journal information: Nature
http://dx.doi.org/10.1038/s41586-020-2867-7
https://phys.org/news/2020-11-climate-landfalling-hurricanes-stronger-longer.html
Abstract When a hurricane strikes land, the destruction of property and the environment and the loss of life are largely confined to a narrow coastal area. This is because hurricanes are fuelled by moisture from the ocean 1 , 2 , 3 , and so hurricane intensity decays rapidly after striking land 4 , 5 . In contrast to the effect of a warming climate on hurricane intensification, many aspects of which are fairly well understood 6 , 7 , 8 , 9 , 10 , little is known of its effect on hurricane decay. Here we analyse intensity data for North Atlantic landfalling hurricanes 11 over the past 50 years and show that hurricane decay has slowed, and that the slowdown in the decay over time is in direct proportion to a contemporaneous rise in the sea surface temperature 12 . Thus, whereas in the late 1960s a typical hurricane lost about 75 per cent of its intensity in the first day past landfall, now the corresponding decay is only about 50 per cent. We also show, using computational simulations, that warmer sea surface temperatures induce a slower decay by increasing the stock of moisture that a hurricane carries as it hits land. This stored moisture constitutes a source of heat that is not considered in theoretical models of decay 13 , 14 , 15 . Additionally, we show that climate-modulated changes in hurricane tracks 16 , 17 contribute to the increasingly slow decay. Our findings suggest that as the world continues to warm, the destructive power of hurricanes will extend progressively farther inland. Main Hurricanes thrive on moisture. Moisture from warm tropical oceans fuels the intense winds of a hurricane heat engine 2 , 3 . In a warming world, the moisture supply is enhanced. Warmer oceans supply more moisture, and warmer air, according to the Clausius–Clapeyron relation 18 , holds more moisture. As a result, we expect the maximum intensity a hurricane can achieve over its lifetime to increase 6 , 9 . 
Indeed, as the world warms, the strongest hurricanes (which, compared with the weaker ones, are less affected by impeding factors, for example, wind shear) are getting stronger, with the most pronounced intensification seen for the North Atlantic hurricanes 8 . Without moisture, hurricanes wither. A landfall severs 1 , 19 , 20 a hurricane from the ocean, its moisture source. Consequently, the intensity decays rapidly. (When the intensity drops below 33 m s −1 , the hurricane, according to the Saffir–Simpson scale 21 , is termed a tropical storm; however, for simplicity, we refer to tropical storms also as hurricanes.) How might hurricane decay rates change in a warming world? In contrast to the extensive studies of hurricanes over ocean, this question has attracted scant attention. Decay timescale τ We study North Atlantic landfalling hurricanes (Fig. 1a ) over 1967–2018 using the best-track database Atlantic HURDAT2 (ref. 11 ), widely considered the most reliable database of all ocean basin databases. (See Methods for further discussion on the data.) For each hurricane, we analyse the intensity, V , during the first day past landfall, the period over which the hurricane inflicts most of the destruction. Over this period, V decays exponentially 4 , 5 : V(t) = V(0)e^(−t/τ), where t is the time past landfall and τ , the decay timescale, is a single parameter that characterizes the rate of decay. (After the first day, V ( t ) can no longer be characterized by a single parameter; see Methods .) The larger the τ , the slower the decay, and therefore, the stronger the hurricane. We focus on the variation—if any—of τ over the past half-century. Fig. 1: Effect of SST on the decay of North Atlantic landfalling hurricanes. We analyse 71 landfall events over 1967–2018 (see Methods ). a , Hurricane tracks over 1967–1992 (in blue) and 1993–2018 (in red); see panel b for the corresponding distribution of τ .
The dotted box, bounded by 10° N, 35° N, 75° W and 100° W, shows the pre-landfall region in which we compute the SST; also see Extended Data Figs 1 c, d and 2e . The map is from the MATLAB function worldmap . b , Histogram and probability density of τ . The average τ increases from 21.1 ± 1.3 h (1967–1992, 26 events) to 27.6 ± 2.3 h (1993–2018, 45 events), for which we also note ±1 standard error of the mean (s.e.m.). The error bars in the histogram are computed using the bootstrap sampling method and correspond to ±1 standard deviation (s.d.) in each bin (see Methods ). c – f , Time series of τ and SST. c , τ versus year (grey line); d , SST versus year (blue line); e , superposed τ versus year (grey line) and SST versus year (blue line); f , τ versus SST. We note that the τ time series echoes the SST time series with Pearson correlation r = 0.72. In panels c , d and f , we also show the error bars (which correspond to ±1 s.e.m.; see Methods ), the linear regression line (solid black line), and the 95% confidence band about the regression line (dotted black lines). The increase in τ and SST over time and the relationship between τ and SST are statistically significant (at 95% CI) and robust to the specifics of smoothing (Methods; Extended Data Tables 2 and 3 ). Source data Full size image For each landfall event, we compute τ using V ( t ); see Methods . From one event to another, the value of τ varies considerably. This value for any particular event is influenced by many factors, including non-climatic ones, such as the terrain underneath the hurricane 4 , 5 , 22 , 23 . To discern any potential effect of the climate on τ , we first analyse the distribution of τ on a multi-decadal timescale. In Fig. 1b , we plot the histogram and probability density of τ for two time periods, each spanning a quarter-century. In each period, the values of τ span a large range, signifying the influence of many factors on any individual event. 
However, it is also clear that with time higher values of τ are preferentially realized. Here we seek to understand the cause of this increase. We begin by examining the variations in τ at a multi-annual timescale. We average τ for all the landfall events in a given year and apply a 3-year smoothing, twice in a row, to this time series. (Because each value of τ in the time series is based on several events, this approach lessens the effects of non-climatic factors and random noise; at the same time, the smoothing can still preserve a sharp step response 24 .) In Fig. 1c , we plot the resulting τ time series. As expected from Fig. 1b , τ increases with time. Further, the increase is noticeably non-monotonic: the τ time series undulates about a linearly increasing trend. From this linear trend we note that over the past half-century τ has increased by 94%, from 17 h to 33 h. Put another way, whereas 50 years ago the intensity one day past landfall was 24% of the landfall intensity, that figure is now 48%. (For a typical translation speed of 5 m s −1 , one day past landfall corresponds to a distance of 432 km inland.) τ and sea surface temperature We next examine whether both the trend and multi-annual variability in the decay timescale may depend on climate. As a proxy for the climate, we first analyse the sea surface temperature (SST), using the HadISST database 12 . We average the SST in time over the hurricane season, June–November, and in space over a region abutting the coastal area of landfall (Fig. 1a ), and, finally, we smooth using the same procedure as for the τ time series. In Fig. 1d , we plot the resulting SST versus year, and in Fig. 1e , we superpose the τ time series and the SST time series. Notably, like the τ time series, the SST time series also undulates about a linearly increasing trend—and does so in consonance with the τ time series, with correlation r = 0.72 (Fig. 1e, f ). 
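The percentages quoted above follow directly from the exponential model: with V(t) = V(0)e^(−t/τ), the fraction of landfall intensity remaining one day (24 h) past landfall is e^(−24/τ). A quick check of the two endpoints of the linear trend:

```python
import math

def retention_one_day(tau_hours):
    # Fraction of landfall intensity remaining 24 h past landfall: V(24)/V(0) = e^(-24/tau)
    return math.exp(-24.0 / tau_hours)

print(round(100 * retention_one_day(17)))  # 24 (% of landfall intensity, tau = 17 h, late 1960s)
print(round(100 * retention_one_day(33)))  # 48 (% of landfall intensity, tau = 33 h, today)
```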
The foregoing analysis shows that τ and SST are reasonably well correlated on multi-annual timescales. Next, using computational simulations, we turn our attention to the causality that underlies this correlation. We simulate landfalling hurricanes using Cloud Model 1 (CM1), a three-dimensional, non-hydrostatic, nonlinear, time-dependent computational model that has been widely used to study the dynamics of idealized hurricanes 23 , 25 , 26 , 27 ; see Methods . First, we simulate and contrast the fate of four hurricanes that are first allowed to develop over warm oceans under identical conditions except for the SST (and the attendant environmental sounding). That is, SST is the sole control parameter in the simulations. The hurricanes intensify at different rates over the oceans. The warmer the ocean, the greater the moisture supply and, consequently, the faster the intensification (Fig. 2a ). When their intensities reach about 60 m s −1 , a category 4 hurricane on the Saffir–Simpson scale 21 , we instantaneously turn off the moisture flux throughout the bottom surface of the hurricanes 19 , 20 to represent a complete landfall. Thereafter, we again subject the hurricanes to identical conditions. Fig. 2: Effect of SST on the decay of simulated landfalling hurricanes. a , V versus t . For t < 0, the hurricanes develop over warm oceans; the different colours represent different SST. At t = 0, the hurricanes make landfall with V ≈ 60 m s −1 (Extended Data Fig. 4a,b ). The solid lines correspond to the moist simulations and the plus symbols to the dry simulations. (In the main text, we discuss the dry simulations in the context of how the storm moisture and SST affect the decay; in Methods, we discuss the dry simulations in the context of how the hurricane size may affect the decay.) b , τ versus SST. We note that the values of τ are larger than those in Fig. 1c ; these differences stem from the simplified setup we use for the simulations (Methods). 
c , Rainfall versus SST. This is the total rainfall accumulated inside a radius of 100 km and over the first two days past landfall. The qualitative trend in total rainfall is not sensitive to the choice of averaging radius or time period. Full size image Although the intensity at landfall is the same for all four hurricanes, their decay past landfall carries a clear signature of their development over the ocean before the landfall (Fig. 2a ). The intensities of the hurricanes that developed over warmer oceans decay at a slower rate. In other words, echoing the field observations, τ increases with SST (Fig. 2b ). But, unlike the field observations, where many factors can affect the decay, here we can unambiguously attribute the changes in τ to the attendant difference in SST. Now we discuss how the SST affects the decay. Central to our considerations is the role of moisture. At this juncture, it may be instructive to consider the landfall of real hurricanes for which an active source of moisture past landfall is immediately evident. At the official beginning of a landfall, the centre of a hurricane moves over land. But the moisture supply persists as roughly half of the hurricane still lies over ocean. This supply continually and rapidly wanes, becoming negligible approximately 3.5 h past landfall (Methods). This timescale is only a fraction of the period over which we analyse the field observations. Therefore, the causal link between the SST and the decay may not stem, for the most part, from this moisture supply; also see the later discussion on translation speed. More starkly, in the simulated landfalls, this moisture supply is absent, and yet the effect of the SST on the decay is apparent. To proceed, we turn our attention from an active source of moisture to one that may not be immediately evident: the ‘storm moisture’, which is the moisture stored in a hurricane during its passage over ocean and carried past landfall. 
We test its role by pairing each of the four simulated hurricanes (discussed above) with a partner. At the moment of landfall, the paired hurricanes are identical—that is, the same velocity field, the same pressure field, the same temperature field—except for one aspect: we remove the moisture (in all phases: vapour, liquid and ice) in the partner hurricanes. Thereafter, we evolve these ‘dry’ hurricanes over land subject to the exact same conditions as their moist counterparts. In Fig. 2a , we plot the decay for the four pairs of hurricanes. The causal role of storm moisture is now clear. The dry hurricanes decay at a notably faster rate compared with their moist partners—the storm moisture slows the decay of the moist hurricanes. Moreover, the decay rates of the dry hurricanes, unlike those of the moist hurricanes, are unaffected by their development over ocean. Indeed, the decay for the four dry hurricanes is indistinguishable. Devoid of storm moisture, SST exerts no influence on the decay of dry hurricanes. On the other hand, for moist hurricanes, the higher the SST, the greater the stock of the storm moisture, and, consequently, the slower the decay. We conclude that the storm moisture furnishes the causal link between τ and SST. A complementary aspect of this link is well known 9 , 28 , 29 —because the enhanced storm moisture eventually precipitates as rain, the rainfall from hurricanes increases with increase in the SST (Fig. 2c ). Additional factors Next, we discuss two additional factors that, in addition to the SST, might also have contributed to the observed slowdown of the decay. Translation speed: the translation speed of hurricanes could slow down in a warming world 30 , 31 . As a hurricane moves over land, a slower translation speed—specifically, its coastline-perpendicular component—allows the supply of moisture from ocean for a longer time, enhancing the storm moisture and thus promoting a slower decay past landfall. 
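The coastline-perpendicular component of the translation speed reduces to simple geometry. A sketch, assuming θ is the angle between the hurricane's heading and the inland-pointing coastline normal (the bearings below are invented; the paper derives v_t cos θ from track data as described in its Methods):

```python
import math

def coast_perpendicular_speed(v_t, heading_deg, coast_normal_deg):
    """v_t * cos(theta): the component of the translation speed directed
    perpendicular to the coastline (that is, straight inland)."""
    theta = math.radians(heading_deg - coast_normal_deg)
    return v_t * math.cos(theta)

# A hurricane translating at 5 m/s, heading 30 degrees off the coastline normal:
print(coast_perpendicular_speed(5.0, 120.0, 90.0))  # ~4.33 m/s
```

A hurricane moving parallel to the coast (θ = 90°) has zero inland component, whereas one striking head-on (θ = 0°) crosses the coastline at its full translation speed.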
To test this potential effect, we compute the time series of the coastline-perpendicular translation speed, v t cos θ , for the landfalling hurricanes in our study (Methods). From the plot of v t cos θ versus year, we note that there is no significant change (at 95% confidence interval, CI) over the past half-century (Fig. 3a ), and from the plot of τ versus v t cos θ , we note there is no significant relationship (at 95% CI) between the two (Fig. 3b ). This analysis suggests that the observed increase in τ over the past half-century is unlikely to be linked with the translation speed. However, for ocean basins or time periods where there is a pronounced slowdown in v t cos θ , its influence on τ may become discernible. Hurricane tracks: the tracks of hurricanes could systematically shift in a warming world 32 , 33 . The track changes can effect changes in the decay by subjecting the landfalling hurricanes to regions of distinct τ . The regional variation can stem from factors such as the terrain 4 , 5 , 20 or the shape of the coastline 34 . To test for track changes and their potential effect on τ , we first consider whether, similar to the poleward shift in the latitude of hurricane lifetime maximum intensity 35 , there is also a poleward shift in the latitude of the landfall events in our study (Methods). We find no significant change (at 95% CI) in the landfall latitude over the past half-century and no significant relationship (at 95% CI) between τ and landfall latitude (Extended Data Fig. 3a, b ). Fig. 3: Effect of hurricane motion on the decay of North Atlantic landfalling hurricanes. a , b , Effect of the coastline-perpendicular translation speed: a , v t cos θ versus year (grey line); b , τ versus v t cos θ . We also show the error bars for v t cos θ and τ (which correspond to ±1 s.e.m.), the linear regression line (solid black line), and the 95% confidence band about the regression line (dotted black lines). 
The time series of v t cos θ is smoothed using the same procedure as the τ time series. c , d , Effect of the hurricane track. c , Landfall events in region E (US East Coast; the landfalls are mostly from recurving tracks) and region W (Gulf of Mexico and Caribbean; the landfalls are mostly from straight-moving tracks). Each circle marks the centroid location (Methods) of an event (1967–1992 in blue and 1993–2018 in red), with its size proportional to the corresponding τ of the event. The map is from the MATLAB function worldmap . d , τ in region E and region W and over 1967–1992 (in blue) and 1993–2018 (in red) for the 71 landfall events of our study. We also show error bars for τ (which correspond to ±1 s.e.m.) and list the fraction of events corresponding to each region and time period. The number of hurricane events for regions E and W are, respectively, 4 and 22 (over 1967–1992) and 13 and 32 (over 1993–2018). Full size image Turning our attention from the poleward shift, next we note that studies 16 , 17 of North Atlantic hurricanes report an eastward shift in their tracks. Specifically, as the climate warms, the fraction of landfall events on the United States East Coast increases while the fraction of landfall events on the Gulf of Mexico and Caribbean decreases. (The corresponding tracks, based on their shapes, are termed, respectively, ‘recurvers’ and ‘straight movers’.) To test whether the landfall events in our study also manifest a similar trend, we divide the events into two regions, region E and region W (Fig. 3c ), and two (quarter-century long) time periods. We find that the fraction of the events indeed shifts towards region E with time (Fig. 3d ). Further, in any given time period, the decay in region E is slower than the decay in region W (Fig. 3d ), as has also been noted previously 22 , 36 (but the precise causes of this regional variation remain unclear). 
It follows that the track changes preferentially increase the fraction of the events that correspond to a slower decay and therefore contribute to an increase in τ with time. By computing the increase in τ resulting from track changes and from SST increase separately, we estimate their relative contributions to be, respectively, about 26% and about 74%; see Methods . Concluding remarks In summary, we have shown that over the past 50 years the value of τ for North Atlantic landfalling hurricanes has increased by 94%. This increase is primarily fuelled by the enhanced stock of storm moisture supplied by warmer oceans. An additional contribution stems from the climate-modulated changes in the tracks of the hurricanes. Unlike the effect of enhanced moisture, which invariably slows the decay, the effect of track changes is tied to the regional differences in the values of τ where the hurricanes make landfall. For North Atlantic landfalling hurricanes, our analysis suggests that the effect of the eastward shift in the tracks has been consonant with the effect of the contemporaneous SST increase. As potentially promising topics for future work, we suggest (1) study of other factors (such as extratropical interactions; see Methods ) that may affect the decay; (2) study of landfalling hurricanes from other ocean basins (see Methods for a note of caution regarding the reliability of global data). Further, our findings call attention to the critical role of storm moisture in the dynamics of decay. However, the prevailing theoretical models of decay 13 , 14 , 15 treat a landfalling hurricane as a dry vortex that decays owing to the frictional drag with the land underneath. Lacking moisture, these non-thermodynamic models cannot furnish any link between the climate and the decay. We argue that including moist thermodynamics as an essential component of a theoretical model of decay may help to elucidate the key processes that underlie the intricate dynamics of decay.
Last, we note that our findings have direct implications for the damage inflicted by landfalling hurricanes in a warming world. Even when the intensity at landfall remains the same (Extended Data Fig. 2c ), the slower decay means that regions far inland face increasingly intense winds (accompanied by heavy rainfall). Consequently, the economic toll incurred keeps soaring. This factor may shed new light on a puzzling trend. For over a century, the frequency and intensity of landfalling hurricanes have remained roughly unchanged 9 , 37 , 38 , but their inflation-adjusted economic losses have steadily increased 37 , 38 . It has been argued 37 , 38 that this increase stems entirely from societal factors (the growth in coastal population and wealth), with the warming climate playing no part. We propose that this accounting may be missing the costs tied to the slower decay of the hurricanes in a warming world. Finally, for hazard planning, we call attention to inland regions—these are less prepared for hurricanes than their coastal counterparts and therefore are more vulnerable to damage from slowly decaying hurricanes. Methods North Atlantic landfalling hurricanes We analyse field data of North Atlantic landfalling hurricanes from the best-track database 11 Atlantic HURDAT2. This database provides the hurricane intensity and other parameters every 6 h. We focus on the time period 1967–2018; we do not consider the pre-1967 data because they are less reliable 4 , 39 . In this period, we study all the ‘landfall events’ (meaning, each time a hurricane makes landfall; a single hurricane may have multiple landfalls) that meet two criteria. First, at the first inland data point, V ≥ 33 m s −1 , the minimum intensity for ‘hurricane wind’ according to the Saffir–Simpson scale 21 . Second, there are at least four continuous inland data points (this excludes the hurricanes whose stay over land is less than one day). 
(We determine the inland data using the MATLAB function land_or_ocean 40 .) Applying these criteria yields 75 events. Of those, we exclude one event (hurricane Georges’s 1998 landfall over Cuba) where the intensity increased past landfall. Further, to prevent the statistics being skewed by outliers, we exclude three events where the value of the decay timescale, τ , was abnormally large (>2 s.d. above the mean value; Extended Data Fig. 2a ). (Including these outliers does not qualitatively affect the results; Extended Data Fig. 2b .) By excluding these four events, our study comprises 71 events. For better statistics, it would be advantageous to consider landfalling hurricanes from all of the ocean basins. The intensity data from the different ocean basins, however, differ widely in reliability 8 , 41 . Collating data from the different basins, therefore, can introduce large noise that may obscure a climatic signature. We thus focus on the North Atlantic landfalling hurricanes, whose best-track database 11 , Atlantic HURDAT2, is widely considered the most reliable of all the ocean basin databases. Decay timescale, τ For each landfall event, we compute τ from the time series V(t) for t = t1, t2, t3 and t4, the first four synoptic times (0000 utc , 0600 utc , 1200 utc and 1800 utc ) past landfall. To this time series, we fit the Kaplan–DeMaria model of exponential decay 4 , 5 : V(t) = V(0)e^(−t/τ), which can be expressed as V(t) = V(t1)e^(−(t−t1)/τ). Specifically, we compute the best-linear-fit line to the data points plotted as ln(V(t)/V(t1)) versus t − t1; the slope of this line equals −1/τ (Extended Data Fig. 1a ). We note that the original Kaplan–DeMaria model, which applies to V(t) for a period of more than one day, contains two parameters: τ and an additive constant. Over the first day, however, V(t) conforms well to the exponential model with only one parameter, τ .
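The fitting procedure just described is straightforward to sketch. A minimal version, assuming noise-free synthetic intensities at the four synoptic times and a least-squares line through the origin (consistent with the single-parameter form of the model; the paper's exact regression details may differ):

```python
import numpy as np

def decay_timescale(t_hours, v):
    """Fit V(t) = V(t1) e^(-(t - t1)/tau) to intensities past landfall and
    return tau (hours): the slope of ln(V(t)/V(t1)) versus t - t1 is -1/tau."""
    t = np.asarray(t_hours, dtype=float)
    v = np.asarray(v, dtype=float)
    x = t - t[0]                  # hours since the first inland data point
    y = np.log(v / v[0])          # ln(V(t)/V(t1))
    slope = np.sum(x * y) / np.sum(x * x)   # least squares through the origin
    return -1.0 / slope

# Synthetic check: data generated with tau = 24 h recover tau.
t = [0.0, 6.0, 12.0, 18.0]
v = [40.0 * np.exp(-ti / 24.0) for ti in t]
print(decay_timescale(t, v))  # ~24.0
```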
This can be verified by computing the adjusted r 2 as the goodness of fit. For most events in our study, the adjusted r 2 ≥ 0.9 (Extended Data Fig. 1b ). In Fig. 1b , we plot the histogram and probability density for τ . To calculate the histogram, we bin with a window of 10 h. We also plot the error bars for the histogram, which we calculate using the bootstrap sampling method (repeated random sampling with replacement in each time series). The error bars correspond to ±1 s.d. in each bin. To calculate the probability density, we use the MATLAB function ksdensity with a window of 10 h. To compute the time series of τ (Fig. 1c ) and of other factors, we apply a 3-year smoothing, twice in a row, using the MATLAB function smooth and set its option span equal to 3 years. We compute the corresponding s.e.m. as \(\mathrm{s.d.}/\sqrt{N}\) , where N is the number of events in the 5-year window (because the smoothing used a 3-year window twice) centred on a given year. (The s.e.m. for the SST time series is computed differently; we consider the SST data from all the over-ocean grid points inside the dotted box of Fig. 1a for the hurricane seasons in any given 5-year window.) From Fig. 1c , we compute the increase in τ assuming a linear trend. It is possible that the trend is nonlinear, or piecewise linear, with the increase in τ being more pronounced over the past two decades. For simplicity, we consider the linear trend. Last, we note that the methods used to estimate the best-track value of V have steadily improved over time. For inland data, the most important of these changes is the increase in the density of weather stations. Because a denser sampling improves the odds of finding the true maximum wind speed, the recorded V becomes biased towards higher values with time. We note, however, that we compute τ not using the value of V but of the ratio V ( t )/ V ( t 1 ). 
If we denote the bias in V by δ V , the resulting bias in this ratio can be estimated as δ V ( t )/ V ( t ) − δ V ( t 1 )/ V ( t 1 ). Consequently, τ is less sensitive to the bias than V . In particular, if δ V ∝ V (see, for example, ref. 42 ), then there is no effect of the bias on τ . Although we expect that the bias does not substantially affect the τ time series, with the available data it is very difficult to precisely quantify the effect. Further, if there are appreciable differences in the methods used to estimate the different V ( t ) for any event, its effect should also be considered. Future studies may seek to quantify precisely how changes in the methods of estimating V may affect the τ time series. Statistical significance We judge the relationship between two variables to be statistically significant if the two-sided, 95% CI of their slope from linear regression excludes zero (see, for example, ref. 30 ). In computing the CI, we adjust the degrees of freedom if the time series of either of the variables has serial correlation (which we test using the Durbin–Watson test). Specifically, first we compute the decorrelation timescale from the autocorrelation of the residuals (see, for example, ref. 43 ). For example, the τ time series is not significantly autocorrelated after 2 yr (Extended Data Table 3b ). Taking its decorrelation timescale as 3 yr, we then compute the effective degrees of freedom as n /3 − 2 = 15, where n = 52 is the sample size. Using the effective degrees of freedom, we compute the CI. (We use the same procedure to compute the confidence bands plotted in the figures, for example, Fig. 1c, d, f .) In Extended Data Tables 2 and 3c , we list the results of this analysis, where, for reference, we also list the uncorrected P values (which we compute using the full degrees of freedom). Smoothing and robustness of results As noted in the main text, we smooth all the time series to lessen the effects of non-climatic factors and random noise. 
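The double 3-year smoothing used throughout can be sketched as two passes of a centred moving average. A minimal version, assuming MATLAB smooth's behaviour of shrinking the window to a single point at the series boundaries:

```python
import numpy as np

def smooth3(y):
    # Centred 3-point moving average; endpoints keep their own values,
    # mirroring the shrinking window of MATLAB's smooth at the boundaries.
    y = np.asarray(y, dtype=float)
    out = y.copy()
    out[1:-1] = (y[:-2] + y[1:-1] + y[2:]) / 3.0
    return out

def smooth_twice(y):
    # 3-year smoothing applied twice in a row, as for the tau and SST series;
    # each smoothed value then draws on an effective 5-year window.
    return smooth3(smooth3(y))

print(smooth_twice([1.0, 2.0, 3.0, 4.0, 5.0]))  # a linear series passes through unchanged
```

A purely linear series is left unchanged in the interior, so the smoothing damps year-to-year noise without biasing a linear trend.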
Further, smoothing yields more reliable s.d. (and, therefore, s.e.m.) by increasing the number of samples per data point (Extended Data Table 3a ). However, smoothing also induces serial correlation. The unsmoothed time series has either no serial correlation (for example, SST time series) or a small decorrelation timescale (<2 years; for example, τ time series). With an increase in the time window of smoothing, the decorrelation timescale monotonically increases. Accounting for the decorrelation timescale, we find that the statistical significance of the time series, from unsmoothed to variously smoothed, remains robust to the specifics of the smoothing (Extended Data Table 3c ). Computational simulations We perform computational simulations of landfalling hurricanes using Cloud Model 1 (CM1, version 18.3) 25 , 26 , 27 . See Extended Data Table 1 for a list of the simulation parameters. To simulate the effect of global warming, we change the SST and the attendant environmental sounding (the vertical profiles of temperature and humidity; these profiles are based on measurements 44 over the North Atlantic ocean during the hurricane seasons of 1995–2002). The actual change in the sounding in a warming climate can be complex, but, to focus on the salient aspects, here we follow ref. 6 and consider a simplified scenario. We modify the sounding by (1) shifting its temperature profile (uniformly at all altitudes) to match the change in the SST and (2) changing its humidity profile so that the relative humidity profile remains the same 6 , 44 . All other parameters in the sounding are kept unchanged. To simulate a complete landfall, we set the coefficient of enthalpy, C e , equal to zero 19 , 20 . This turns off the flux of moisture (and sensible heat) from throughout the bottom surface of the hurricane. For simplicity, we keep all the other simulation parameters the same as for the hurricane over the ocean. We calculate τ using V ( t ). 
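The two-step warming of the environmental sounding (shift the temperature profile uniformly, rescale moisture so relative humidity is unchanged) can be sketched numerically. This is an illustrative Python sketch, not the CM1 configuration; the Bolton (1980) saturation-vapour-pressure approximation is our choice, as the paper does not specify a formula:

```python
import numpy as np

def saturation_vapor_pressure_hpa(T_kelvin):
    """Bolton (1980) approximation to saturation vapour pressure (hPa)."""
    Tc = np.asarray(T_kelvin, float) - 273.15
    return 6.112 * np.exp(17.67 * Tc / (Tc + 243.5))

def warm_sounding(T, e, dSST):
    """Shift the temperature profile T uniformly by dSST (K) and rescale
    the vapour-pressure profile e so relative humidity e/e_s(T) is the
    same at every level."""
    rh = e / saturation_vapor_pressure_hpa(T)
    T_new = T + dSST
    e_new = rh * saturation_vapor_pressure_hpa(T_new)
    return T_new, e_new
```

Because saturation vapour pressure increases with temperature, the rescaled profile carries more moisture at every level, which is the mechanism the main text links to slower decay.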
We follow the same procedure as for the field data, except now we consider the first two days past landfall. (Calculating τ using V ( t ) over the first day past landfall yields comparable results.) This is because whereas in the field data V ( t ) decays as V (0)e − t / τ over the first day, the V ( t ) for the simulation data are in good accord with the exponential model over the first two days; as we discuss presently, this difference is due to the simple model of surface drag in the simulations. We conduct sensitivity tests on how τ is affected by the intensity at landfall (Extended Data Fig. 4c,d ). We find that the increasing trend of τ versus SST is qualitatively the same for hurricanes making landfall at different intensities. We also conduct sensitivity tests on how τ is affected by the surface drag (as quantified by the coefficient of momentum, C D ). We find that an increase in C D notably reduces the value of τ , from 38 h for the default C D to 14 h for C D = 0.006, but qualitatively the decay trends remain unaffected by the value of C D (Extended Data Fig. 4e, f ). For the simulations reported in the main text, we use the same surface drag for hurricane over ocean and over land. We note, however, that the surface drag over land is typically higher than that over ocean. We argue that it is this difference that makes the values of τ from the simulations larger than those from the field observations (compare Fig. 1c and Fig. 2b ). Last, we conduct axisymmetric simulations and find that the trends for τ remain robust (Extended Data Fig. 5 ). Timescale of completing landfall We estimate the timescale of completing landfall— from when the centre of a hurricane moves over land to when the supply of the moisture from the ocean underneath becomes negligible— using a simple model of landfall, as follows. We consider an axisymmetric hurricane moving from ocean to land with a constant translation speed v t and at an angle θ (Extended Data Fig. 7a ). 
We denote the hurricane’s effective radius of moisture supply 45 , 46 as R o ; for any radial location r > R o , the supply of the moisture from the ocean underneath is negligible. (A typical value of R o is about 3 R , where R is the radius of maximum wind, RMW.) For r ≤ R o , we approximate the hurricane’s surface-wind profile, v ( r ), as the modified Rankine vortex 47 , 48 : \(v=V\left(\frac{r}{R}\right)\) for r ≤ R and \(v=V{\left(\frac{R}{r}\right)}^{1/2}\) for r ≥ R , with V ( t ) = V (0)e − t / τ . The enthalpy flux due to the supply of moisture (and sensible heat) from the ocean, F k , can be expressed by the bulk formula F k = C e ρV ( k * − k ), where ρ is the density of air, k is the specific enthalpy of air in the boundary layer, and k * is the saturated enthalpy of air in the surface layer. Following ref. 6 , we assume the relative humidity in the boundary layer as 75%. Owing to the shrinking contact area between the bottom of the landfalling hurricane and the ocean underneath, the moisture supply wanes over time. For the model outlined above, the timescale of completing landfall depends on the values of R o , R , v t , θ and τ . (It also depends on the shape of the coastline 22 , 34 ; for simplicity, here we consider a straight coastline.) For the typical values of R o = 100 km, R = 30 km, v t = 5 m s −1 , cos θ = 0.9 (Extended Data Fig. 1f ) and τ = 25 h (the average value for North Atlantic landfalling hurricanes over 1967–2018), the timescale for the enthalpy flux to drop to 10% of its value over the ocean is about 3.5 h (Extended Data Fig. 7b ). Hurricane size and decay We have noted that SST affects hurricane decay via the storm moisture. But SST may also affect the decay by modulating the size of a hurricane. In idealized f -plane simulations, higher SST results in larger hurricanes 49 , 50 , 51 . (In real hurricanes, however, the relationship between SST and hurricane size is more complex 51 .) 
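The landfall-completion model can be integrated numerically. The sketch below is our own discretization, and it simplifies further by folding the bulk-formula prefactor C e ρ( k * − k ) into a constant, so the flux is proportional to the area integral of v ( r ) over the part of the disk r ≤ R o still over the ocean; with the typical parameter values above it gives a completion timescale of a few hours, the same order as the ~3.5 h quoted:

```python
import numpy as np

def rankine(r, V, R):
    """Modified Rankine surface-wind profile: v = V r/R inside the RMW,
    v = V (R/r)^(1/2) outside."""
    r = np.asarray(r, float)
    return np.where(r <= R, V * r / R, V * np.sqrt(R / r))

def landfall_completion_time(Ro=100e3, R=30e3, vt=5.0, cos_theta=0.9,
                             tau_h=25.0, frac=0.10):
    """Hours until the (simplified) ocean enthalpy flux drops to `frac`
    of its all-ocean value, for a storm translating inland at vt*cos_theta
    across a straight coastline."""
    r = np.linspace(500.0, Ro, 400)             # radial grid (m)
    phi = np.linspace(0.0, 2 * np.pi, 720, endpoint=False)
    rr, pp = np.meshgrid(r, phi)
    dA = rr * (r[1] - r[0]) * (phi[1] - phi[0])  # polar area elements
    w = rankine(rr, 1.0, R)                     # wind shape, V(0) factored out
    F_ocean = np.sum(w * dA)                    # reference: whole disk over ocean
    for t_h in np.arange(0.0, 48.0, 0.05):
        d = vt * cos_theta * t_h * 3600.0       # centre's inland distance (m)
        ocean = rr * np.cos(pp) >= d            # ocean half-plane x >= d
        F = np.exp(-t_h / tau_h) * np.sum(w * dA * ocean)
        if F <= frac * F_ocean:
            return t_h
    return float("nan")
```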
Indeed, in our simulations, the hurricane size increases with SST—as SST increases from 300 K to 303 K, the corresponding RMW increases from 18.2 km to 22.5 km (Extended Data Fig. 6 ). To test whether the hurricane size directly affects the decay, we consider dry hurricanes (Fig. 2a ). At landfall, the size of these hurricanes, just like their moisture-laden counterparts, increases with SST. And yet, their decay past landfall is indistinguishable. This suggests that the hurricane size may not have a discernible role in influencing the decay. We note, however, that indirect effects —in particular, effects that account for how the storm moisture depends on the hurricane size— may be important to consider. Translation speed time series To compute the time series of the coastline-perpendicular translation speed, v t cos θ , we first compute v t and θ for each landfall event. We compute v t using the coordinates of the first four inland locations tabulated in Atlantic HURDAT2. (In computing τ , we used the same four locations.) v t is the average translation speed over these locations. We compute θ using the coordinates of the first two inland locations and the local shape of the coastline (Extended Data Fig. 1e ). In any given year, we average the v t cos θ value for all events and then smooth this data using the same procedure as we employed for the τ time series. We plot the resulting v t cos θ time series in Fig. 3a . Also, we note that, like the v t cos θ time series, there is no significant change (at 95% CI) in the v t time series (which we compute using the same procedure as the v t cos θ time series) over the past half-century (Extended Data Fig. 3e ). Over this time period, previous studies (see figure 3a in ref. 30 , figure 1d in ref. 52 and figure 3b in ref. 31 ) also show no substantial change in v t . Finally, over this time period, there is no significant relationship (at 95% CI) between τ and v t (Extended Data Fig. 3f ). 
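The translation-speed computation from consecutive 6-hourly track fixes can be sketched as follows (a Python sketch; using the haversine great-circle distance between fixes is our assumption for how distances are measured):

```python
import numpy as np

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in km (haversine formula)."""
    p = np.pi / 180.0
    a = (np.sin((lat2 - lat1) * p / 2) ** 2
         + np.cos(lat1 * p) * np.cos(lat2 * p)
         * np.sin((lon2 - lon1) * p / 2) ** 2)
    return 2.0 * 6371.0 * np.arcsin(np.sqrt(a))

def translation_speed(lats, lons, dt_hours=6.0):
    """Average translation speed (m/s) over consecutive track fixes
    spaced dt_hours apart (HURDAT2 fixes are 6-hourly)."""
    d_km = [haversine_km(lats[i], lons[i], lats[i + 1], lons[i + 1])
            for i in range(len(lats) - 1)]
    return np.mean(d_km) * 1000.0 / (dt_hours * 3600.0)
```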
Latitude time series For each landfall event, we compute the centroid of the first four inland locations tabulated in Atlantic HURDAT2. In any given year, we average the latitudes of the aforementioned centroids for all events and then smooth this data using the same procedure as we employed for the τ time series. Hurricane tracks and decay We analyse the effect of track changes on τ by dividing the landfall events into two regions (E and W) and two time periods (1967–1992 and 1993–2018); see Fig. 3c, d . (With our sample size, 71 events, it is difficult to study the spatio-temporal variation of τ at a finer scale. For example, increasing the number of regions or time periods from two to three and plotting the data as in Fig. 3d results in overlapping error bars for τ .) The overall τ for both regions taken together is 21.1 h for the first period and 27.6 h for the second period (Fig. 1b )— the increase is 6.5 h. This increase has contributions from both track changes and SST increase. Next we estimate their relative contributions using Fig. 3d . In the first period, τ = 28.4 h for region E and 19.8 h for region W. The respective fractions of the events are 15.4% and 84.6%. From the first to the second period, the value of τ in both regions increases (36.2 h in region E and 24 h in region W). The increase may be attributed to the contemporaneous increase in the regional SSTs. Had the hurricane tracks remained unchanged, the fraction of events would have remained the same. In this scenario, we can compute the overall τ resulting from SST increase as the weighted average 15.4% × 36.2 h + 84.6% × 24 h = 25.9 h. However, because of the track changes, the fraction of the events shifts eastward (28.9% in region E and 71.1% in region W). As a result, the contribution of region E increases, and the overall τ becomes 27.6 h. 
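The weighted-average bookkeeping above can be reproduced directly (values transcribed from the text; differences in the last decimal place are rounding):

```python
# Event fractions and regional decay timescales (hours) from the text.
f_E1, f_W1 = 0.154, 0.846        # fractions of events, 1967-1992
f_E2, f_W2 = 0.289, 0.711        # fractions of events, 1993-2018
tau_E1, tau_W1 = 28.4, 19.8      # regional tau, first period
tau_E2, tau_W2 = 36.2, 24.0      # regional tau, second period

tau_1 = f_E1 * tau_E1 + f_W1 * tau_W1          # overall tau, first period (~21.1 h)
tau_sst_only = f_E1 * tau_E2 + f_W1 * tau_W2   # SST increase, tracks fixed (~25.9 h)
tau_2 = f_E2 * tau_E2 + f_W2 * tau_W2          # overall tau, second period (~27.6 h)

sst_contrib = tau_sst_only - tau_1             # ~4.8 h, about 74% of the increase
track_contrib = tau_2 - tau_sst_only           # ~1.7 h, about 26% of the increase
```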
Thus, in the 6.5 h increase in τ from the first to the second period, the SST increase contributes 25.9 h − 21.1 h = 4.8 h, or 74%, and the track changes contribute the remainder, 27.6 h − 25.9 h = 1.7 h, or 26%. The relative contributions of SST increase and track changes on τ may also be estimated using a different approach: partial correlation 53 . If we do not divide the events into distinct regions, we can study the effect of the track changes at a multi-annual timescale. We compute the longitude time series for the centroids of the event locations by following the same procedure as we employed for the latitude time series. In accord with the above analysis, the longitude time series shows a significant eastward shift (at 95% CI) over the past half-century (Extended Data Fig. 3c ). Further, there is a significant relationship (at 95% CI) between τ and longitude; correlation r = −0.61 (Extended Data Fig. 3d ; Extended Data Table 2 ). And, as we have discussed in the main text, there is a significant relationship (at 95% CI) between τ and SST; correlation r = 0.72 (Fig. 1f ; Extended Data Table 2 ). Finally, longitude and SST are not independent—there is a significant relationship (at 95% CI) between them; correlation r = −0.59 (Extended Data Table 2 ). To estimate the relative contributions of SST and tracks, we use the aforementioned values of correlations and find the partial correlation between τ and longitude (with SST held constant) = −0.33 (with P < 10 −4 ) and the partial correlation between τ and SST (with longitude held constant) = 0.56 (with P value approximately 10 −2 ). Thus, in accord with the analysis of Fig. 3d , the relative magnitudes of the two partial correlations suggest that the primary contribution to the increase in τ stems from the SST increase, with an additional contribution from the track changes. 
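The two partial correlations follow from the standard first-order partial-correlation formula applied to the three pairwise correlations quoted above (a sketch; the P values additionally require the sample size and are not reproduced here):

```python
import math

def partial_corr(r_xy, r_xz, r_yz):
    """First-order partial correlation of x and y, controlling for z:
    r_xy.z = (r_xy - r_xz*r_yz) / sqrt((1 - r_xz^2)(1 - r_yz^2))."""
    return (r_xy - r_xz * r_yz) / math.sqrt((1 - r_xz**2) * (1 - r_yz**2))

# Pairwise correlations from the text.
r_tau_lon, r_tau_sst, r_lon_sst = -0.61, 0.72, -0.59

p_tau_lon_given_sst = partial_corr(r_tau_lon, r_tau_sst, r_lon_sst)  # ~ -0.33
p_tau_sst_given_lon = partial_corr(r_tau_sst, r_tau_lon, r_lon_sst)  # ~ 0.56
```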
Also, we note that the τ –longitude relationship, unlike the τ –SST relationship, is not significant (at 95% CI) when both variables are detrended (Extended Data Table 2 ). Thus, the τ –longitude relationship is largely manifest in the long-term trend rather than in the multi-annual variability, whereas the τ –SST relationship extends to both. Extratropical interaction and decay An extratropical interaction can affect the decay of a landfalling hurricane. This interaction can cause the hurricane to undergo an extratropical transition 54 . Here we undertake a preliminary analysis of whether our results concerning the τ time series are affected by extratropical transitions. Of the 71 landfall events of our study, the Atlantic HURDAT2 database marks 5 as having undergone an extratropical transition within the first day past landfall. We exclude these events and recompute the τ time series. We find that excluding the landfalls with the extratropical transitions leaves the results largely unaffected (Extended Data Fig. 2d ). More broadly, an extratropical interaction can affect the decay without an extratropical transition. Consider, for example, interaction with the jet stream. A recent analysis 55 showed that, over the past four decades, the vertical shear attendant on the North Atlantic jet stream has been increasing, which, in turn, is caused by changes in the climate. If a landfalling hurricane interacts with the jet stream, the increased wind shear may cause its intensity to decay rapidly. The overall effect on τ will be mediated by the details of the interaction, which are complex and difficult to study. Future studies may shed light on the effect of such extratropical interactions on the decay of landfalling hurricanes. Data availability Hurricane intensity: the Atlantic HURDAT2 database is available at . SST: the HadISST database is available at . 
The data for the intensity and other parameters for the 71 landfall events of our study are included in the Supplementary Information . The data for the τ time series and the SST time series plotted in Fig. 1 are provided with the paper. Source data are provided with this paper. Code availability The Cloud Model 1 (CM1) source code is available at . Change history 19 April 2021 A Correction to this paper has been published:
Climate change is causing hurricanes that make landfall to take more time to weaken, reports a study published 11th November 2020 in the journal Nature. The researchers showed that hurricanes that develop over warmer oceans carry more moisture and therefore stay stronger for longer after hitting land. This means that in the future, as the world continues to warm, hurricanes are more likely to reach communities farther inland and be more destructive. "The implications are very important, especially when considering policies that are put in place to cope with global warming," said Professor Pinaki Chakraborty, senior author of the study and head of the Fluid Mechanics Unit at the Okinawa Institute of Science and Technology Graduate University (OIST). "We know that coastal areas need to ready themselves for more intense hurricanes, but inland communities, who may not have the know-how or infrastructure to cope with such intense winds or heavy rainfall, also need to be prepared." Many studies have shown that climate change can intensify hurricanes—known as cyclones or typhoons in other regions of the world—over the open ocean. But this is the first study to establish a clear link between a warming climate and the smaller subset of hurricanes that have made landfall. The scientists analyzed North Atlantic hurricanes that made landfall over the past half a century. They found that, over the course of the first day after landfall, hurricanes now weaken almost twice as slowly as they did 50 years ago. "When we plotted the data, we could clearly see that the amount of time it took for a hurricane to weaken was increasing with the years. But it wasn't a straight line—it was undulating—and we found that these ups and downs matched the same ups and downs seen in sea surface temperature," said Lin Li, first author and Ph.D. student in the OIST Fluid Mechanics Unit. 
The scientists tested the link between warmer sea surface temperature and slower weakening past landfall by creating computer simulations of four different hurricanes and setting different temperatures for the surface of the sea. Once each virtual hurricane reached category 4 strength, the scientists simulated landfall by cutting off the supply of moisture from beneath. Li explained: "Hurricanes are heat engines, just like engines in cars. In car engines, fuel is combusted, and that heat energy is converted into mechanical work. For hurricanes, the moisture taken up from the surface of the ocean is the 'fuel' that intensifies and sustains a hurricane's destructive power, with heat energy from the moisture converted into powerful winds. Making landfall is equivalent to stopping the fuel supply to the engine of a car. Without fuel, the car will decelerate, and without its moisture source, the hurricane will decay." The researchers found that even though each simulated hurricane made landfall at the same intensity, the ones that developed over warmer waters took more time to weaken. [Image caption: The scientists found a strong correlation between the time it took for a hurricane to weaken after landfall and sea surface temperature, when both were plotted by year. Credit: OIST] "These simulations proved what our analysis of past hurricanes had suggested: warmer oceans significantly impact the rate that hurricanes decay, even when their connection with the ocean's surface is severed. The question is why," said Prof. Chakraborty. Using additional simulations, the scientists found that "stored moisture" was the missing link. The researchers explained that when hurricanes make landfall, even though they can no longer access the ocean's supply of moisture, they still carry a stock of moisture that slowly depletes. 
When the scientists created virtual hurricanes that lacked this stored moisture after hitting land, they found that the sea surface temperature no longer had any impact on the rate of decay. "This shows that stored moisture is the key factor that gives each hurricane in the simulation its own unique identity," said Li. "Hurricanes that develop over warmer oceans can take up and store more moisture, which sustains them for longer and prevents them from weakening as quickly." The increased level of stored moisture also makes hurricanes "wetter"—an outcome already being felt as recent hurricanes have unleashed devastatingly high volumes of rainfall on coastal and inland communities. This research highlights the importance for climate models to carefully account for stored moisture when predicting the impact of warmer oceans on hurricanes. The study also pinpoints issues with the simple theoretical models widely used to understand how hurricanes decay. "Current models of hurricane decay don't consider moisture—they just view hurricanes that have made landfall as a dry vortex that rubs against the land and is slowed down by friction. Our work shows these models are incomplete, which is why this clear signature of climate change wasn't previously captured," said Li. The researchers now plan to study hurricane data from other regions of the world to determine whether the impact of a warming climate on hurricane decay is occurring around the globe. Prof. Chakraborty concluded: "Overall, the implications of this work are stark. If we don't curb global warming, landfalling hurricanes will continue to weaken more slowly. Their destruction will no longer be confined to coastal areas, causing higher levels of economic damage and costing more lives."
10.1038/s41586-020-2867-7
Biology
Researchers identify gene that controls soybean seed permeability, calcium content
Nature Genetics, DOI: 10.1038/ng.3339 Journal information: Nature Genetics
http://dx.doi.org/10.1038/ng.3339
https://phys.org/news/2015-06-gene-soybean-seed-permeability-calcium.html
Abstract Loss of seed-coat impermeability was essential in the domestication of many leguminous crops to promote the production of their highly nutritious seeds. Here we show that seed-coat impermeability in wild soybean is controlled by a single gene, GmHs1-1 , which encodes a calcineurin-like metallophosphoesterase transmembrane protein. GmHs1-1 is primarily expressed in the Malpighian layer of the seed coat and is associated with calcium content. The transition from impermeability to permeability in domesticated soybean was caused by artificial selection of a point mutation in GmHs1-1 . Interestingly, a number of soybean landraces evaded selection for permeability because of an alternative selection for seed-coat cracking that also enables seed imbibition. Despite the single origin of the mutant allele Gmhs1-1 , the distribution pattern of allelic variants in the context of soybean population structure and the detected signature of genomic introgression between wild and cultivated soybeans suggest that Gmhs1-1 may have experienced reselection for seed-coat permeability. Main Many wild leguminous species produce seeds with variable seed-coat impermeability or hard-seededness as a mechanism for maintaining seed dormancy and viability for long periods 1 , 2 , and hard-seededness is considered essential for the long-term survival of wild species 3 , 4 . However, hard-seededness impedes seed production in agriculture. Therefore, seed-coat permeability, which allows for rapid and uniform seed germination, was one of the key traits targeted in the domestication of many leguminous crops 5 , 6 , 7 , 8 , 9 , 10 . One of these was soybean, which is now the most economically important crop in the world. It is believed that soybean was domesticated from its wild relative Glycine soja in China ∼ 5,000 years ago, resulting in a diversity of landraces with permeable seed coats 11 , 12 , 13 . 
Today, hard-seededness continues to impede the utilization of wild germplasm for cultivar enhancement. However, a moderate level of hard-seededness is important for the quality of stored soybeans and for their viability in the southern United States and the tropics, where seeds lose viability within a short period of time after being harvested 11 , 14 . In addition, hard-seededness is associated with calcium content in the seed coat 15 , 16 , 17 and thus can potentially enhance the nutritional value of soy-based foods. Previous studies mapped a common quantitative trait locus (QTL) underlying hard-seededness to an overlapping region on soybean chromosome 2 (refs. 8 , 11 , 17 , 18 ), but the genes responsible for hard-seededness have not been identified in any species. To understand the molecular basis of hard-seededness, we crossed the permeable soybean cultivar Williams 82 with each of two hard-seeded G. soja accessions, PI 468916 and PI 479752, and obtained two F 2 populations. The F 1:2 seeds, whose coats developed from the maternal tissues of the F 1 plants, were hard-seeded ( Fig. 1a,b ). Phenotyping of F 2:3 seeds from individual F 2 plants from the two populations revealed 3:1 ratios of hard-seededness to permeability ( Fig. 1c,d and Supplementary Table 1 ), suggesting that the former is dominant over the latter, and showed that the trait was controlled mainly by a single locus, designated GmHs1-1 . Figure 1: Hard-seededness and seed-coat permeability of parental soybean lines and their progeny. ( a ) Photographic illustration of hard-seededness of PI 468916 ( G. soja ), seed-coat permeability of Williams 82 ( G. max ) and hard-seededness of F 1:2 seeds from a (Williams 82 × PI 468916) F 1 plant at 0, 2 and 4 h. ( b ) Proportions of seeds from Williams 82 plants (purple diamonds), PI 468916 plants (blue circles) and their F 1:2 progeny (pink triangles) that imbibed water at multiple time points over the course of 12 d. 
( c , d ) Phenotypic segregation of hard-seededness and seed-coat permeability in two subsets of the mapping population used in initial mapping of the GmHs1-1 locus. Full size image An initial scan revealed a linkage between GmHs1-1 and the markers defining the common QTL region on chromosome 2 (refs. 8 , 11 , 17 , 18 ) ( Supplementary Table 2 ), suggesting that the hard-seededness investigated in this and previous studies is likely to be controlled by the same locus. Additional markers were used to identify recombinants between markers and the GmHs1-1 locus in the two populations ( Fig. 2a ), and finally to fine map GmHs1-1 to a 22-kb region harboring two genes, Glyma02g43700.1 and Glyma02g43710.1 ( Fig. 2a ), according to the Williams 82 reference genome 19 . Figure 2: Map-based cloning of the GmHs1-1 locus and candidate gene–association analysis. ( a ) Physical locations of markers defining the GmHs1-1 region that harbors two genes according to the soybean reference genome, numbers of recombinants carrying crossovers as determined by molecular markers and phenotypes of individual recombinants, and seven sites in the coding region of the GmHs1-1 candidate locus showing nucleotide differences between Williams 82 and the two G. soja parental lines that resulted in amino acid changes. Pink and black boxes indicate exons, gray boxes indicate 3′ untranslated regions (UTRs) and the gray boxes ending in arrowheads indicate 3′ UTRs and the transcriptional orientations of the two genes. ( b ) Sequence comparison of the candidate GmHs1-1 locus between Williams 82 and ten G. soja accessions. Sequence variations in exons and introns, sequence variations in flanking regions 2.5 kb upstream and 1.5 kb downstream and the frequencies of these variations in the ten G. soja accessions are indicated by purple, blue and gray bars, respectively. The seven sites shown in a are marked here in the same order by asterisks. 
( c ) Expression levels of Glyma02g43700.1 relative to those of an actin-expressing gene in the two parental lines at five seed developmental time points as detected by quantitative real-time PCR. Expression levels shown are the mean ± s.e.m. of three biological replicates. WPA, weeks post-anthesis. ( d ) Predicted conserved metallophosphatase-phosphodiesterase/alkaline phosphatase D (MPP_PhoD) domain in the Glyma02g43700.1 protein. The asterisk indicates the amino acid switch site caused by the C>T mutation. Full size image We then sequenced Glyma02g43700.1 and Glyma02g43710.1 in the G. soja parents. Seven polymorphic sites that each resulted in amino acid changes between PI 468916 and Williams 82 were detected in Glyma02g43700.1 ( Fig. 2a ). Only one of the seven polymorphisms, a (C>T) point mutation in Glyma02g43700.1 , was detected as a difference between PI 479752 and Williams 82 ( Fig. 2a ). This C>T mutation, which resulted in a change from threonine to methionine, is also the only mutation in this gene, including in its introns and exons and its flanking ∼ 2.5-kb and ∼ 1.5-kb regions, that could be used to distinguish Williams 82 from the two G. soja parents and eight additional G. soja accessions previously sequenced 20 , 21 , among which 88 variant sites were found ( Fig. 2b ). By contrast, no nucleotide differences associated with amino acid changes were observed between the G. soja parents and Williams 82 in Glyma02g43710.1 . Glyma02g43700.1 was primarily expressed in developing seed coats, and its expression level in PI 468916 was much higher than in Williams 82, particularly at the stage of 4–5 weeks after anthesis ( Fig. 2c ). No expression of Glyma02g43710.1 was detected ( Supplementary Fig. 1 ). These observations suggest that Glyma02g43700.1 is most likely to be the GmHs1-1 locus. Orthologs or homologs of Glyma02g43700.1 have been found in many other plants ( Supplementary Table 3 ), but none of them has been shown to have any known functions. 
Nevertheless, Glyma02g43700.1 was predicted to encode a calcineurin-like metallophosphoesterase transmembrane protein ( Fig. 2d ) localized to cellular membranes ( Fig. 3a,b ). The amino acid switch resulting from the C>T mutation was predicted to be located outside of membranes ( Supplementary Fig. 2a ) and to affect the α-helix of the protein structure ( Supplementary Fig. 2b,c ). The transcripts of Glyma02g43700.1 were predominantly abundant in the Malpighian layer of the seed coat, particularly in the lucent region of Malpighian cell walls separating Malpighian terminal caps from their basal parts 22 , 23 ( Fig. 3c,d ). This so-called light line is thought to be essential for hard-seededness 24 . Figure 3: Subcellular localization and tissue-specific expression of GmHs1-1. ( a ) Subcellular localization of the GmHs1-1–GFP fusion protein in tobacco epidermal cells under control of the 35S promoter as observed with a dark field for green fluorescence (upper right) and with subcellular localization of GFP with salt treatment (bottom right). The same cells with a bright field for cell morphology (upper left) and merging of cell morphology and GmHs1-1 localization (bottom left) are also shown. Scale bars, 25 μm. ( b ) Protein blots detected GmHs1-1 in cellular membrane. Anti-GFP was used to detect the GmHs1-1–GFP fusion protein in tobacco epidermal cells. Anti-GAPDH that bound to soluble protein and anti–H + -ATPase that bound to cellular membrane were used as controls. ( c ) Structural comparison of seed coats comprising four distinguishable layers (the waxy cuticle, the thick-walled Malpighian cells that form the palisade layer, the osteosclerid cells and the interior parenchyma) between two parental lines. Scale bars, 25 μm. ( d ) In situ hybridization for GmHs1-1 performed with RNA probes in both antisense and sense directions to detect nonspecific binding. Scale bars, 25 μm. 
Full size image The genomic sequence of Glyma02g43700.1 from PI 468916 was introduced into the permeable soybean cultivar Mustang. As exemplified in Figure 4a , in each of the three transformation events studied, T 1 plants with the transgene produced hard seeds, and the transgene segregated in the T 2 progeny, resulting in phenotypic segregation ( Fig. 4b,c ). These observations confirmed that Glyma02g43700.1 was the GmHs1-1 locus. As observed in other legumes 22 , 25 , 26 , the seed coat of G. soja —particularly the Malpighian layer—is thicker than that of Glycine max ( Fig. 3c ), but no difference in the thickness of the Malpighian layer was observed between progeny with and without the transgene ( Fig. 4d ). Figure 4: Complementation test and characterization of GmHs1-1 . ( a ) Phenotypic comparison between T 1 seeds with the GmHs1-1 transgene (TP42-T 1 – GmHs1-1 ) and T 1 seeds without the transgene (TP44-T 1 –no) after 2 h in water. TP42-T 1 and TP44-T 1 were derived from two transformation events of the cultivar Mustang ( Gmhs1-1 or Gmhs1-1 ), but the former carried the transgene and the latter did not. ( b ) Phenotypic segregation of T 2 seeds derived from TP42 resulting from segregation of the transgene. ( c ) Proportion of T 2 seeds from TP42-T 2 plants with the transgene and TP42-T 2 plants without the transgene that were impermeable at various time points. ( d ) Comparison of the Malpighian layers of seed coats in seeds from TP42-T 2 – GmHs1-1 plants with the transgene and TP42-T 2 –no plants without the transgene. Scale bars, 15 μm. ( e ) Comparison of calcium content in seed coats of PI 468916 (line 1), Williams 82 (line 2) and T 2 progenies without (line 3) and with (line 4) the transgene derived from TP42. Data are shown as mean ± s.e.m. from three biological replicates. Significant differences (labeled a, b and c) were detected by one-way analysis of variance; α = 0.05. ( f ) Relative expression (mean ± s.e.m. 
from three biological replicates) of the GmHs1-1 locus in PI 468916 (line 1), Williams 82 (line 2) and T 2 progenies without (line 3) and with (line 4) the transgene derived from TP42. ( g ) Relative expression (mean ± s.e.m. from three biological replicates) of the GmHs1-1 locus in a permeable RIL (line 1) and a hard-seeded RIL (line 2) derived from Williams 82 × PI 468916 and two hard-seeded RILs (lines 3 and 4) derived from Williams 82 × PI 479752. Seed coats were examined at 5 weeks after anthesis. Full size image It has been documented that calcineurin is highly conserved from yeast to mammals (but not in plants) and is an important mediator participating in a wide variety of cellular processes and Ca 2+ -dependent signal-transduction pathways in yeast and mammals 27 , 28 , 29 . Despite the lack of significant sequence similarity between these calcineurins and GmHs1-1 ( Supplementary Table 3 ), the calcium content of seed coats in the GmHs1-1 transgenic lines was significantly increased compared to that in lines without the transgene ( Fig. 4e ), indicating an association of calcium content with the function(s) of GmHs1-1 for hard-seededness. This association further suggests that, similar to the calcineurins investigated in yeast and mammals, GmHs1-1 in soybean may also function in a Ca 2+ -dependent manner, although the cellular processes involved in the development of hard-seededness remain unknown. The expression level of GmHs1-1 or Gmhs1-1 in the transgenic lines was higher than that of Gmhs1-1 in the lines without the transgene from the same transformation event, but lower than in the G. soja donor ( Fig. 4f ). In addition, in several hard-seeded recombinant inbred lines (RILs) derived from the mapping populations, GmHs1-1 was expressed at a level slightly higher than in the transgenic lines but much lower than in the G. soja parents ( Fig. 4g ). 
As the GmHs1-1 alleles in these three categories of soybean lines are actually identical, the observed variations in the abundance of GmHs1-1 transcripts and the degree of hard-seededness would be largely explained by the different genetic backgrounds of these lines. Indeed, because of the existence of a few minor QTLs underlying hard-seededness in G. soja 8 , 11 , 17 , 18 , in addition to the GmHs1 locus, hard-seededness exhibits a somewhat quantitative trait-inheritance pattern in both mapping populations, as illustrated in Figure 1c,d . On the basis of recent resequencing data from a soybean population 30 , 36 accessions (4 G. soja and 32 G. max ) were predicted to share an identical promoter of the GmHs1-1 locus. We sequenced the promoter and coding sequences (CDSs) of the 4 G. soja accessions and of 1 accession (PI 594777) randomly selected from the 32 G. max accessions ( Supplementary Fig. 3a ) and found that the promoter of PI 594777 was actually identical to that of the G. soja accession (GsojaD) used to establish a G. soja pan-genome 21 ( Supplementary Fig. 3b ). These two accessions differed only at the (C>T) mutation site in their CDSs and exhibited degrees of permeability and hard-seededness similar to those shown by Williams 82 and PI 468916, respectively. Two of the four G. soja accessions shared an identical promoter with Williams 82 and Mustang ( Supplementary Fig. 3b ). In addition to the promoter region, identical intronic sequences and 3′ terminator regions in two G. soja accessions and a number of landraces were also revealed by the resequencing data 30 . These observations, together with the association analysis ( Fig. 2b ), further validated that the (C>T) mutation was causative for the phenotypic transition. To understand the process of selection for Gmhs1-1 during domestication, we investigated the seven mutation sites in the CDS of the GmHs1-1 locus ( Figs. 
2a and 5a ) in a representative soybean population 31 , 32 ( Supplementary Table 4 and Supplementary Fig. 4 ). These accessions were grouped into five haplotypes ( Fig. 5a ). An association test revealed the strongest signal at the C>T mutation site, which actually showed a complete association with this key phenotypic transition ( Fig. 5a and Supplementary Fig. 4 ). Of the 89 cultivated accessions, 83 were found to carry Gmhs1-1 , and 6 (landraces) carried GmHs1-1 ( Fig. 5a and Supplementary Table 4 ). Interestingly, large proportions (55%–90%) of the seeds of these six landraces show seed-coat cracking ( Fig. 5b and Supplementary Fig. 3b ), an unfavorable seed trait that is moderated by a dominant QTL on chromosome 9 (ref. 33 ), and are thus phenotypically 'permeable' 34 ( Fig. 5b and Supplementary Table 4 ). Nevertheless, intact seeds of these six landraces are hard-seeded, although the degree of their impermeability was lower than that observed in G. soja ( Fig. 5b,c and Supplementary Fig. 3b ). Figure 5: Haplotype expression and association analyses at the GmHs1-1 locus. ( a ) Genotypes, phenotypes, haplotypes and association analyses with mutation sites in the coding region of the GmHs1-1 locus of a soybean population that included 28 G. soja accessions (GS), 49 landraces (LR), 17 North American ancestors (NA) and 23 North American cultivars (NC), as listed in Supplementary Table 4 . Phenotypes: H, hard-seededness; P, permeability; P/C, permeability caused by seed-coat cracking. Asterisk indicates hard-seeded landraces, each with a large proportion of seeds showing seed-coat cracking. In graph at bottom, black circles represent mutations not associated with the phenotypic transition, and the filled red circle represents the causative mutation associated with the phenotypic transition. ( b ) Exemplification of the phenotypes of hard-seeded landrace PI 567364 in comparison with PI 468916 and Williams 82. 
( c ) Phenotyping of the soybean population as shown in a . Given percentages of impermeable seeds of individual accessions were phenotyped at different time points over the course of 3 d. For each of the hard-seeded landraces (asterisk), only seeds without obvious seed-coat cracking were phenotyped. ( d ) Expression of the GmHs1-1 locus in G. soja accessions PI 483464, PI 339871A and PI 549046; in hard-seeded landraces PI 603420 and PI 567364, each with a proportion of seeds showing seed-coat cracking; in permeable landrace PI 594777; and in two cultivated varieties, Williams 82 and Mustang. Values shown represent the mean ± s.e.m. from three biological replicates. Seed coats were examined at 5 weeks after anthesis. Full size image The expression levels of GmHs1-1 in the hard-seeded landraces were slightly higher than or similar to those observed in Williams 82 and a permeable landrace (PI 594777) ( Fig. 5d ) whose Gmhs1-1 allele shares a promoter with a G. soja accession (GsojaD) ( Supplementary Fig. 3b ). This indicated that the variation in GmHs1-1 expression was unlikely to be responsible for the transition from hard-seededness to permeability, although the extremely high GmHs1-1 expression seemed to be associated with the extremely high degree of hard-seededness in G. soja . One of the simplest explanations for such an association would be background effects on the expression of GmHs1-1 —for example, the minor QTLs underlying hard-seededness in G. soja 8 , 11 , 17 , 18 . It is likely that those minor QTLs have been eliminated from the hard-seeded landraces, resulting in reduced GmHs1-1 expression and functionality. Nevertheless, it is obvious that the C>T mutation was the major cause for the transition from hard-seededness to permeability, which seems to have been accompanied by the loss of those G. soja –specific minor QTLs in most cultivated soybeans through domestication. 
Apparently these hard-seeded landraces escaped selection for Gmhs1-1 , perhaps because of selection for seed-coat cracking as an alternative means to enable seed imbibition ( Fig. 5b ). We also genotyped a mini-core collection of landraces in China 13 , 32 . Of those 195 landraces, 186 carried Gmhs1-1 , and the remaining 9 harbored GmHs1-1 ( Supplementary Table 5 ). These nine GmHs1-1 –expressing landraces were dispersed in all major clades of the collection's population structure ( Supplementary Fig. 5a ) and geographically distributed in all major soybean eco-regions in China ( Supplementary Fig. 5b ). These observations indicate a single origin of Gmhs1-1 , a conclusion that is also supported by the presence of an ∼ 160-kb selective sweep surrounding the GmHs1-1 locus ( Supplementary Fig. 6a,b ) and by the contrasting patterns of linkage disequilibrium in the selective-sweep region between G. soja and G. max subpopulations ( Supplementary Fig. 7 ) revealed by soybean resequencing data 30 . The existence of GmHs1-1 in a small number of highly diverged landraces could be explained by gene flow or genomic introgression between G. max and G. soja . Indeed, signatures of genomic introgression from G. soja to landraces carrying GmHs1-1 were detected ( Supplementary Fig. 6c–j ). Such introgression, which must have been followed by selection or reselection for favorable traits such as seed-coat permeability, would have favored the creation and proliferation of the 'elite' landraces for ancient agriculture. Methods Plant materials. Soybean parental lines were from the USDA Soybean Germplasm Collection. Two populations that consisted of 4,085 F 2 -derived F 3 progenies were generated, with 2,232 from a Williams 82 × PI 468916 cross and 1,853 from a Williams 82 × PI 479752 cross. The F 3 lines derived from individual F 2 recombinants were advanced to produce F 3:4 seeds for elucidation and/or validation of the genotypes of the recombinants in the GmHs1-1 region. 
In addition, a nearly complete set of RILs from the two mapping populations was developed via the single-seed-descent method, and some of the RILs were used to validate the phenotypes from the earlier generations and for quantitative real-time PCR (qRT-PCR) analysis. The soybean population 31 , 32 used for analyses of allelic variation and association tests and a mini-core collection of Chinese soybean landraces 13 , 32 used for analyses of causative mutations are listed in Supplementary Tables 4 and 5 . Phenotyping. Seed coats develop from maternal tissues; thus we used the phenotypes of seed coats of the F 2 plants to deduce the genotypes of the F 1 plants at the locus controlling hard-seededness, the phenotypes of seed coats of the F 3 plants from individual F 2 plants to deduce the genotypes of the respective F 2 plants and so forth. All G. soja accessions investigated in this study, including the two parental lines, showed complete impermeability to water and no imbibition after 2–3 d in water at room temperature ( Figs. 1b and 5c ). Even after 12 d in water, only ∼ 20% of these G. soja accessions had imbibed water. By contrast, Williams 82, Mustang and the other cultivated soybeans included in this study started to imbibe water as quickly as 15–45 min after being placed in water and reached 90% imbibition after 2 h at room temperature ( Fig. 5c ). Thus the hard-seededness and seed-coat permeability of these germplasms were distinguishable as early as 45 min and up to 12 d or even longer after the seeds were introduced to water. The final phenotypes of these germplasms were defined on the basis of the performance of 20 seeds of each line after 2 h in water. Hard-seededness and impermeability were defined as ≤20% of an accession showing imbibition, whereas seed-coat permeability was defined as ≥80% of an accession showing imbibition. 
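The scoring rule above (20 seeds per line, imbibition assessed after 2 h in water, ≤20% imbibed = hard-seeded, ≥80% imbibed = permeable) can be expressed as a small helper. This is an illustrative sketch only; the function name and the 'intermediate' label for lines falling between the two thresholds are our own conventions, not the paper's.

```python
def classify_seed_coat(n_imbibed, n_seeds=20):
    """Classify a line from the fraction of seeds imbibing after 2 h in water.

    Thresholds follow the phenotyping rule in the text: <=20% imbibed
    means hard-seeded (impermeable), >=80% imbibed means permeable.
    Lines in between get an 'intermediate' label (our own convention).
    """
    fraction = n_imbibed / n_seeds
    if fraction <= 0.20:
        return "hard-seeded"
    if fraction >= 0.80:
        return "permeable"
    return "intermediate"
```

With 20 seeds per line, 2 imbibed seeds would score as hard-seeded and 18 as permeable, matching the definitions in the text.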
To maximize the likelihood of imbibition being the result of seed-coat permeability rather than seed-coat cracking or damage, only seeds with intact seed coats were chosen for evaluations of hard-seededness, including seeds of landraces expressing GmHs1-1 , which normally exhibit seed-coat cracking ( Fig. 5c ). Although 5% of the F 1:2 seeds derived from the maternal tissue of F 1 plants began imbibing in 40 min, ∼ 80% remained impermeable after 1 d in water. Thus, we concluded that hard-seededness is dominant over permeability, although the observed dominance seemed to be affected by other minor QTLs such as those previously described 8 , 11 , 17 , 18 . As a result, the F 2:3 seeds derived from individual F 2 plants showed a fairly continuous distribution pattern of phenotypic segregation for hard-seededness and seed-coat permeability, as shown in Figure 1c,d . This is a typical inheritance pattern for a trait controlled by a major QTL (i.e., GmHs1-1 ) that is likely to be affected by other minor QTLs underlying the same trait. On the basis of these observed patterns of phenotypic segregation in the mapping populations and subsequent molecular characterization of the GmHs1-1 locus, we proposed that the substantially high degree of hard-seededness in the G. soja accessions ( Fig. 5c ) was achieved by enhanced transcription of GmHs1-1 , which was perhaps caused by minor QTLs or other G. soja –specific activators interacting with GmHs1-1 . In the mapping populations, 3:1 ratios of hard-seededness to seed-coat permeability for the F 2 populations, as revealed by the F 2:3 seeds, were observed. The criterion for hard-seededness was <20% of F 2:3 seeds from each F 2 plant showing imbibition after 2 h in water ( Supplementary Table 1 ). Overall, the GmHs1-1 transgenic lines showed relatively lower levels of hard-seededness than the G. soja accessions, the F 1:2 seeds and many F 2:3 seeds from the mapping populations that carried the GmHs1-1 alleles. 
This was most likely because of the lack of the minor QTLs affecting hard-seededness in the transgenic lines. Nevertheless, as shown in Figure 4c , ≤50% seeds of all progeny lines with the transgene derived from the transformation events showed imbibition at 2 h, placing these lines into the category of hard-seededness. By contrast, all seeds of the transformation progeny lines without the transgene showed complete imbibition within 1 h, similar to Mustang, which was the line used for transformation. Molecular mapping. We performed an initial linkage analysis using representative F 2 plants from each of the two populations as shown in Figure 1c,d and Supplementary Table 1 . The genotypes of individual F 2 plants at the GmHs1-1 locus were deduced on the basis of the phenotypes of 20 F 3 seeds. DNA samples were isolated from leaves of individual F 2 plants. Simple sequence repeat (SSR) markers 2_1668 and 2_1697 linked to the previously mapped major QTL region on chromosome 2 (refs. 8 , 11 , 17 , 18 ) were found to be linked to the gene underlying hard-seededness in PI 468916 and PI 479752 ( Supplementary Table 2 ). Subsequently, additional SSR markers between 2_1668 and 2_1697 and SNP markers developed by sequencing of gene fragments in the parental lines were used to identify recombinants between individual markers and the GmHs1-1 locus in the two populations. To maximize the accuracy of molecular mapping, we used only recombinants with two extreme phenotypes—for example, ≥80% of seeds of a recombinant showing imbibition ('permeable') and ≤20% seeds of a recombinant showing imbibition ('hard-seeded')—in fine mapping. In particular, the phenotypes and genotypes of the four recombinants defining the 22-kb GmHs1-1 region were further validated by analysis of the F 3:4 progenies and RILs derived from these recombinants. DNA and RNA isolation, PCR, RT-PCR, qRT-PCR, sequencing and alignments. 
Genomic DNA isolation, PCR primer design, PCR amplification, SNP-based cleaved amplified polymorphic sequence (CAPS) marker development, PCR fragment purification, total RNA isolation, cDNA synthesis by RT-PCR, qRT-PCR and sequencing of PCR and RT-PCR fragments were conducted as previously described 32 , 35 . Primers used for PCR, RT-PCR, qRT-PCR and sequencing are listed in Supplementary Table 6 . These primers were designed on the basis of unique and consensus sequences, both publicly available and newly generated from the genes investigated in this study. In the qRT-PCR experiments, we used the actin-expressing gene as the internal control, and we normalized the relative expression levels of the GmHs1-1 locus in examined samples by setting the lowest expression level as 1.0. Alignment of nucleotide and amino acid sequences was done with MUSCLE 36 . Plasmid construction and transformation. The genomic DNA of the GmHs1-1 candidate gene including the 2.5-kb flanking region upstream of the start codon, the portion from the start codon to the stop codon and the 1.5-kb flanking region downstream of the stop codon was obtained from PI 468916 by PCR amplification of three fragments with the primers shown in Supplementary Table 6 . The amplified fragments were cloned into pCR2.1-TOPO TA vector (Life Technologies) and then sequenced. The selected pCR2.1-TOPO clone with the verified 2.5-kb upstream flanking sequence was digested with BamH I and ligated with the BamH I-digested pZY101.2 vector to form a construct dubbed Pro- Hs 1. Then the selected pCR2.1-TOPO clone with the portion of the gene from the start codon to the stop codon was digested with EcoR I and ligated with the EcoR I-digested Pro- Hs1 construct to form a second construct dubbed Pro- Hs1 :Gene- Hs1 . 
Finally the selected pCR2.1-TOPO clone with the 1.5-kb downstream flanking region was digested with Spe I and ligated with Spe I-digested Pro- Hs1 :Gene- Hs1 to form the final construct, which harbored the GmHs1-1 candidate gene cassette regulated by the putative endogenous promoter that resides in the 2.5-kb flanking region upstream from the start codon and terminated with the 1.5-kb flanking region downstream of the candidate gene (dubbed Pro- Hs1 :Gene- Hs1 :Ter- Hs1 ). The final construct was confirmed by digestion with relevant restriction enzymes and by sequencing, and a confirmed construct was introduced into Agrobacterium tumefaciens and subsequently transferred into the soybean cultivar Mustang, which has a permeable seed coat, according to a previously described protocol 37 . The transformation experiments were conducted at the University of Missouri Plant Transformation Core Facility. The presence of the final construct in recovered transgenic plants was confirmed with CAPS and insertion-deletion markers used to detect the presence of the GmHs1-1 candidate allele and by sequencing of the amplified insert-vector junction fragments with the primers shown in Supplementary Table 6 . T 1 plants were further advanced to T 2 in a greenhouse, and subsequently the T 2 and T 3 lineages were phenotyped for hard-seededness. Subcellular localization. For subcellular localization, the GmHs1-1 CDS from PI 468916 was obtained by RT-PCR with a primer set (GmHs1-1–GFP) ( Supplementary Table 6 ) and cloned into the pCR8/GW/TOPO entry vector (Invitrogen). The construct was then transformed into competent Escherichia coli cells. The plasmid DNA was isolated from a positive transformant and then subjected to LR recombination reaction with the Gateway destination vector pGWB405, which contains the 35S promoter and GFP, using Gateway LR ClonaseTM II Plus Enzyme Mix (Invitrogen). 
The fused plasmid was introduced to leaf epidermal cells of 3- to 4-week-old Nicotiana benthamiana plants by Agrobacterium infiltration according to a previously described protocol 32 . The transformed leaf cells were observed and photographed with a microscope (Nikon A1_MP). Protein blotting. Protein blotting was used to detect binding of the GmHs1-1 protein with cellular membrane 38 . Immunoprobing of head-GFP was conducted with polyclonal rabbit anti-GFP (2555, Cell Signaling; 1:3,000) in TBS; membrane protein anti–H + -ATPase (AS07 260, Agrisera, Sweden; 1:2,000) and solution protein anti-GAPDH (G8795, Sigma, USA; 1:5,000) were used as controls. Anti-rabbit IgG (7074, Cell Signaling; 1:10,000) conjugated with alkaline phosphatase was used as the secondary antibody with an enhanced chemiluminescence protein gel blot detection system (Amersham, Sweden). RNA in situ hybridization. RNA in situ hybridization with GmHs1-1 –specific probes was performed according to a previously described protocol 35 . A 135-bp fragment specific to the GmHs1-1 cDNA of PI 468916 was amplified with a primer set (Hs1-ISH) ( Supplementary Table 6 ) and then integrated into the pGEM-T Easy vector (Promega, USA). Digoxigenin-labeled sense and antisense probes were obtained from EcoR I-digested linear pGEM-T Easy vector with an integrated 135-bp insert by in vitro transcription with SP6 or T7 RNA polymerase (Roche, Germany) according to the manufacturer's protocol. Gene annotation and protein structure analysis. Homolog searches against the GenBank nonredundant protein database and the GenBank conserved-domain database were conducted to annotate genes and their conserved domains in the mapped regions. 
The structure of GmHs1-1 was predicted by I-TASSER, a bioinformatics method for predicting the three-dimensional structure of protein molecules on the basis of amino acid sequences 39 , and the topology of the GmHs1-1 protein was predicted by TMHMM 2.0, software that predicts transmembrane protein topology with a hidden Markov model 40 . Measurement of calcium content. Sample preparation and measurement of calcium content were done according to a published protocol 41 , with minor modifications. Seed-coat samples were ground with a plastic rod, and 0.25 ml of powdered seed coat from each sample was measured and deposited into a 16 × 110 mm borosilicate glass test tube. Then 2.5 mL of concentrated nitric acid (TraceMetal Grade, Lot No. 1114090, Fisher) was added and the ground sample was digested in the tube at room temperature overnight, after which the digested samples were heated to 105 °C for 2 h. After being cooled to room temperature, the samples were diluted to 0.8% nitric acid content with ultrapure 18.2 MΩ water from a Milli-Q system (Millipore). Finally, 10 mL of solution from each sample was used for measurement of calcium content by means of inductively coupled plasma mass spectrometry (Elan DRC-E, PerkinElmer). Association, linkage disequilibrium and nucleotide-diversity analyses. Association and linkage disequilibrium analyses were performed using TASSEL 42 . Nucleotide diversity (π) was calculated for each of the genes located on chromosome 2 with the SNP data from 302 resequenced soybean accessions 30 using a previously described method 31 . The selective sweep surrounding the GmHs1-1 region was identified via previously described methodology 43 . 
The ratios of the nucleotide diversity among the soybean accessions carrying the Gmhs1-1 allele (π Gmhs1-1 ) to the nucleotide diversity among the soybean accessions carrying the GmHs1-1 allele (π GmHs1-1 ) were used to identify regions with significantly lower levels of polymorphisms in the soybean accessions carrying the Gmhs1-1 allele. Genes with π GmHs1-1 less than 0.002 were excluded in the analysis of selective sweeps along chromosome 2. Detection of genomic introgression. The haplotypes in each of the four hard-seeded landraces that were identical by descent (IBD) to individuals within the G. soja and G. max subpopulations were identified via a previously described approach 44 , 45 . The matrix of SNPs from the 302 resequenced soybean accessions 30 and the positions of the SNPs according to the Williams 82 reference genome 19 served as inputs for the IBD detection pipeline. To estimate the frequency of the shared haplotypes in different regions of the genome, we divided the genome into bins of 10 kb and calculated the number of recorded IBD tracts between each of the four GmHs1-1 landraces and the two G. soja and G. max subpopulations per bin. As the total number of pairwise comparisons differed between the subpopulations, these numbers were normalized from 0 (no IBD detected) to 1 (IBD shared by all individuals within a subpopulation). The normalized IBD between each of the four landraces and the G. soja subpopulation (nIBD G.soja ) and the normalized IBD between each of the four landraces and G. max subpopulations (relative nIBD G.max ) were then used to calculate the relative IBD between the compared groups (rIBD = nIBD G.soja − nIBD G.max ). We profiled rIBD blocks along chromosomes in the order of the 10-kb bins to find putative genomic introgression from G. soja to each of the four landraces. URLs. GenBank Conserved Domain Database, ; GenBank Protein Database, ; I-TASSER, ; MUSCLE, ; TASSEL 5.0, ; TMHMM (2.0), . Accession codes. 
Sequence data from this article have been deposited in NCBI GenBank under accession codes KP698733 , KP698734 , KP698735 , KP698736 and KR106134 , KR106135 , KR106136 , KR106137 , KR106138 , KR106139 , KR106140 , KR106141 , KR106142 , KR106143 , KR106144 , KR106145 , KR106146 , KR106147 , KR106148 , KR106149 , KR106150 , KR106151 , KR106152 , KR106153 , KR106154 , KR106155 , KR106156 , KR106157 .
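The rIBD profiling described under 'Detection of genomic introgression' amounts to counting IBD tracts per 10-kb bin, normalizing each count by the number of pairwise comparisons within the subpopulation, and subtracting (rIBD = nIBD_G.soja − nIBD_G.max). A minimal sketch follows; the tract representation and binning details are our assumptions, and the published pipeline 44 , 45 is more involved.

```python
from collections import Counter

BIN_SIZE = 10_000  # genome divided into 10-kb bins, as in the text

def rIBD_profile(ibd_soja, ibd_max, n_soja, n_max, chrom_len):
    """Relative IBD per bin: rIBD = nIBD_G.soja - nIBD_G.max.

    ibd_soja / ibd_max: lists of (start, end) IBD tracts (bp) shared
    between one landrace and members of each subpopulation.
    n_soja / n_max: numbers of pairwise comparisons, used to normalize
    bin counts to the range 0 (no IBD) to 1 (IBD with every individual).
    """
    n_bins = chrom_len // BIN_SIZE + 1

    def binned_counts(tracts):
        counts = Counter()
        for start, end in tracts:
            for b in range(start // BIN_SIZE, end // BIN_SIZE + 1):
                counts[b] += 1  # this tract overlaps bin b
        return counts

    soja, gmax = binned_counts(ibd_soja), binned_counts(ibd_max)
    return [soja[b] / n_soja - gmax[b] / n_max for b in range(n_bins)]
```

Profiled along a chromosome, positive rIBD values flag bins where a landrace shares more haplotypes with the G. soja subpopulation than with G. max, i.e. candidate introgression blocks.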
Purdue University researchers have pinpointed the gene that controls whether soybean seed coats are hard or permeable, a finding that could be used to develop better varieties for southern and tropical regions, enrich the crop's genetic diversity and boost the nutritional value of soybeans. Jianxin Ma (Jen-SHIN' Ma), associate professor of agronomy, and fellow researchers found that a mutation in the gene GmHs1-1 causes the tough seed coats of wild soybeans to become permeable. Farmers selected that trait about 5,000 years ago in a key step to domesticating soybeans from their hard-seeded relative Glycine soja. The gene could be modified to produce improved varieties for growing regions in which seed permeability can be a handicap, Ma said. GmHs1-1 is also associated with the calcium content of soybeans, offering a genetic target for enhancing the nutrition of soy food products. Understanding the mechanism that determines seed permeability could also give researchers better access to the largely untapped genetic diversity of wild soybeans to enrich cultivated varieties, whose lack of genetic richness has curbed improvements in yields. "This is the first gene associated with hard seededness to be identified in any plant species," Ma said. "This discovery could help us quickly pinpoint genes that control this trait in many other plants. We're also excited about the potential applications for modifying the calcium concentration in seed coats. This could be transformative as we identify similar genes that control calcium levels in other legumes." Hard seededness enables the long-term survival of many wild plant species by protecting seeds in severe conditions and inhospitable environments, allowing them to remain dormant until conditions are right for germination. Encased in a water- and airtight coat, seeds can remain viable for extended periods of time, in some cases, more than 100 years. 
But the hard skin that lends wild seeds their resilience is a problem in agricultural production. It prevents seeds from germinating quickly and in a uniform, predictable pattern. Wild soybean seeds take from several weeks to months to germinate whereas cultivated soybean seeds can begin absorbing water in 15 minutes. Millennia ago, farmers in Asia recognized the value of seed permeability and artificially selected the trait to produce the predecessors of modern cultivated soybean varieties, Ma said. But the genetic factors underpinning seed coat permeability remained a mystery until Ma and his team used a map-based cloning approach to home in on GmHs1-1 as the gene responsible for hard seededness. The team found that a mutation in a single pair of nucleotides in the gene causes seed coat permeability - that is, a change in one pair out of the approximately 1 billion base pairs that make up the soybean genome. "We finally understand the genetic change that allowed the domestication of soybeans," he said. "When we make this kind of discovery, we're always very excited." Ma said modifying the gene could produce hardier seeds for the southern U.S. and the tropics, regions in which the soft coats of cultivated soybeans reduce their viability shortly after harvest. The discovery could also help researchers improve the cooking quality of soybeans and other legumes, such as the common bean, whose varying levels of hard seededness make consistent quality difficult to achieve. The team's next goal is to identify genes that interact with GmHs1-1 and understand how they work together to control calcium and possibly other mineral content. The paper was published in Nature Genetics on Monday (June 22) and is available at dx.doi.org/10.1038/ng.3339
10.1038/ng.3339
Nano
IBM scientists demonstrate in-memory computing with 1 million devices for applications in AI
Abu Sebastian et al. Temporal correlation detection using computational phase-change memory, Nature Communications (2017). DOI: 10.1038/s41467-017-01481-9 Journal information: Nature Communications
http://dx.doi.org/10.1038/s41467-017-01481-9
https://phys.org/news/2017-10-ibm-scientists-in-memory-million-devices.html
Abstract Conventional computers based on the von Neumann architecture perform computation by repeatedly transferring data between their physically separated processing and memory units. As computation becomes increasingly data centric and the scalability limits in terms of performance and power are being reached, alternative computing paradigms with collocated computation and storage are actively being sought. A fascinating such approach is that of computational memory where the physics of nanoscale memory devices are used to perform certain computational tasks within the memory unit in a non-von Neumann manner. We present an experimental demonstration using one million phase change memory devices organized to perform a high-level computational primitive by exploiting the crystallization dynamics. Its result is imprinted in the conductance states of the memory devices. The results of using such a computational memory for processing real-world data sets show that this co-existence of computation and storage at the nanometer scale could enable ultra-dense, low-power, and massively-parallel computing systems. Introduction In today’s computing systems based on the conventional von Neumann architecture (Fig. 1a ), there are distinct memory and processing units. The processing unit comprises the arithmetic and logic unit (ALU), a control unit and a limited amount of cache memory. The memory unit typically comprises dynamic random-access memory (DRAM), where information is stored in the charge state of a capacitor. Performing an operation (such as an arithmetic or logic operation), f , over a set of data stored in the memory, A , to obtain the result, f ( A ), requires a sequence of steps in which the data must be obtained from the memory, transferred to the processing unit, processed, and stored back to the memory. This results in a significant amount of data being moved back and forth between the physically separated memory and processing units. 
This costs time and energy, and constitutes an inherent bottleneck in performance. Fig. 1 The concept of computational memory. a Schematic of the von Neumann computer architecture, where the memory and computing units are physically separated. A denotes information stored in a memory location. To perform a computational operation, f ( A ), and to store the result in the same memory location, data is shuttled back and forth between the memory and the processing unit. b An alternative architecture where f ( A ) is performed in place in the same memory location. c One way to realize computational memory is by relying on the state dynamics of a large collection of memristive devices. Depending on the operation to be performed, a suitable electrical signal is applied to the memory devices. The conductance of the devices evolves in accordance with the electrical input, and the result of the operation can be retrieved by reading the conductance at an appropriate time instance Full size image To overcome this, a tantalizing prospect is that of transitioning to a hybrid architecture where certain operations, such as f , can be performed at the same physical location as where the data is stored (Fig. 1b ). Such a memory unit that facilitates collocated computation is referred to as computational memory. The essential idea is not to treat memory as a passive storage entity, but to exploit the physical attributes of the memory devices to realize computation exactly at the place where the data is stored. One example of computational memory is a recent demonstration of the use of DRAM to perform bulk bit-wise operations 1 and fast row copying 2 within the DRAM chip. A new class of emerging nanoscale devices, namely, resistive memory or memristive devices with their non-volatile storage capability, is particularly well suited for computational memory. In these devices, information is stored in their resistance/conductance states 3 , 4 , 5 , 6 . 
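The data-movement argument above can be made concrete with a toy transfer counter: computing f(A) in a von Neumann machine costs two trips across the memory bus (fetch, then write-back), whereas a computational memory applies f where A lives. The class names and the counting convention here are purely illustrative, not part of the paper.

```python
class VonNeumannMemory:
    """Toy model: computing f(A) shuttles the word across the bus twice."""

    def __init__(self, data):
        self.cells = dict(data)
        self.transfers = 0  # words moved between memory and CPU

    def compute(self, addr, f):
        value = self.cells[addr]    # memory -> processing unit
        self.transfers += 1
        result = f(value)           # the ALU does the work
        self.cells[addr] = result   # processing unit -> memory
        self.transfers += 1


class ComputationalMemory(VonNeumannMemory):
    """Toy model: f is applied where the data is stored; nothing crosses the bus."""

    def compute(self, addr, f):
        self.cells[addr] = f(self.cells[addr])  # in place, zero transfers


vn = VonNeumannMemory({0: 5})
cm = ComputationalMemory({0: 5})
for m in (vn, cm):
    m.compute(0, lambda a: a * a)
# both now hold 25 at address 0; only vn accumulated bus transfers
```

Over a long sequence of operations the transfer count, and hence the time and energy spent on data movement, grows linearly for the conventional scheme while staying at zero for the collocated one.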
An early proposal for the use of memristive devices for in-place computing was the realization of certain logical operations using a circuit based on TiO x -based memory devices 7 . The same memory devices were used simultaneously to store the inputs, perform the logic operation, and store the resulting output. Subsequently, more complex logic units based on this initial concept have been proposed 8 , 9 , 10 . In addition to performing logical operations, resistive memory devices, when arranged in a cross-bar configuration, can be used to perform matrix–vector multiplications in an analog manner. This exploits the multi-level storage capability as well as Ohm’s law and Kirchhoff’s law. Hardware accelerators based on this concept are now becoming an important subject of research 11 , 12 , 13 , 14 , 15 , 16 , 17 . However, in these applications, the cross-bar array of resistive memory devices serves as a non-von Neumann computing core and the results of the computation are not necessarily stored in the memory array. Besides the ability to perform logical operations and matrix–vector multiplications, another tantalizing prospect of computational memory is that of realizing higher-level computational primitives by exploiting the rich dynamic behavior of its constituent devices. The dynamic evolution of the conductance levels of those devices upon application of electrical signals can be used to perform in-place computing. A schematic illustration of this concept is shown in Fig. 1c . Depending on the operation to be performed, a suitable electrical signal is applied to the memory devices. The conductance of the devices evolves in accordance with the electrical input, and the result of the computation is imprinted in the memory array. One early demonstration of this concept was that of finding factors of numbers using phase change memory (PCM) devices, a type of resistive memory devices 18 , 19 , 20 . 
However, this procedure is rather sensitive to device variabilities and thus experimental demonstrations were confined to a small number of devices. Hence, a large-scale experimental demonstration of a high-level computational primitive that exploits the memristive device dynamics and is robust to device variabilities across an array is still lacking. In this paper, we present an algorithm to detect temporal correlations between event-based data streams using computational memory. The crystallization dynamics of PCM devices is exploited, and the result of the computation is imprinted in the very same memory devices. We demonstrate the efficacy and robustness of this scheme by presenting a large-scale experimental demonstration using an array of one million PCM devices. We also present applications of this algorithm to process real-world data sets such as weather data.

Results

Dynamics of phase change memory devices

A PCM device consists of a nanometric volume of phase change material sandwiched between two electrodes. A schematic illustration of a PCM device with mushroom-type device geometry is shown in Fig. 2a 21. In an as-fabricated device, the material is in the crystalline phase. When a current pulse of sufficiently high amplitude is applied to the PCM device (typically referred to as the RESET pulse), a significant portion of the phase change material melts owing to Joule heating. When the pulse is stopped abruptly, the molten material quenches into the amorphous phase because of the glass transition. In the resulting RESET state, the device is in a low-conductance state, as the amorphous region blocks the bottom electrode. The size of the amorphous region is captured by the notion of an effective thickness, u a , that also accounts for the asymmetric device geometry 22. PCM devices exhibit a rich dynamic behavior with an interplay of electrical, thermal and structural dynamics that forms the basis for their application as computational memory.
The electrical transport exhibits a strong field and temperature dependence 23. Joule heating and the thermal transport pathways ensure that there is a strong temperature gradient within the PCM device. Depending on the temperature in the cell, the phase change material undergoes structural changes, such as phase transitions and structural relaxation 24, 25.

Fig. 2 Crystallization dynamics. a Schematic of a mushroom-type phase change memory device showing the phase configurations. b Illustration of the crystallization dynamics. When an electrical signal with power P inp is applied to a PCM device, significant Joule heating occurs. The resulting temperature distribution across the device is determined by the thermal environment, in particular the effective thermal resistance, R th . The effective thickness of the amorphous region, u a , evolves in accordance with the temperature at the amorphous–crystalline interface, T int , and with the temperature dependence of crystal growth, v g . Experimental estimates of c R th and d v g .

In our demonstration, we focus on a specific aspect of the PCM dynamics: the crystallization dynamics capturing the progressive reduction in the size of the amorphous region due to the phase transition from amorphous to crystalline (Fig. 2b). When a current pulse (typically referred to as the SET pulse) is applied to a PCM device in the RESET state such that the temperature reached in the cell via Joule heating is high enough, but below the melting temperature, a part of the amorphous region crystallizes. At the nanometer scale, the crystallization mechanism is dominated by crystal growth due to the large amorphous–crystalline interface area and the small volume of the amorphous region 24.
The crystallization dynamics in such a PCM device can be approximately described by

$$\frac{\mathrm{d}u_{\mathrm{a}}}{\mathrm{d}t} = -v_{\mathrm{g}}\left(T_{\mathrm{int}}\right),$$ (1)

where v g denotes the temperature-dependent growth velocity of the phase change material; T int = R th ( u a ) P inp + T amb is the temperature at the amorphous–crystalline interface, and \(u_{\mathrm{a}}(0) = u_{\mathrm{a}_0}\) is the initial effective amorphous thickness 24. T amb is the ambient temperature, and R th is the effective thermal resistance that captures the thermal resistance of all possible heat pathways. Experimental estimates of R th and v g are shown in Fig. 2c, d, respectively 24. From the estimate of R th as a function of u a , one can infer that the hottest region of the device is slightly above the bottom electrode and that the temperature within the device decreases monotonically with increasing distance from the bottom electrode. The estimate of v g shows the strong temperature dependence of the crystal growth rate. Up to approx. 550 K, the crystal growth rate is negligible, whereas it is maximum at ~750 K. As a consequence of Eq. 1, u a progressively decreases upon the application of repetitive SET pulses, and hence the low-field conductance progressively increases. In subsequent discussions, the RESET and SET pulses will be collectively referred to as write pulses. It is also worth noting that in a circuit-theoretic representation, the PCM device can be viewed as a generic memristor, with u a serving as an internal state variable 26, 27, 28.

Statistical correlation detection using computational memory

In this section, we show how the crystallization dynamics of PCM devices can be exploited to detect statistical correlations between event-based data streams. This can be applied in various fields such as the Internet of Things (IoT), life sciences, networking, social networks, and large scientific experiments.
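Before turning to the detection scheme, the accumulation behavior implied by Eq. 1 can be made concrete with a short numerical sketch. The forward-Euler integration below uses a toy growth-velocity curve and a constant effective thermal resistance, so every parameter value (R_th, P_inp, the shape of v_g, the initial thickness) is an illustrative placeholder rather than the experimental estimate of Fig. 2c, d.

```python
# Forward-Euler sketch of Eq. 1: du_a/dt = -v_g(T_int), with
# T_int = R_th * P_inp + T_amb. All parameter values below are
# illustrative placeholders, not the experimental estimates of Fig. 2.

def v_g(T):
    """Toy crystal-growth velocity (m/s): negligible below ~550 K and
    above ~900 K, with a maximum near 750 K."""
    if T < 550.0 or T > 900.0:
        return 0.0
    return 0.05 * (1.0 - ((T - 750.0) / 150.0) ** 2)

def apply_set_pulse(u_a, P_inp, duration, R_th=3.0e6, T_amb=300.0, dt=1e-9):
    """Evolve the effective amorphous thickness u_a (m) during one SET pulse,
    assuming T_int is independent of u_a (constant R_th)."""
    for _ in range(int(round(duration / dt))):
        T_int = R_th * P_inp + T_amb            # interface temperature (K)
        u_a = max(0.0, u_a - v_g(T_int) * dt)   # Eq. 1, forward Euler
    return u_a

u_a = 20e-9                                      # initial amorphous thickness (m)
history = [u_a]
for _ in range(5):                               # repeated 50-ns SET pulses
    u_a = apply_set_pulse(u_a, P_inp=150e-6, duration=50e-9)
    history.append(u_a)
# u_a shrinks with each pulse, so the low-field conductance rises pulse by pulse
```

With the constant-R_th assumption, each identical pulse removes the same slice of amorphous material; the experimentally estimated u_a-dependence of R_th in Fig. 2c would make the per-pulse change vary as crystallization proceeds.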
For example, one could generate an event-based data stream based on the presence or absence of a specific word in a collection of tweets. Real-time processing of event-based data streams from dynamic vision sensors is another promising application area 29. One can also view correlation detection as a key constituent of unsupervised learning, where one of the objectives is to find correlated clusters in data streams. In a generic formulation of the problem, let us assume that there are N discrete-time binary stochastic processes arriving at a correlation detector (see Fig. 3a). Let X i = { X i ( k )} be one of the processes. Then X i ( k ) is a random variable with probabilities

$$P\left[X_i(k) = 1\right] = p,$$ (2)

$$P\left[X_i(k) = 0\right] = 1 - p,$$ (3)

for 0 ≤ p ≤ 0.5. Let X j be another discrete-time binary stochastic process with the same value of parameter p. Then the correlation coefficient of the random variables X i ( k ) and X j ( k ) at time instant k is defined as

$$c = \frac{\mathrm{Cov}\left[X_i(k), X_j(k)\right]}{\sqrt{\mathrm{Var}\left[X_i(k)\right]\mathrm{Var}\left[X_j(k)\right]}}.$$ (4)

Fig. 3 Temporal correlation detection. a Schematic of N stochastic binary processes, some correlated and the remainder uncorrelated, arriving at a correlation detector. b One approach to detect the correlated group is to obtain an uncentered covariance matrix. By summing the elements of this matrix along a row or column, we can obtain numerical weights corresponding to the N processes and can differentiate the correlated from the uncorrelated group based on their magnitudes. c Alternatively, the correlation detection problem can be realized using computational memory. Here each process is assigned to a single phase change memory device. Whenever the process takes the value 1, a SET pulse is applied to the PCM device.
The amplitude or the width of the SET pulse is chosen to be proportional to the instantaneous sum of all processes. By monitoring the conductance of the memory devices, we can determine the correlated group.

Processes X i and X j are said to be correlated if c > 0 and uncorrelated otherwise. The objective of the correlation detection problem is to detect, in an unsupervised manner, an unknown subset of these processes that are mutually correlated. As shown in Supplementary Note 1 and schematically illustrated in Fig. 3b, one way to solve this problem is by obtaining an estimate of the uncentered covariance matrix corresponding to the processes, denoted by

$$\hat R_{ij} = \frac{1}{K}\sum_{k=1}^{K} X_i(k)X_j(k).$$ (5)

Next, by summing the elements of this matrix along a row or column, we can obtain certain numerical weights corresponding to the processes, denoted by \(\hat W_i = \sum_{j=1}^{N} \hat R_{ij}\). It can be shown that if X i belongs to the correlated group with correlation coefficient c > 0, then

$$E\left[\hat W_i\right] = (N-1)p^2 + p + (N_{\mathrm{c}} - 1)\,c\,p(1-p).$$ (6)

N c denotes the number of processes in the correlated group. In contrast, if X i belongs to the uncorrelated group, then

$$E\left[\hat W_i\right] = (N-1)p^2 + p.$$ (7)

Hence, by monitoring \(\hat W_i\) in the limit of large K, we can determine which processes are correlated with c > 0. Moreover, it can be seen that with increasing c and N c , it becomes easier to determine whether a process belongs to a correlated group. We can show that this rather sophisticated problem of correlation detection can be solved efficiently using a computational memory module comprising PCM devices by exploiting the crystallization dynamics.
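A small numerical sketch can make Eqs. 5-7 concrete and, at the same time, illustrate the in-memory accumulation. In the sketch below, each correlated process copies a hidden reference process with probability √c, which is one standard construction yielding a pairwise correlation coefficient of c; the sizes N, N_c, p, c, K are illustrative choices, and the PCM device is idealized as a noiseless linear accumulator.

```python
import random
random.seed(0)

# Illustrative sizes, not the experimental values.
N, N_c, p, c, K = 50, 10, 0.3, 0.5, 4000
sqrt_c = c ** 0.5
CG = 1.0                       # lumped constant C * G of Eq. 8 (idealized)

delta_u = [0.0] * N            # idealized in-memory accumulation (Eq. 9)
R = [[0.0] * N for _ in range(N)]

for k in range(K):
    x_ref = 1 if random.random() < p else 0    # hidden reference process
    x = [x_ref if (i < N_c and random.random() < sqrt_c)
         else (1 if random.random() < p else 0)
         for i in range(N)]                    # first N_c are correlated
    M = sum(x)                                 # collective momentum M(k)
    for i in range(N):
        if not x[i]:
            continue
        delta_u[i] += CG * M                   # SET pulse of duration C*M(k)
        for j in range(N):
            R[i][j] += x[j] / K                # uncentered covariance (Eq. 5)

W = [sum(row) for row in R]                    # weights of Eqs. 6-7
avg_corr = sum(W[:N_c]) / N_c                  # carries extra (N_c-1)c p(1-p)
avg_unc = sum(W[N_c:]) / (N - N_c)             # approaches (N-1)p^2 + p
```

As Eq. 9 predicts, `delta_u[i]` equals K·C·𝒢·Ŵ_i up to floating-point rounding, so reading out the device state is equivalent to computing the covariance-based weights in software.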
By assigning each incoming process to a single PCM device, the statistical correlation can be calculated and stored in the very same device as the data passes through the memory. The way this is achieved is depicted schematically in Fig. 3c: at each time instance k, a collective momentum, \(M(k) = \sum_{j=1}^{N} X_j(k)\), that corresponds to the instantaneous sum of all processes is calculated. The calculation of M ( k ) incurs little computational effort, as it just counts the number of non-zero events at each time instance. Next, an identical SET pulse is applied, potentially in parallel, to all the PCM devices for which the assigned binary process has a value of 1. The duration or amplitude of the SET pulse is chosen to be a linear function of M ( k ). For example, let the duration of the pulse be \(\delta t(k) = CM(k) = C\sum_{j=1}^{N} X_j(k)\). For the sake of simplicity, let us assume that the interface temperature, T int , is independent of the amorphous thickness, u a . As the pulse amplitude is kept constant, \(v_{\mathrm{g}}(T_{\mathrm{int}}) = \mathscr{G}\), where \(\mathscr{G}\) is a constant. Then from Eq.
1, the absolute value of the change in the amorphous thickness of the i th phase change device at the k th discrete-time instance is

$$\delta u_{\mathrm{a}_i}(k) = \delta t(k)\,v_{\mathrm{g}}(T_{\mathrm{int}}) = C\mathscr{G}\sum_{j=1}^{N} X_j(k).$$ (8)

The total change in the amorphous thickness after K time steps can be shown to be

$$\Delta u_{\mathrm{a}_i}(K) = \sum_{k=1}^{K} \delta u_{\mathrm{a}_i}(k)\,X_i(k) = C\mathscr{G}\sum_{k=1}^{K}\sum_{j=1}^{N} X_i(k)X_j(k) = C\mathscr{G}\sum_{j=1}^{N}\sum_{k=1}^{K} X_i(k)X_j(k) = KC\mathscr{G}\sum_{j=1}^{N}\hat R_{ij} = KC\mathscr{G}\hat W_i.$$ (9)

Hence, from Eqs. 6 and 7, if X i is one of the correlated processes, then \(\Delta u_{\mathrm{a}_i}\) will be larger than if X i is one of the uncorrelated processes. Therefore, by monitoring \(\Delta u_{\mathrm{a}_i}\) or the corresponding resistance/conductance for all phase change devices, we can determine which processes are correlated.

Experimental platform

Next, we present experimental demonstrations of the concept. The experimental platform (schematically shown in Fig. 4a) is built around a prototype PCM chip that comprises 3 million PCM devices. More details on the chip are presented in the Methods section. As shown in Fig. 4b, the PCM array is organized as a matrix of word lines (WL) and bit lines (BL). In addition to the PCM devices, the prototype chip integrates the circuitry for device addressing and for write and read operations. The PCM chip is interfaced to a hardware platform comprising two field-programmable gate array (FPGA) boards and an analog-front-end (AFE) board.
The AFE board provides the power supplies as well as the voltage and current reference sources for the PCM chip. The FPGA boards are used to implement the overall system control and data management as well as the interface with the data processing unit. The experimental platform is operated from a host computer, and a Matlab environment is used to coordinate the experiments.

Fig. 4 Experimental platform and characterization results. a Schematic illustration of the experimental platform showing the main components. b The phase change memory array is organized as a matrix of word lines (WL) and bit lines (BL), and the chip also integrates the associated read/write circuitries. c The mean accumulation curve of 10,000 devices showing the map between the device conductance and the number of pulses. The devices achieve a higher conductance value with increasing SET current and also with increasing number of pulses. d The mean and standard deviation associated with the accumulation curve corresponding to the SET current of 100 μA. Also shown are the distributions of conductance values obtained after application of the 10th and 40th SET pulses.

An extensive array-level characterization of the PCM devices was conducted prior to the experimental demonstrations. In one experiment, 10,000 devices were arbitrarily chosen and were first RESET by applying a rectangular current pulse of 1 μs duration and 440 μA amplitude. After RESET, a sequence of SET pulses of 50 ns duration was applied to all devices, and the resulting device conductance values were monitored after the application of each pulse. The map between the device conductance and the number of pulses is referred to as the accumulation curve. The accumulation curves corresponding to different SET currents are shown in Fig. 4c. These results clearly show that the mean conductance increases monotonically with increasing SET current (in the range from 50 to 100 μA) and with increasing number of SET pulses.
From Fig. 4d, it can also be seen that a significant variability is associated with the evolution of the device conductance values. This variability arises from inter-device as well as intra-device variability. The intra-device variability is traced to the differences in the atomic configurations of the amorphous phase created via the melt-quench process after each RESET operation 30, 31. Besides the variability arising from the crystallization process, additional fluctuations in conductance also arise from 1/ f noise 32 and drift variability 33.

Experimental demonstration with a million processes

In a first demonstration of correlation detection, we created the input data artificially and generated one million binary stochastic processes organized in a two-dimensional grid (Fig. 5a). We arbitrarily chose a subset of 95,525 processes, which we mutually correlated with a relatively weak instantaneous correlation coefficient of 0.1, whereas the other 904,475 were uncorrelated. The objective was to see if we can detect these correlated processes using the computational memory approach. Each stochastic process was assigned to a single PCM device. First, all devices were RESET by applying a current pulse of 1 μs duration and 440 μA amplitude. In this experiment, we chose to modulate the SET current while maintaining a constant pulse duration of 50 ns. At each time instance, the SET current is chosen to be equal to 0.002 × M ( k ) μA, where \(M(k) = \sum_{j=1}^{N} X_j(k)\) is the collective momentum. This rather simple calculation was performed in the host computer. Alternatively, it could be done in one of the FPGA boards. Next, the on-chip write circuitry was instructed to apply a SET pulse with the calculated SET current to all PCM devices for which X i ( k ) = 1. To minimize the execution time, we chose not to program the devices if the SET current was less than 25 μA. The SET pulses were applied sequentially.
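The per-time-step programming rule just described can be sketched as follows. The helper names are hypothetical; only the constants stated above (0.002 μA per unit of momentum and the 25 μA programming cutoff) are taken from the experiment.

```python
# Sketch of the momentum-dependent programming rule described above.
# set_current_uA and devices_to_program are hypothetical helper names.

def set_current_uA(M):
    """SET current (in microamperes) as a linear function of momentum M(k)."""
    return 0.002 * M

def devices_to_program(x):
    """x: list of binary process values X_i(k) at one time instance.
    Returns the device indices to pulse and the common SET current."""
    M = sum(x)                      # collective momentum M(k)
    current = set_current_uA(M)
    if current < 25.0:              # skip programming below 25 uA
        return [], current
    return [i for i, xi in enumerate(x) if xi == 1], current

# e.g. a time step where 20,000 of the processes are active
x = [1] * 20000 + [0] * 10000
idx, current = devices_to_program(x)   # current is about 40 uA here
```

All selected devices receive the same pulse at each time step, which is what allows the write operation to be issued as a single broadcast-style command rather than per-device computation.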
However, if the chip has multiple write circuitries that can operate in parallel, then it is also possible to apply the SET pulses in parallel. This process of applying SET pulses was repeated at every time instance. The maximum SET current applied to the devices during the experiment was 80 μA.

Fig. 5 Experimental results. a A million processes are mapped to the pixels of a 1000 × 1000 pixel black-and-white sketch of Alan Turing. The pixels turn on and off in accordance with the instantaneous binary values of the processes. b Evolution of device conductance over time, showing that the devices corresponding to the correlated processes go to a high conductance state. c The distribution of the device conductance shows that the algorithm is able to pick out most of the correlated processes. d Generation of a binary stochastic process based on the rainfall data from 270 weather stations across the USA. e The uncentered covariance matrix reveals several small correlated groups, along with a predominant correlated group. f The map of the device conductance levels after the experiment shows that the devices corresponding to the predominant correlated group have achieved a higher conductance value.

As described earlier, owing to the temporal correlation between the processes, the devices assigned to those processes are expected to go to a high conductance state. We periodically read the conductance values of all PCM devices using the on-chip read circuitry and the on-chip analog-to-digital converter (ADC). The resulting map of the conductance values is shown in Fig. 5b. Also shown is the corresponding distribution of the conductance values (Fig. 5c). This distribution shows that we can distinguish between the correlated and the uncorrelated processes. We constructed a binary classifier by slicing the histogram of Fig. 5c according to some threshold, above which processes are labeled correlated and below which processes are labeled uncorrelated.
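This threshold construction can be sketched numerically. The sketch below draws synthetic conductance values as stand-ins for the two modes of Fig. 5c (all distribution parameters are illustrative) and computes the area under the precision-recall curve by step integration, i.e., the average precision.

```python
import random
random.seed(2)

# Synthetic stand-ins for the conductance distributions of Fig. 5c:
# a small correlated mode at higher conductance and a large uncorrelated
# mode at lower conductance (arbitrary units, illustrative parameters).
scores = [(random.gauss(8.0, 2.0), 1) for _ in range(1000)]    # correlated
scores += [(random.gauss(4.0, 2.0), 0) for _ in range(9000)]   # uncorrelated
scores.sort(reverse=True)                                      # high G first

# Sweep the threshold down the sorted list; each rank is one classifier.
# Accumulating precision * delta-recall gives the area under the
# precision-recall curve (average precision).
n_pos = sum(label for _, label in scores)
tp, auc, prev_recall = 0, 0.0, 0.0
for rank, (_, label) in enumerate(scores, start=1):
    tp += label
    precision = tp / rank
    recall = tp / n_pos
    auc += precision * (recall - prev_recall)
    prev_recall = recall

baseline = n_pos / len(scores)   # a random classifier's expected precision
```

The baseline equals the fraction of correlated processes, which is why a random classifier scores far below one; overlap between the two conductance modes is what pulls the AUC below the ideal value of one.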
The threshold parameter can be swept across the domain, resulting in an ensemble of different classifiers, each with its own statistical characteristics (e.g., precision and recall). The area under the precision-recall curve (AUC) is an excellent metric for quantifying the performance of the classifier. The AUC is 0.93 for the computational memory approach compared to 0.095 for a random classifier that simply labels processes as correlated with some arbitrary probability. The performance is still short of that of an ideal classifier with AUC equal to one, and this is attributed to the variability and conductance fluctuations discussed earlier. Nevertheless, it is remarkable that in spite of these issues, we are able to perform the correlation detection with high accuracy. Note that there are several applications, such as sensory data processing, where these levels of accuracy would be sufficient. Moreover, we could improve the accuracy by using multiple devices to interface with a single random process and by averaging their conductance values. This concept is also illustrated in the experimental demonstration on weather data that is described next. The conductance fluctuations can also be minimized using concepts such as projected phase change memory 34. Note that the correlations need to be detected within a certain period of time. This constraint arises from the finite conductance range of the PCM devices. There is a limit to u a and hence to the maximum conductance values that the devices can achieve. The accumulation curves in Fig. 4d clearly show that the mean conductance values begin to saturate after the application of a certain number of pulses. If the correlations are not detected within a certain amount of time, the conductance values corresponding to the correlated processes saturate while those corresponding to the uncorrelated processes continue to increase.
Once the correlations have been detected, the devices need to be RESET, and the operation has to be resumed to detect subsequent correlations. The application of shorter SET pulses is one way to increase this time period. The use of multiple devices to interface with the random processes can also increase the overall conductance range. As per Eq. 6, we would expect the level of separation between the distributions of correlated and uncorrelated groups to increase with increasing values of the correlation coefficient. We could confirm experimentally that the correlated groups can be detected down to very low correlation coefficients such as c = 0.01 (Supplementary Note 2, Supplementary Movie 1 and Supplementary Movie 2). We also quantified the performance of the binary classifier by obtaining the precision-recall curves and could show that in all cases, the classifiers performed significantly better than a baseline, random classifier (Supplementary Fig. 2). Experiments also show that there is a potential for this technique to be extended to detect multiple correlated groups having different correlation coefficients (Supplementary Note 3).

Experimental demonstration with weather data

A second demonstration is based on real-world data from 270 weather stations across the USA. Over a period of 6 months, the rainfall data from each station constituted a binary stochastic process that was applied to the computational memory at one-hour time steps. The process took the value 1 if rainfall occurred in the preceding one-hour time window; otherwise it was 0 (Fig. 5d). An analysis of the uncentered covariance matrix shows that several correlated groups exist and that one of them is predominant. As expected, a strong geographical correlation with the rainfall data also exists (Fig. 5e). Correlations between the rainfall events are also reflected in the geographical proximity between the corresponding weather stations.
To detect the predominant correlated group using computational memory, we used the same approach as above, but with four PCM devices interfacing with each weather station's data stream. The four devices were used to improve the accuracy. At each instance in time, the SET current was calculated to be equal to 0.0013 × M ( k ) μA. Next, the PCM chip was instructed to program the 270 × 4 devices sequentially with the calculated SET current. The on-chip write circuitry applies a write pulse with the calculated SET current to all PCM devices for which X i ( k ) = 1. We chose not to program the devices if the SET current was less than 25 μA. The duration of the pulse was fixed at 50 ns, and the maximum SET current applied to the devices was 80 μA. The resulting device conductance map (averaged over the four devices per weather station) shows that the conductance values corresponding to the predominant correlated group of weather stations are comparably higher (Fig. 5f). Based on a threshold conductance value chosen to be 2 μS, we can classify the weather stations into correlated and uncorrelated weather stations. This conductance threshold was chosen to give the best classifier performance (see Supplementary Note 2). We can also make comparisons with established unsupervised classification techniques such as k -means clustering. Out of the 270 weather stations, there was a match for 245 weather stations. The computational memory approach classified 12 stations as uncorrelated that had been marked correlated by the k -means clustering approach. Similarly, the computational memory approach classified 13 stations as correlated that had been marked uncorrelated by the k -means clustering approach. Given the simplicity of the computational memory approach, it is remarkable that it can achieve this level of similarity with such a sophisticated and well-established classification algorithm (see Supplementary Note 4 for more details).
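The benefit of the four-devices-per-station redundancy can be seen in a small sketch: averaging R independent noisy readouts of the same underlying conductance shrinks the spread of the averaged value by roughly 1/√R. The Gaussian noise model and its parameters below are illustrative, not measured device statistics.

```python
import random
random.seed(3)

# Averaging redundant devices: R noisy readouts of the same underlying
# conductance, averaged together. The noise model is illustrative.
def readout(g_true, sigma=1.0):
    """One noisy conductance readout (arbitrary units)."""
    return random.gauss(g_true, sigma)

def spread(R, g_true=5.0, trials=20000):
    """Empirical standard deviation of the R-device average."""
    vals = [sum(readout(g_true) for _ in range(R)) / R for _ in range(trials)]
    mean = sum(vals) / trials
    return (sum((v - mean) ** 2 for v in vals) / trials) ** 0.5

s1 = spread(1)   # single device: spread close to sigma
s4 = spread(4)   # four devices averaged: spread close to sigma / 2
```

A tighter spread around each mode directly improves the separability of the two conductance distributions, which is why the averaged map of Fig. 5f yields a cleaner classification than a single device per station would.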
Discussion

The scientific relevance of the presented work is that we have convincingly demonstrated the ability of computational memory to perform certain high-level computational tasks in a non-von Neumann manner by exploiting the dynamics of resistive memory devices. We have also demonstrated the concept experimentally at the scale of a million PCM devices. Even though we programmed the devices sequentially in the experimental demonstrations using the prototype chip, we could also program them in parallel provided there is a sufficient number of write modules. A hypothetical computational memory module performing correlation detection need not be substantially different from conventional memory modules (Supplementary Note 5). The main constituents of such a module will also be a memory controller and a memory chip. Tasks such as computing M ( k ) can easily be performed in the memory controller. The memory controller can then convey the write/read instructions to the memory chip. In order to gain insight into the potential advantages of a correlation detector based on computational memory, we have compared the hypothetical performance of such a module with that of various implementations using state-of-the-art computing hardware (Supplementary Note 6). For this study, we have designed a multi-threaded implementation of correlation detection, an implementation that can leverage the massive parallelism offered by graphical processing units (GPUs), as well as a scale-out implementation that can run across several GPUs. All implementations were compiled and executed on an IBM Power System S822LC system. This system has two POWER8 CPUs (each comprising 10 cores) and 4 Nvidia Tesla P100 graphical processing units (attached using the NVLink interface). A detailed profiling of the GPU implementation reveals two key insights. Firstly, we find that the fraction of time spent computing the momentum M ( k ) is around 2% of the total execution time.
Secondly, we observe that the performance is ultimately limited by the memory bandwidth of the GPU device. We then proceed to estimate the time that would be needed to perform the same task using a computational memory module: we determine the time required to compute the momentum on the memory controller, as well as the additional time required to perform the in-memory part of the computation. We conclude that by using such a computational memory module, one could accelerate the task of correlation detection by a factor of 200 relative to an implementation that uses 4 state-of-the-art GPU devices. We have also performed power profiling of the GPU implementation, and conclude that the computational memory module would provide a significant improvement in energy consumption of two orders of magnitude (Supplementary Note 6 ). An alternative approach to using PCM devices will be to design an application-specific chip where the accumulative behavior of PCM is emulated using complementary metal-oxide semiconductor (CMOS) technology using adders and registers (Supplementary Note 7 ). However, even at a relatively large 90 nm technology node, the areal footprint of the computational phase change memory is much smaller than that of CMOS-only approaches, even though the dynamic power consumption is comparable. By scaling the devices to smaller dimensions and by using shorter write pulses, these gains are expected to increase several fold 35 , 36 . The ultra-fast crystallization dynamics and non-volatility ensure a multi-time scale operating window ranging from a few tens of nanoseconds to years. These attributes are particularly attractive for slow processes, where the leakage of CMOS would dominate the dynamic power because of the low utilization rate. It can be shown that a single-layer spiking neural network can also be used to detect temporal correlations 30 . The event-based data streams can be translated into pre-synaptic spikes to a synaptic layer. 
On the basis of the synaptic weights, the postsynaptic potentials are generated and added to the membrane potential of a leaky integrate and fire neuron. The temporal correlations between the pre-synaptic input spikes and the neuronal-firing events result in an evolution of the synaptic weights due to a feedback-driven competition among the synapses. In the steady state, the correlations between the individual input streams can be inferred from the distribution of the synaptic weights or the resulting firing activity of the postsynaptic neuron. Recently, it was shown that in such a neural network, PCM devices can serve as the synaptic elements 37 , 38 . One could argue that the synaptic elements serve as some form of computational memory. Even though both approaches aim to solve the same problem, there are some notable differences. In the neural network approach, it is the spike-timing-dependent plasticity rule and the network dynamics that enable correlation detection. One could use any passive multi-level storage element to store the synaptic weight. Also note that the neuronal input is derived based on the value of the synaptic weights. It is challenging to implement such a feedback architecture in a computational memory unit. Such feedback architectures are also likely to be much more sensitive to device variabilities and nonlinearities and are not well suited for detecting very low correlations 37 , 39 . Detection of statistical correlations is just one of the computational primitives that could be realized using the crystallization dynamics. Another application of crystallization dynamics is that of finding factors of numbers, which we referred to in the introduction 20 . Assume that a PCM device is initialized in such a way that after the application of X number of pulses, the conductance exceeds a certain threshold. 
To check whether X is a factor of Y, Y number of pulses are applied to the device, re-initializing the device each time the conductance exceeds the threshold. It can be seen that if, after the application of Y pulses, the conductance of the device is above the threshold, then X is a factor of Y. Another fascinating application of crystallization dynamics is to realize matrix–vector multiplications. To multiply an N × N matrix, A, with an N × 1 vector, x, the elements of the matrix and the vector can be translated into the durations and amplitudes of a sequence of crystallizing pulses applied to an array of N PCM devices. It can be shown that by monitoring the conductance levels of the PCM devices, one obtains a good estimate of the matrix–vector product (Supplementary Note 8). Note that such an approach consumes only N devices, compared to the existing approach based on Kirchhoff's circuit laws, which requires N × N devices. In addition to the crystallization dynamics, one could also exploit other rich dynamic behavior in PCM devices, such as the dynamics of structural relaxation. Whenever an amorphous state is formed via the melt-quench process, the resulting unstable glass state relaxes to an energetically more favorable ideal glass state 25, 40, 41, 42 (Supplementary Note 9). This structural relaxation, which codes the temporal information of the application of write pulses, can be exploited to perform tasks such as the detection of rates of processes in addition to their temporal correlations (Supplementary Note 9). It is also foreseeable that by further coupling the dynamics of these devices, we can potentially solve even more intriguing problems. Suggestions of such memcomputing machines that could solve certain non-deterministic polynomial (NP) problems in polynomial (P) time by exploiting attributes such as the inherent parallelism, functional polymorphism, and information overhead are being actively investigated 43, 44.
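The factor-finding procedure described above can be captured by an idealized model in which the device is simply a counter that crosses its threshold after exactly X pulses; device variability is ignored, and the function name is hypothetical.

```python
# Idealized sketch of the factorization primitive: a device initialized so
# that its conductance crosses a threshold after exactly X SET pulses acts
# as a modulo-X counter. Variability is ignored; is_factor is a
# hypothetical helper name.
def is_factor(X, Y):
    pulses = 0                    # state of the idealized PCM device
    above = False
    for _ in range(Y):
        pulses += 1               # apply one crystallizing (SET) pulse
        above = pulses >= X       # conductance exceeds the threshold?
        if above:
            pulses = 0            # re-initialize (RESET) the device
    return above                  # state observed right after pulse Y

factors_of_12 = [X for X in range(1, 13) if is_factor(X, 12)]
# factors_of_12 == [1, 2, 3, 4, 6, 12]
```

The device is checked immediately after the Y-th pulse and before any re-initialization, so the threshold is exceeded at that moment exactly when X divides Y.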
The concepts presented in this work could also be extended to the optical domain using devices such as photonic PCM 45. In such an approach, optical signals instead of electrical signals will be used to program the devices. These concepts are also not limited to PCM devices: several other memristive device technologies exist that possess sufficiently rich dynamics to serve as computational memory 46. However, it is worth noting that PCM technology is arguably the most advanced resistive memory technology at present, with a very well-established multi-level storage capability 21. The read endurance is assumed to be unlimited. There are also recent reports of more than 10^12 RESET/SET endurance cycles 47. Note that in our experiments, we mostly apply only the SET pulses, and in this case the endurance is expected to be substantially higher. To summarize, the objective of our work was to realize a high-level computational primitive or machine-learning algorithm using computational memory. We proposed an algorithm to detect temporal correlations between event-based data streams that exploits the crystallization dynamics of PCM devices. The conductance of the PCM devices receiving correlated inputs evolves to a high value, and by monitoring these conductance values we can detect the temporal correlations. We performed a large-scale experimental demonstration of this concept using a million PCM devices, and could successfully detect weakly correlated processes in artificially generated stochastic input data. This experiment demonstrates the efficacy of this concept even in the presence of device variability and other non-ideal behavior. We also successfully processed real-world data sets from weather stations in the United States and obtained classification results similar to the k-means clustering algorithm.
A detailed comparative study with respect to state-of-the-art von Neumann computing systems showed that computational memory could lead to orders of magnitude improvements in time/energy-to-solution compared to conventional computing systems. Methods Phase change memory chip The PCM devices were integrated into the chip in 90 nm CMOS technology 32. The phase change material is doped Ge2Sb2Te5 (d-GST). The bottom electrode has a radius of approx. 20 nm and a length of approx. 65 nm, and was defined using a sub-lithographic key-hole transfer process 48. The phase change material is approx. 100 nm thick and extends to the top electrode. Two types of devices are available on-chip. They differ by the size of their access transistor. The first sub-array contains 2 million devices. In the second sub-array, which contains 1 million devices, the access transistors are twice as large. All experiments in this work were done on the second sub-array, which is organized as a matrix of 512 word lines (WL) and 2048 bit lines (BL). The selection of one PCM device is done by serially addressing a WL and a BL. A single selected device can be programmed by forcing a current through the BL with a voltage-controlled current source. For reading a PCM cell, the selected BL is biased to a constant voltage of 200 mV. The resulting read current is integrated by a capacitor, and the resulting voltage is then digitized by the on-chip 8-bit cyclic ADC. The total time of one read is 1 μs. The readout characteristic is calibrated by means of on-chip reference poly-silicon resistors. Generation of 1M random processes and experimental details Let X_r be a discrete binary process with probabilities P(X_r(k) = 1) = p and P(X_r(k) = 0) = 1 − p. Using X_r as the reference process, N binary processes can be generated via the stochastic functions 39:

$$\theta = P\left( X_i(k) = 1 \,\middle|\, X_r(k) = 1 \right) = p + \sqrt{c}\,\left( 1 - p \right)$$ (10)

$$\phi = P\left( X_i(k) = 1 \,\middle|\, X_r(k) = 0 \right) = p\left( 1 - \sqrt{c} \right)$$ (11)

$$P\left( X_i(k) = 0 \right) = 1 - P\left( X_i(k) = 1 \right).$$ (12)

It can be shown that E(X_i(k)) = p and Var(X_i(k)) = p(1 − p). If two processes X_i and X_j are both generated using Eqs. 10–12, then the expectation of their product is given by:

$$E\left( X_i(k) X_j(k) \right) = P\left( X_i(k) = 1, X_j(k) = 1 \right) = \sum_{v \in \{0,1\}} P\left( X_i(k) = 1, X_j(k) = 1 \,\middle|\, X_r(k) = v \right) P\left( X_r(k) = v \right).$$

Conditional on the value of the process X_r, the two processes X_i and X_j are statistically independent by construction, and thus the conditional joint probability P(X_i(k) = 1, X_j(k) = 1 | X_r(k) = v) can be factorized as follows:

$$E\left( X_i(k) X_j(k) \right) = \sum_{v \in \{0,1\}} P\left( X_i(k) = 1 \,\middle|\, X_r(k) = v \right) P\left( X_j(k) = 1 \,\middle|\, X_r(k) = v \right) P\left( X_r(k) = v \right) = \theta^2 p + \phi^2 (1 - p) = p^2 + c\,p(1 - p),$$

where the final equality is obtained by substituting the preceding expressions for θ and ϕ, followed by some simple algebraic manipulation. It is then straightforward to show that the correlation coefficient between the two processes is equal to c, as shown below:

$$Cov\left( X_i(k), X_j(k) \right) = E\left( X_i(k) X_j(k) \right) - E\left( X_i(k) \right) E\left( X_j(k) \right) = p^2 + c\,p(1 - p) - p^2 = c\,p(1 - p)$$

$$\frac{Cov\left( X_i(k), X_j(k) \right)}{\sqrt{Var(X_i)\,Var(X_j)}} = c.$$ (13)

For the experiment presented, we chose an X_r where p = 0.01. A million binary processes were generated.
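The construction in Eqs. 10–12 can be checked numerically. The sketch below (variable and function names are ours) generates binary processes from a common reference X_r and estimates the pairwise correlation coefficient, which should approach c for long sequences:

```python
import random

def generate(n_proc, length, p, c, seed=42):
    """Generate n_proc binary processes correlated with coefficient c (Eqs. 10-12)."""
    rng = random.Random(seed)
    theta = p + (c ** 0.5) * (1 - p)      # P(X_i = 1 | X_r = 1), Eq. 10
    phi = p * (1 - c ** 0.5)              # P(X_i = 1 | X_r = 0), Eq. 11
    x_r = [rng.random() < p for _ in range(length)]
    return [[1 if rng.random() < (theta if r else phi) else 0 for r in x_r]
            for _ in range(n_proc)]

def correlation(a, b):
    """Empirical correlation coefficient of two binary sequences."""
    k = len(a)
    ma, mb = sum(a) / k, sum(b) / k
    cov = sum(x * y for x, y in zip(a, b)) / k - ma * mb
    return cov / ((ma * (1 - ma) * mb * (1 - mb)) ** 0.5)

procs = generate(n_proc=2, length=200_000, p=0.1, c=0.25)
print(correlation(procs[0], procs[1]))    # close to c = 0.25
```

Each process has mean p regardless of c; only the co-activation statistics change, which is exactly what the crystallization-based scheme picks up.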
Of these, N_c = 95,525 are correlated with c > 0. The remaining 904,475 processes are mutually uncorrelated. Each process is mapped to one pixel of a 1000 × 1000 pixel black-and-white sketch of Alan Turing: white pixels are mapped to the uncorrelated processes; black pixels are mapped to the correlated processes. The seemingly arbitrary choice of N_c arises from the need to match the number of black pixels in the image. The pixels turn on and off in accordance with the binary values of the processes. One phase change memory device is allocated to each of the one million processes. Weather data-based processes and experimental details The weather data was obtained from the National Oceanic and Atmospheric Administration database of quality-controlled local climatological data. It provides hourly summaries of climatological data from approximately 1600 weather stations in the United States of America. The measurements were obtained over a 6-month period from January 2015 to June 2015 (181 days, 4344 h). We generated one binary stochastic process per weather station. If it rained in any given period of 1 h in a particular geographical location corresponding to a weather station, then the process takes the value 1; otherwise it takes the value 0. For the experiments on correlation detection, we picked 270 weather stations with similar rates of rainfall activity. Data availability The data that support the findings of this study are available from the corresponding author upon request.
"In-memory computing" or "computational memory" is an emerging concept that uses the physical properties of memory devices for both storing and processing information. This is counter to current von Neumann systems and devices, such as standard desktop computers, laptops and even cellphones, which shuttle data back and forth between memory and the computing unit, thus making them slower and less energy efficient. Today, IBM Research is announcing that its scientists have demonstrated that an unsupervised machine-learning algorithm, running on one million phase change memory (PCM) devices, successfully found temporal correlations in unknown data streams. When compared to state-of-the-art classical computers, this prototype technology is expected to yield 200x improvements in both speed and energy efficiency, making it highly suitable for enabling ultra-dense, low-power, and massively-parallel computing systems for applications in AI. The researchers used PCM devices made from a germanium antimony telluride alloy, which is stacked and sandwiched between two electrodes. When the scientists apply a tiny electric current to the material, they heat it, which alters its state from amorphous (with a disordered atomic arrangement) to crystalline (with an ordered atomic configuration). The IBM researchers have used the crystallization dynamics to perform computation in place. "This is an important step forward in our research of the physics of AI, which explores new hardware materials, devices and architectures," says Dr. Evangelos Eleftheriou, an IBM Fellow and co-author of the paper. "As the CMOS scaling laws break down because of technological limits, a radical departure from the processor-memory dichotomy is needed to circumvent the limitations of today's computers. Given the simplicity, high speed and low energy of our in-memory computing approach, it's remarkable that our results are so similar to our benchmark classical approach run on a von Neumann computer." 
Credit: IBM Research The details are explained in their paper appearing today in the peer-reviewed journal Nature Communications. To demonstrate the technology, the authors chose two time-based examples and compared their results with traditional machine-learning methods such as k-means clustering: Simulated Data: one million binary (0 or 1) random processes organized on a 2-D grid based on a 1000 x 1000 pixel, black and white, profile drawing of famed British mathematician Alan Turing. The IBM scientists then made the pixels blink on and off with the same rate, but the black pixels turned on and off in a weakly correlated manner. This means that when a black pixel blinks, there is a slightly higher probability that another black pixel will also blink. The random processes were assigned to a million PCM devices, and a simple learning algorithm was implemented. With each blink, the PCM array learned, and the PCM devices corresponding to the correlated processes went to a high conductance state. In this way, the conductance map of the PCM devices recreates the drawing of Alan Turing (see image above). Real-World Data: actual rainfall data, collected over a period of six months from 270 weather stations across the USA in one-hour intervals. If it rained within the hour, it was labelled "1" and if it didn't, "0". Classical k-means clustering and the in-memory computing approach agreed on the classification of 245 out of the 270 weather stations. In-memory computing classified 12 stations as uncorrelated that had been marked correlated by the k-means clustering approach. Similarly, the in-memory computing approach classified 13 stations as correlated that had been marked uncorrelated by k-means clustering. "Memory has so far been viewed as a place where we merely store information. But in this work, we conclusively show how we can exploit the physics of these memory devices to also perform a rather high-level computational primitive.
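The "simple learning algorithm" can be imitated in software to see why the devices assigned to correlated processes end at high conductance. This is a toy analogue under our own simplified update rule, not the device-level programming used on the chip: whenever a process blinks, its "device" is incremented in proportion to the total activity at that instant, so processes that tend to blink together accumulate conductance faster.

```python
import random

def detect_correlations(processes):
    """Toy analogue of the learning rule: each active process strengthens
    its 'device' in proportion to the instantaneous collective activity."""
    n = len(processes)
    g = [0.0] * n                       # conductance proxy, one per device
    for k in range(len(processes[0])):
        active = [i for i in range(n) if processes[i][k]]
        drive = len(active) / n         # collective momentum at time step k
        for i in active:
            g[i] += drive               # co-activating (correlated) inputs grow faster
    return g

# build 20 correlated and 20 uncorrelated processes with the same mean rate p
rng = random.Random(0)
p, c, K = 0.05, 0.8, 5000
ref = [rng.random() < p for _ in range(K)]
theta, phi = p + c ** 0.5 * (1 - p), p * (1 - c ** 0.5)
correlated = [[1 if rng.random() < (theta if r else phi) else 0 for r in ref]
              for _ in range(20)]
uncorrelated = [[1 if rng.random() < p else 0 for _ in range(K)]
                for _ in range(20)]
g = detect_correlations(correlated + uncorrelated)
# devices fed by correlated processes end at clearly higher conductance
```

Thresholding the final conductances separates the two groups, which is the software counterpart of the conductance map that recreates the Turing sketch.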
The result of the computation is also stored in the memory devices, and in this sense the concept is loosely inspired by how the brain computes." said Dr. Abu Sebastian, exploratory memory and cognitive technologies scientist, IBM Research and lead author of the paper. A schematic illustration of the in-memory computing algorithm. Credit: IBM Research
10.1038/s41467-017-01481-9
Medicine
Watching viruses fail to pass through face masks
Lea A. Furer et al, A novel inactivated virus system (InViS) for a fast and inexpensive assessment of viral disintegration, Scientific Reports (2022). DOI: 10.1038/s41598-022-15471-5 Journal information: Scientific Reports
https://dx.doi.org/10.1038/s41598-022-15471-5
https://medicalxpress.com/news/2022-12-viruses-masks.html
Abstract The COVID-19 pandemic has caused considerable interest worldwide in antiviral surfaces, and there has been a dramatic increase in the research and development of innovative material systems to reduce virus transmission in the past few years. The International Organization for Standardization (ISO) norms 18184 and 21702 are two standard methods to characterize the antiviral properties of porous and non-porous surfaces. However, during the last years of the pandemic, a need for faster and inexpensive characterization of antiviral materials was identified. Therefore, a complementary method based on an Inactivated Virus System (InViS) was developed to facilitate the early-stage development of antiviral technologies and quality surveillance of the production of antiviral materials safely and efficiently. The InViS is loaded with a self-quenched fluorescent dye that produces a measurable increase in fluorescence when the viral envelope disintegrates. In the present work, the sensitivity of InViS to viral disintegration by known antiviral agents is demonstrated and its potential to characterize novel materials and surfaces is explored. Finally, the InViS is used to determine the fate of viral particles within facemask layers, rendering it an interesting tool to support the development of antiviral surface systems for technical and medical applications. Introduction What started as a mysterious and unknown lung disease in Wuhan in December 2019 has evolved into a serious pandemic similar to the "Spanish flu": COVID-19. The trigger is the beta-coronavirus SARS-CoV-2, which is responsible for various symptoms such as inflammation of the throat and respiratory tract, coughing, shortness of breath, fatigue, fever, myalgia, conjunctivitis, loss of smell and impaired taste. In severe cases, it can lead to acute lung failure, multi-organ failure and death 1.
As of June 2022, the number of confirmed COVID-19 cases worldwide has reached 529 million and the number of deaths has risen to over 6 million 2. Although effective vaccines and better treatment strategies have been developed in the meantime, new highly contagious mutations and low vaccination rates in poor countries continue to force the health care systems to their limits. Therefore, non-pharmaceutical interventions such as contact reduction, hygiene and facemasks remain important to keep the propagation of the virus at bay 3. The overall goal of these measures is to slow down disease transmission between people. COVID-19 spreads mainly by respiratory droplets among people who are in close contact with each other 4. However, other mechanisms of disease transmission are also possible. For example, aerosol transmission can occur indoors in crowded and poorly ventilated spaces, and it is also possible to catch COVID-19 indirectly through touching surfaces or objects contaminated with the virus from infected people, followed by touching the eyes, nose or mouth (fomite transmission) 4. It has been shown that SARS-CoV-2 is stable from a few hours to days depending on the chemistry of the surface on which the virus is deposited. For example, van Doremalen et al. demonstrated that SARS-CoV-2 remained stable on plastic and stainless steel surfaces for several hours. Viable virus could be detected up to 72 h after application to these surfaces, and the half-life of SARS-CoV-2 was estimated to be 5.6 h on stainless steel and 6.8 h on plastic 5. In contrast, no viable virus could be detected on cardboard surfaces after 24 h, and copper surfaces tended to inactivate the virus within 4 h 5. Chin et al. confirmed that SARS-CoV-2 is more stable on smooth and hydrophobic surfaces.
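The half-life figures quoted above translate directly into remaining viable fractions via exponential decay; a quick illustration (the helper function is ours, not from the cited study):

```python
def viable_fraction(hours: float, half_life_h: float) -> float:
    """Fraction of virus remaining viable after `hours`, given its half-life."""
    return 0.5 ** (hours / half_life_h)

# stainless steel (half-life ~5.6 h): after 72 h only a tiny fraction remains,
# consistent with viable virus being barely detectable around that time
print(viable_fraction(72, 5.6))
```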
While no infectious virus could be recovered from printing and tissue culture papers after 3 h, viable and infectious virus was still present on treated smooth surfaces (glass and banknotes) after 2 days, on stainless steel and plastic after 4 days and on the hydrophobic outer layer of a surgical mask after 7 days 6. Since this high surface stability increases the risk of smear infection, it would be highly beneficial if virus particles that reach vulnerable surfaces such as touch screens, doorknobs, ventilation filters, textiles, train seats or handrails could be inactivated immediately without further disinfection processes. The same also applies to virus particles deposited on facemasks, since inappropriate mask handling is a commonly faced problem. Therefore, the development of antiviral materials, coatings and facemasks is of tremendous importance not only to fight COVID-19, but also to prevent similar pandemic or epidemic outbreaks in the future. There are several ways to inactivate or even destroy viruses. The best known are detergents or ethanol solutions, which disintegrate the lipid and protein structure of the virus 7, 8, 9, 10, radiation such as ultraviolet light with wavelengths between 200 and 300 nm (UVC) 11, 12, 13, thermal treatment like autoclaving 14, 15, oxidation 16, 17 or a high positive surface charge 18, 19. When new antiviral materials are developed, it is crucial to investigate and quantify their potential to trap, inactivate and/or kill the viruses. Currently, there are two ISO norms available, which regulate the measurement of antiviral activity on plastics and other non-porous surfaces (ISO 21702) and the determination of antiviral activity of textile products (ISO 18184). In both norms, the infectious virus titer needs to be determined by either a plaque assay or the Median Tissue Culture Infectious Dose (TCID50) method.
Both of these assays require working with the real infectious virus and are based on the principle that cytopathic effects caused by the virus can be visibly assessed in vitro. Although these assays deliver reliable and comparable results on the antiviral activity of the materials, they have several disadvantages: working with real viruses such as SARS-CoV-2 requires special infrastructure (e.g. biosafety level 3 equipped labs) and trained employees. This prevents the widespread use of these analyses in most industrial and research settings and consequently delays the rapid development of novel antiviral materials, as observed during the current COVID-19 pandemic. Therefore, novel characterisation methods are urgently needed to facilitate the fast, cheap and safe pre-screening of a large number of materials and surfaces for potential antiviral properties. In this work, an inactivated octadecylrhodamine (R18) loaded A/Brisbane/59/2007 influenza virus system (InViS) is exploited as a surrogate for an infectious virus to assess viral disintegration by simple fluorescence measurements. This novel virus system allows the study of antiviral effects of different chemicals, cleaning agents and materials in a simple way: the fluorescence increases when virus particles disintegrate, since the fluorescent probe is self-quenched inside the intact virus lipid structure (Fig. 1). Figure 1 InViS, a fluorescent approach to detect viral envelope disintegration caused by antiviral materials. In this study, we explored and defined the application areas, predictive power and limitations of this novel InViS as an alternative pre-screening method by analyzing the effect of antiviral chemicals (70% ethanol, citric acid), different potential antiviral nanoparticles (NPs) and surface coatings. In addition, we exploited the use of InViS to assess and quantify the fate and localization of virus particles in facemask layers.
Results InViS characterization The inactivation of viruses and loading with fluorescent dye 20 are established procedures applied in vaccine research. Here, we explore the potential of such an inactivated virus in the field of antiviral characterization. The Inactivated Virus System (InViS) used in this work consists of an inactivated octadecylrhodamine (R18) labeled A/Brisbane/59/2007 influenza virus solution, whose fluorescence increases upon membrane rupture or fusion 21. The measured virus concentration was 1.5 × 10^13 (± 9.8 × 10^10) particles mL^−1 and the virus particles exhibited a very homogeneous size of 110.6 ± 0.8 nm. The fluorescent label inside the virus was self-quenched, and the fluorescence of the intact virus was only 23% of that after addition of the detergent octaethylene glycol monododecyl ether (OEG) (Fig. 2A). Dynamic Light Scattering (DLS) measurements confirmed that the detergent damaged the virus particles and led to their disintegration. While the InViS system was monodisperse (polydispersity index (PdI) of 0.063 ± 0.023) before detergent administration, the addition of OEG resulted in a highly polydisperse sample (PdI of 0.443 ± 0.048) with smaller and larger peaks likely corresponding to virus fragments, agglomerated viral structures and detergent micelles (Fig. 2B). Therefore, the InViS system could be used to assess the viral disintegrating properties of chemicals and materials by a fast and simple fluorescence readout. The detection limit (defined as the particle concentration at which the viral disintegration is no longer detectable by an increase in fluorescence from the instrument baseline response) was determined by serial dilutions and was 10^8 particles mL^−1. Figure 2 Effect of OEG on InViS. (A) The fluorescence intensity of InViS was monitored continuously over 300 s. After 280 s, OEG was added to the sample to induce viral disintegration and the release of the fluorescent R18 label from the viral membrane.
(B) Particle size distribution measured by DLS before and after detergent administration. The intact InViS is highly monodisperse. When in contact with detergent, the virus disintegrates, leading to several populations of residual aggregates and a higher polydispersity index. Effect of known antiviral liquids To verify that InViS is not only sensitive to detergents, but can also detect other known antiviral compounds with different antiviral potency, we performed further experiments with 70% ethanol and citric acid (1 M). Both chemicals induced a significant increase in fluorescence signal intensity (Fig. 3). However, the mild antiviral agent citric acid 22 (pH 2.95) only induced a partial release of the R18 label, while an almost complete release similar to the positive detergent control was detected upon addition of 70% ethanol. Figure 3 Fluorescence intensity of InViS after incubation with citric acid (1 M) and 70% ethanol. InViS in PBS and InViS treated with OEG served as negative and positive control, respectively. Results represent the mean and corresponding standard deviations from three independent experiments with two replicates each. $ p < 0.01, £ p < 0.001 compared to negative controls. Assessment of potential antiviral NPs Different studies have shown that several NPs, including copper oxide (CuO), zinc oxide (ZnO), titanium dioxide (TiO2), gold (Au), silver (Ag), selenium (Se) and graphene oxide (GO), possess antiviral properties 23, 24, 25, 26, 27, 28, 29, yet the underlying toxicity mechanisms are often not fully understood. To investigate if the antiviral mechanisms of these NP types may include viral disintegration, we incubated the InViS with different NP concentrations for 2 and 24 h and measured the fluorescence intensities to detect the potential release of the fluorescent R18 label.
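Readouts like the partial release seen with citric acid versus the near-complete release with ethanol can be summarized as percent dye release relative to full detergent lysis. The normalization below is a common convention in R18 dequenching assays and is our assumption, not a formula given in the text:

```python
def percent_release(f_sample: float, f_intact: float, f_detergent: float) -> float:
    """Percent R18 release, normalized between intact InViS (0%) and
    complete OEG-mediated disintegration (100%)."""
    return 100.0 * (f_sample - f_intact) / (f_detergent - f_intact)

# with the reported intact fluorescence at 23% of the detergent maximum:
print(percent_release(23.0, 23.0, 100.0))   # 0.0   (intact negative control)
print(percent_release(100.0, 23.0, 100.0))  # 100.0 (full lysis, positive control)
```

A citric acid sample falling between these two anchors would then read out as an intermediate percent release, matching the "partial release" interpretation above.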
None of the investigated NPs were able to destroy the InViS envelope, since fluorescence levels did not increase and were comparable to the negative control (intact InViS without NPs) (Fig. 4). At increasing NP concentrations, there was even a decrease in fluorescence intensity compared to untreated control samples. Although NPs were removed by centrifugation before the fluorescence measurements to avoid potential interference responses, we performed further interference studies, which confirmed the absence of NP autofluorescence signals or non-specific effects on the fluorescence measurements (data not shown). Figure 4 Fluorescence intensity after incubating InViS with NPs. NP suspensions were incubated for 2 and 24 h with InViS solution, and suspensions were then centrifuged to remove NPs before fluorescence measurements. Negative control is InViS in PBS (0.4% v/v) and positive control is InViS in PBS (0.4% v/v) and OEG (1.25 mg mL^−1). Results represent the mean and corresponding standard deviations from at least three independent experiments with two technical replicates per experiment. *p < 0.05, $ p < 0.01, £ p < 0.001 and # p < 0.0001 compared to negative control. Assessment of non-porous antiviral surfaces After evaluating the InViS system for the detection of antiviral effects of chemicals and NPs, we further assessed its suitability to detect the antiviral potential of surfaces and coatings. To obtain a flat and non-porous antiviral surface, sterile cell culture plates were coated with a novel antiviral coating solution (patent number PCT/EP2021/060580 30). The InViS was brought in contact with either the coated surface or the liquid form of the coating solution, and a significant increase in fluorescence intensity could be detected in both cases (Fig. 5), clearly indicating viral envelope disintegration. Figure 5 Fluorescence intensity of InViS after contact with an antiviral coating.
Both the liquid and coated forms of an antiviral coating solution (patent number PCT/EP2021/060580 30) lead to a significant increase in fluorescence, indicating viral envelope disintegration. Fluorescence levels of the intact InViS in PBS and InViS incubated with OEG served as negative and positive controls, respectively. Results represent the median and corresponding standard deviations from three independent experiments with two technical replicates. $ p < 0.01 and £ p < 0.001 compared to negative control. Localization of InViS within a multilayer system Besides detecting virus disintegration, we hypothesized that the InViS system can also be suited to detect the fate and localization of virus particles in multilayered facemasks and textiles, which is highly valuable information to understand which layer(s) need to exhibit strong antiviral activity. We used a filtration efficiency test system to expose textile community masks, surgical masks and FFP2 masks (as defined in the EN 149 standard, corresponding to the US-standard N95 respirator) to aerosols containing InViS. As a comparison, filtration efficiency testing was also performed with salt solutions, which represents the current European standard procedure (EN 13274-7) to assess the filtration efficiency of textiles. After the filtration testing, the different mask layers were peeled apart and the localization of the particles was assessed by Scanning Electron Microscopy (SEM) (salt and InViS; textile community masks only) and fluorescence measurements (InViS only; all mask types). Micrographs of the outer woven layer (Fig. 6A,A',B) and the inner meltblown layer (Fig. 6C,C',D,D',F–H) of a community mask are shown before and after filtration efficiency tests with salt or InViS particles. Additionally, SEM of InViS on a TEM grid (Fig. 6E) was included to facilitate the identification of viral particles in the mask layers.
The InViS particles appeared monodisperse with a size of approximately 100 nm, while salt crystals appeared larger. Figure 6 Localization of NaCl and InViS particles in textile community masks after filtration efficiency tests. Micrographs after the filtration efficiency test with NaCl show: (A) the outermost woven layer of a textile community mask, (B) a pore of the outermost woven layer after the filtration test with NaCl particles, (C) the inner meltblown filtration layer and (D) salt particles filtered by the inner meltblown filtration layer. (A'–D') represent magnifications of the white quadrant in (A–D) (except C', which is from another area not visible in C). Micrographs after the filtration efficiency test with InViS show: (E) SEM image of InViS on a Transmission Electron Microscope (TEM) grid sample holder, (F) and (G) the inner meltblown filtration layer (salt crystals and InViS particles are highlighted with arrowheads and arrows, respectively). (H) is a reference fiber from the inner meltblown layer which did not undergo filtration experiments. During the filtration efficiency experiments, the air was first passed through the pores in the outer woven layer of the masks, where a preferential accumulation of salt particles near the pores was observed, while most of the outer layer fibers were devoid of particles (Fig. 6B,B'). InViS particles were not observed in the outer woven layer. Next, the air reached the inner meltblown layer, which showed a considerable accumulation of salt (Fig. 6D) and InViS (Fig. 6F,G) particles. To further corroborate the preferential localization of InViS particles to the inner meltblown layer in a semi-quantitative manner, complementary fluorescence measurements were carried out. Therefore, the different mask layers were treated with ethanol to release the rhodamine from the adsorbed InViS.
The fluorescence intensity corresponding to each layer for a surgical, a textile and an FFP2 facemask is shown in Fig. 7A,B. This wash-out resulted in InViS disintegration, but still allowed the quantification of virus particles in each layer, as the fluorescence intensity is proportional to the amount of filtered viral particles. Figure 7 Fluorescence intensity of the different layers of a textile and surgical (A) and an FFP2 (B) mask after filtration efficiency testing with InViS. The air stream flowed from right to left (from the outer to the inner layer). EtOH control is the fluorescence value of the ethanol solution only. Results represent the mean and corresponding standard deviations from three independent experiments with two technical replicates. *p < 0.05, $ p < 0.01, £ p < 0.001 and # p < 0.0001 compared to controls. Discussion The severe COVID-19 pandemic has dramatically changed people's lives in the past 2.5 years. Things that used to be taken for granted, such as a regular job in the office, meeting friends, attending cultural events and travelling, were suddenly no longer possible. Lock-downs forced people all over the world to stay at home and had a negative impact on the economy. Healthcare systems reached their capacity limits, and many people suffered or even died from a severe illness. To fight the pandemic, many researchers and clinicians worked under high pressure to understand the disease and develop efficient vaccine and treatment options. In parallel, the textile industry worked intensively on the development of user-friendly, comfortable and effective facemasks to slow down disease transmission between people 31. Soon, the development of antiviral materials and surfaces was recognized as an efficient means to slow down the spread of SARS-CoV-2 and prevent infections by direct contact. However, the development of antiviral materials is a time- and cost-intensive process.
One of the major issues was that the antiviral properties of newly developed coatings and materials could only be assessed by conducting tests with the real virus (ISO 21702 and ISO 18184). This required trained employees and a special infrastructure (e.g. biosafety level 3 equipped labs), which were difficult to access for most material developers and considerably delayed the development of innovative antiviral materials. Therefore, alternative test methods that are inexpensive as well as easy and safe to handle are highly valuable to facilitate the rapid pre-screening of novel antiviral material designs to advance innovation and rapidly identify promising candidates for further testing with the real virus. In this study, we present InViS, a novel alternative virus system that allows a fast, cheap and safe detection of the virus disintegration activity of liquids, compounds and materials by simple fluorescence measurements. We used an inactivated Rhodamine-18 labeled A/Brisbane/59/2007 influenza virus that has close structural similarities to SARS-CoV-2, namely constituting an enveloped RNA virus of ~100 nm in size with hemagglutinin on its surface. The virus has been inactivated with β-propiolactone, which preserves the antigenic virus structure but renders the virus non-infectious due to alkylation of nucleic acid bases, suppression of genome replication, induction of genome degradation, and protein and genome cross-linking 32. The most interesting feature of InViS is that the fluorescent probe is self-quenched within the viral membrane. As a consequence, the fluorescence of InViS significantly increases when the virus particles disintegrate and release the fluorescent dye. It is known that ethanol can disintegrate virus particles by lipid membrane dissolution and protein denaturation 33, 34, 35. Similarly, a low pH can lead to membrane degradation 36, 37, 38.
Therefore, we exposed InViS to 70% ethanol and citric acid to test its sensitivity against known antiviral liquids. Indeed, both chemicals led to a significant increase in fluorescence. When the InViS particles were in contact with the ethanol solution, complete viral disintegration occurred within 5 min. In the case of the citric acid solution, the viral disintegration remained partial, indicating that the InViS system can deliver (semi-)quantitative data on virus disintegration. Because some NPs can have antiviral properties, they are extensively explored for the development of novel antimicrobial materials and coatings 16 , 23 , 39 . Therefore, we investigated whether Ag, Au, CuO, ZnO, TiO 2 and graphene oxide NPs are able to disintegrate InViS. Although antiviral effects have been described for the investigated NP types, none of the tested NPs induced viral envelope disintegration, even upon prolonged exposure for up to 24 h. We even observed a slight decrease in fluorescence signals with increasing NP concentrations, which could be due to particle adsorption to and/or penetration of the viral capsule surface and subsequent removal from the liquid sample during the centrifugation step. This would be in line with results published by Kim et al., where a co-precipitation of Au NPs and Influenza A virus during centrifugation was reported 24 . These results for the NPs are consistent with the existing literature, which suggests that the antiviral activities of most NPs rely on effects other than capsule disintegration. For example, Ag NPs were shown to bind or denature the viral capsid protein or inhibit the virus from binding to cell receptors, therefore preventing virus entry into the host cells 23 . NPs containing Cu could catalyze the generation of radicals via Fenton or Fenton-like reactions, oxidizing the capsid proteins and consequently blocking the viral infection at an early stage 23 .
AuNPs have been shown to oxidize the disulfide bonds of the hemagglutinin glycoprotein on the viral surface, causing its inactivation and thus impeding the membrane fusion of the virus with host cells 23 . TiO 2 NPs may damage lipids and glycoproteins in the viral envelope 40 . Graphene nanosheets were able to interrupt hydrophobic protein–protein interactions and graphene oxide could adsorb virus particles, therefore preventing their interaction with the cell membrane. With this knowledge, the biggest limitation of our InViS system becomes evident: membrane dissolution is not the only inactivation mechanism and therefore not all antiviral materials can be characterized using InViS. Nevertheless, many antiviral materials rely on viral envelope disintegration. The aim of InViS is not to replace the existing and approved ISO tests. Rather, it was designed to offer a simple, cheap and safe alternative to assess viral disintegration (which prevents potential resistance development) at an early stage of research and development. The system is particularly interesting for researchers and manufacturers who want to assess the efficacy of materials that are designed to disintegrate the lipid envelope. During this study, we evaluated such an antiviral coating solution (patent number PCT/EP2021/060580 30 ) with InViS. We report a strong increase in fluorescence and confirm that the coating solution indeed resulted in viral disintegration. Such rapid tests can be highly beneficial in the development of antiviral liquids and coatings. For example, they allow the screening of a large number of different substances, compositions and concentrations to identify the most promising solution for further development. Additionally, InViS can be used to detect the fate and localization of viral particles in multilayered structures, textiles and facemasks. We introduced InViS in a filtration efficiency system, where the performance of different types of facemasks was evaluated.
This allowed the use of aerosols with biologically relevant virus particles instead of the substances prescribed by the standards, such as salt or oil. Due to the mixed presence of salt crystals and viral particles, it was not possible to measure the filtration efficiency directly from the virus aerosol using the particle analyzer. Nevertheless, we obtained important information regarding the accumulation of virus particles in the different mask layers and the relevance of using NaCl particles as a model for viral particles. Imaging by SEM of the different layers revealed that both salt and virus particles preferentially accumulated in the inner meltblown layer, suggesting that NaCl aerosols could be representative of virus particles despite their different characteristics in terms of size and shape. Nevertheless, an adaptation of the filtration efficiency bench could be implemented to conduct filtration efficiency tests with InViS. It would be interesting to quantify the filtration efficiency by collecting the residual viral particles that passed through the tested facemasks and filters, for example with a bubbler which allows remaining viral particles to be collected in a liquid solution. By comparing the residual fluorescence of mask samples with blank samples, variations in filtration efficiency may be detected. This would allow an evaluation of the filtration efficiency of face masks using virus particles and bring us one step closer to determining the filtration properties of medical and technical filtration systems against real viruses in a more relevant exposure scenario. In conclusion, we report the development of a novel method for assessing potential antiviral compounds and surfaces with an inactivated virus system (InViS), enabling a fast, inexpensive and safe assessment of virus disintegration by simple fluorescence measurements.
InViS can be further used to study the fate and localization of viral particles on non-porous as well as porous materials such as technical and medical textiles, rendering it a valuable tool to support the development of novel antiviral materials, coatings and facemasks. Materials and methods Virus inactivation and fluorescent dye loading Chemically inactivated and purified monovalent influenza virus A/Brisbane/59/2007 (H1N1) solution at GMP grade was obtained from Seqirus (Melbourne, Australia); its hemagglutinin (HA) content (1.6 mg mL −1 ) was determined by Single Radial Immunodiffusion Assay (SRID). Rhodamine B octadecyl ester perchlorate (R18) was purchased from Merck KGaA (The Netherlands) and dissolved in HPLC grade anhydrous ethanol at a final concentration of 10 mM. R18 solution (40 µL) was added to an influenza solution (1 mL) dropwise at room temperature (RT) under continuous stirring at 200 rpm for 15 min. Non-incorporated R18 was removed by separation on a Sephadex G50 column (Merck KGaA). The void volume fraction was collected and further characterized. Virus characterization Particle size, size-distribution and concentration were analyzed by Nanoparticle Tracking Analysis on a Malvern NanoSight LM10 instrument. The sample was diluted 1:10,000 in HNE buffer (HEPES (10 mM) pH 7.4, NaCl (142.5 mM), EDTA (5 mM), filtered through a 0.1 µm syringe filter before use) and injected into the analysis chamber of the 405 nm laser module with a constant flow of 70 units at a controlled temperature of 25 °C and a viscosity setting of 0.975–0.976 cP. Five captures were performed, each with a duration of 60 s. Camera setting was level 15 with a detection threshold of 3. R18 fluorescence and fusion activity were determined to confirm fluorescence quenching and the presence of HA on the virus surface as described 20 , 41 . For the fusion of influenza virosomes with erythrocyte ghosts, the medium was acidified to pH 4.5.
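As a quick sanity check on the dye-loading conditions described above, the nominal R18 concentration after mixing follows directly from the two volumes and the stock concentration given in the text (a back-of-the-envelope sketch; no values beyond those stated are assumed):

```python
# Worked example: final R18 concentration after dye loading.
# 40 uL of 10 mM R18 stock added to 1 mL of influenza solution.
r18_vol_ul = 40.0      # volume of R18 stock added (uL)
r18_stock_mm = 10.0    # R18 stock concentration (mM)
virus_vol_ul = 1000.0  # influenza solution volume (uL)

moles_nmol = r18_vol_ul * r18_stock_mm          # uL * mM = nmol -> 400 nmol
final_mm = moles_nmol / (r18_vol_ul + virus_vol_ul)
print(f"final R18 concentration: {final_mm:.2f} mM")  # ~0.38 mM before column clean-up
```

The Sephadex G50 step then removes the non-incorporated fraction, so the effective membrane-bound dye concentration is lower than this nominal value.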
R18 fluorescence was measured continuously at excitation and emission wavelengths of 560 and 590 nm, respectively. For calibration of the fluorescence scale in fusion experiments, the initial fluorescence of the labeled membranes was set to zero and the fluorescence at infinite probe dilution to 100%. The latter value was obtained after addition of OEG (Merck KGaA, The Netherlands, final concentration 1 mM). To confirm that the virus remained intact after shipment, DLS measurements were performed to determine the hydrodynamic diameter of the virus particles and their polydispersity index in PBS before and after the addition of the OEG (Zetasizer Nanoseries, Nano-ZS90, Malvern, Worcestershire, UK, 1.25 mg mL −1 ). Furthermore, fluorescence measurements of the serially diluted virus with and without OEG (1.25 mg mL −1 , 5 min of incubation time) were performed with a Horiba FluoroMax SpectraFluorometer to establish the detection limit. Effects of known antiviral compounds 70% ethanol (CAS 64-17-5) and citric acid (1 M; CAS: 77-92-9) were incubated with InViS (0.4% v/v). The samples were homogenized with a vortex mixer and incubated for 5 min at RT. Fluorescence measurements were performed using a Horiba FluoroMax SpectraFluorometer. The excitation wavelength was 560 nm and emission spectra were measured between 580 and 650 nm. An InViS solution in PBS (0.4% v/v) was used as a negative control to provide the fluorescence intensity of the intact virus. As positive control, OEG (CAS: 3055-98-9; 2.5% v/v) was added to disintegrate the virus and indicate the corresponding fluorescence intensity. NP characterization and dispersion To study potential antiviral effects of NPs, we used particles that were used and fully characterized in previous studies. The most relevant properties of the particles are summarized in Table 1 . Table 1 Characteristics of the investigated NPs.
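The calibration described above maps raw fluorescence onto a 0–100% dequenching scale, with the intact (self-quenched) virus defining 0% and the detergent-lysed sample at infinite probe dilution defining 100%. A minimal sketch of that normalisation (function and variable names are illustrative, not from the authors' code):

```python
def percent_dequenching(f_sample: float, f_initial: float, f_max: float) -> float:
    """Map a raw fluorescence reading onto the 0-100% scale: the initial
    (quenched) fluorescence of intact virus is 0%, the fluorescence at
    infinite probe dilution after detergent (OEG) lysis is 100%."""
    if f_max == f_initial:
        raise ValueError("calibration endpoints must differ")
    return 100.0 * (f_sample - f_initial) / (f_max - f_initial)

# Illustrative readings in arbitrary units: intact virus (f0), full lysis (f100)
f0, f100 = 120.0, 920.0
print(percent_dequenching(520.0, f0, f100))  # 50.0 -> partial disintegration
print(percent_dequenching(920.0, f0, f100))  # 100.0 -> complete dye release
```

On this scale a partial increase, such as that seen with citric acid, reads directly as a partial-disintegration percentage.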
Full size table Particles available as suspensions (Ag-COONa, Au-5-COONa) were diluted with ultrapure water to a stock suspension of 1 mg mL −1 and homogenized using a vortex mixer (1 min). Particles available as powder (CuO, graphene oxide, TiO 2 and ZnO) were suspended in ultrapure water to a stock suspension of 1 mg mL −1 using a probe sonicator operating at 230 V/50 Hz (Branson Sonifier 250, Branson Ultrasonic Co., Danbury, CT, USA, probe diameter of 6.5 mm, maximum peak-to-peak amplitude of 247 μm) for 5 min at 13 W. Assessing antiviral effects of NPs Different particle concentrations (0.1, 1, 10 and 100 µg mL −1 ) were incubated with a solution of InViS in PBS (0.4% v/v; stock concentration of InViS: 1.5 × 10 13 particles mL −1 ) for 2 and 24 h. The incubation was carried out at RT in the dark and with continuous shaking. The suspensions were centrifuged for 10 min at 4500× g to remove the NPs. Fluorescence of the supernatants was measured with a Horiba FluoroMax SpectraFluorometer. To quantify the antiviral effect, the fluorescence was compared to the fluorescence of a virus suspension in PBS (negative control) and a virus suspension treated with OEG (1.25 mg mL −1 ; positive control, where all the Rhodamine should be released). To exclude autofluorescence or fluorescence quenching of the NPs, the fluorescence of pure NP suspensions (without virus) and the fluorescence of NPs incubated with virus and detergent were also measured. Effects of antiviral coating on flat, non-porous surfaces An antiviral solution [patent number: PCT/EP2021/060580 30 ] was used to create an antiviral coating. First, the antiviral properties of the liquid form were characterized with a 5 min incubation with an InViS solution (0.4% v/v) and fluorescence measurements. To characterize the properties of the coated form, a volume of 0.5 mL was spread evenly in disposable petri dishes and incubated at RT for 24 h.
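The two extra control measurements mentioned above (NP-only suspensions, and NPs incubated with virus plus detergent) guard against autofluorescence and quenching artifacts. One way to encode those checks as a pass/fail gate (the tolerance and the numeric readings are illustrative assumptions, not values from the paper):

```python
def np_controls_ok(f_np_alone: float, f_np_virus_detergent: float,
                   f_pos: float, rel_tol: float = 0.1) -> bool:
    """Return True if the nanoparticle controls show neither appreciable
    autofluorescence (NP-only signal small relative to the positive control)
    nor quenching (NP + virus + detergent recovers the positive-control
    signal within rel_tol)."""
    no_autofluorescence = f_np_alone < rel_tol * f_pos
    no_quenching = abs(f_np_virus_detergent - f_pos) < rel_tol * f_pos
    return no_autofluorescence and no_quenching

# Illustrative readings (arbitrary units)
print(np_controls_ok(f_np_alone=20.0, f_np_virus_detergent=880.0, f_pos=900.0))  # True
print(np_controls_ok(f_np_alone=20.0, f_np_virus_detergent=500.0, f_pos=900.0))  # False: NPs quench the dye
```

Only samples passing both checks allow the supernatant fluorescence to be interpreted as a disintegration readout.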
The ISO norm 21702, which covers the antiviral characterization of non-porous materials, was used as a guideline for sample preparation. The virus inoculum, 0.4 mL for each sample, was composed of a 20% v/v InViS stock solution in PBS. An inoculum with InViS (20% v/v) in PBS and an inoculum with InViS in PBS (20% v/v) and OEG (62.5 mg mL −1 ) in empty petri dishes were used as a negative and positive control, respectively. An inert polymer (low density polyethylene (LDPE)) film (40 mm × 40 mm) was used to cover the inoculum. The film was gently pressed down to form a sandwich structure and maximize the contact surface area between the inoculum and the antiviral sample, while preventing leakage beyond the edges of the inert film. The samples were stored for 24 h in the dark at RT. 20 mL of PBS were added to the petri dish and the dishes underwent agitation to ensure the homogenization of the freshly added PBS and the remaining inoculum. After mixing, 2 mL of the washout were collected and pipetted into transparent cuvettes. The emission fluorescence spectra of the washout were characterized using the Horiba FluoroMax SpectraFluorometer. The excitation wavelength was set to 560 nm and the emission intensity was measured between 580 and 650 nm. For each sample, two replicates were analyzed. Filtration efficiency bench To assess the fate and localization of InViS and salt particles in facemasks, the filtration efficiency set-up presented in Fig. 8 was used. By use of a pump system, a constant air flow of 8 L min −1 was generated through the specimen (based on EN 13274-7), mimicking the human breathing volume at light physical exertion while maintaining a relative humidity in the final aerosol of 30–40% at room temperature. A circular piece of a facemask was mounted into a sample holder inside a small containment chamber and the particles diffusing through the specimen were quantified in real time by using a particle analyzer (Cambustion DMS500).
A solution of InViS in ultrapure water (10 11 particles mL −1 ) or NaCl [2 g mL −1 , aerosol concentration between 4 and 12 mg m −3 and median particle size between 60 and 100 nm] was fed to the aerosol generator (AGK 2000 Palas). As the virus stock solution was composed of virus in PBS, the aerosol solution contained 1% v/v of PBS. Therefore, not only InViS, but also PBS residues were present in the aerosol. Figure 8 Schematic description of the filtration efficiency bench. The virus or NaCl solution was placed in the aerosol generator. A constant airflow containing viral or salt particles was generated through the specimen based on DIN 14683. The particle size distribution was measured in the particle analyzer. Full size image Localization in facemask layers After the filtration efficiency experiments, the mask layers were peeled apart and analyzed separately to localize the InViS particles. First, scanning electron microscopy (SEM) images of the inner and outer layers provided a qualitative analysis of the particle presence on the mask fibers. For this, a Hitachi S-4800 (Hitachi High-Technologies, Canada) SEM was used. Prior to imaging, the mask layers were mounted onto SEM stubs with a conductive double-sided carbon tape and sputter coated with 7 nm of gold/palladium (LEICA EM ACE600) to reduce electron charging effects. The settings for SEM imaging were an accelerating voltage of 2 kV and a current flow of 10 µA. Secondly, the amount of virus on each layer was evaluated by fluorescence measurements. For this, the different mask layers were placed in 8 mL of 70% ethanol for a period of 2 h. Afterwards, 2 mL of the solution were pipetted into transparent cuvettes and fluorescence measurements were conducted with a Horiba FluoroMax SpectraFluorometer (excitation wavelength 560 nm, emission spectra between 580 and 650 nm). Statistical analysis R was used for figures and statistical calculations.
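Since the measured fluorescence of each ethanol extract is proportional to the number of viral particles captured in that layer, the relative viral load per layer follows from a simple background-subtracted normalisation. A sketch (the layer readings are invented for illustration; the ethanol-only reading serves as blank, as in the EtOH control of Fig. 7):

```python
def layer_fractions(layer_fluorescence, etoh_blank):
    """Subtract the ethanol-only blank from each per-layer reading and
    normalise so the values sum to 1 (fraction of captured virus per layer).
    Negative background-corrected values are clipped to zero."""
    corrected = [max(f - etoh_blank, 0.0) for f in layer_fluorescence]
    total = sum(corrected)
    return [c / total for c in corrected] if total > 0 else corrected

# Illustrative three-layer surgical mask, listed outer -> inner (arbitrary units)
fractions = layer_fractions([300.0, 700.0, 150.0], etoh_blank=50.0)
print([round(f, 2) for f in fractions])  # [0.25, 0.65, 0.1]
```

In this made-up example the middle (meltblown) layer carries most of the captured virus, mirroring the qualitative pattern reported for the tested masks.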
The statistical differences were assessed using the Student's t-test and the following symbols represent the corresponding p values: *p < 0.05, $ p < 0.01, £ p < 0.001 and # p < 0.0001. Data availability Data supporting this study are provided upon reasonable request to the corresponding authors.
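The authors performed their figures and statistics in R; the symbol coding of p values used throughout (and in Fig. 7) follows fixed thresholds that can be expressed compactly. A Python sketch of that mapping, shown purely for illustration:

```python
def significance_symbol(p: float) -> str:
    """Map a p value from a Student's t-test onto the symbols used in the
    figures: * p<0.05, $ p<0.01, £ p<0.001, # p<0.0001; 'ns' otherwise."""
    for threshold, symbol in [(0.0001, "#"), (0.001, "£"), (0.01, "$"), (0.05, "*")]:
        if p < threshold:
            return symbol
    return "ns"

print(significance_symbol(0.03))     # *
print(significance_symbol(0.0005))   # £
print(significance_symbol(0.2))      # ns
```

Checking the thresholds from the most stringent downwards guarantees that each p value receives the strongest symbol it qualifies for.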
Using a new analytical method, Empa researchers have tracked viruses as they pass through face masks and compared where they are trapped in the filter layers of different types of masks. The new method should now accelerate the development of surfaces that can kill viruses, the team writes in the journal Scientific Reports. Using high pressure, the apparatus pushes artificial saliva fluid, colored red, with test particles through a stretched mask. This is how the researchers simulate the process of a droplet infection. The new method established at Empa is currently used by certified test centers to ensure the quality of textile face masks, because a safe and effective protective mask must meet demanding requirements: it must keep out germs, withstand splashing drops of saliva, and at the same time allow air to pass through. Now Empa researchers are going one step further: "Images taken using a transmission electron microscope show that a few virus particles manage to make their way into the innermost layer of the mask, close to the face. However, the images do not always reveal whether these viruses are still infectious," says Peter Wick of Empa's Particles-Biology Interactions lab in St. Gallen. The researchers' goal: they want to find out where exactly a virus particle is held back within a multilayer face mask during droplet infection, and which mask components could be made more efficient. "We needed new analytical methods to precisely understand the protective function of newly developed technologies such as virus-killing coatings," says Empa researcher René Rossi of the Biomimetic Membranes and Textiles lab in St. Gallen. After all, this is precisely one of the goals of the ReMask project, in which research, industry and health care experts are teaming up with Empa in the fight against the pandemic to develop new concepts for better, more comfortable and more sustainable face masks.
The new method is a fluorescent approach to detect viral disintegration caused by antiviral materials. Credit: Scientific Reports / Empa Dying beauty The new process relies on a dye, rhodamine R18, which emits colored light. Non-hazardous, inactivated test viruses are used, which are coupled to R18 and thus become dying beauties: they light up as soon as they are damaged. "The fluorescence indicates reliably, quickly and inexpensively when viruses have been killed," Wick says. Based on the intensity with which a mask layer glows, the team found that for fabric and hygiene masks, most viruses are stopped in the middle layer, between the inner and outer layers of the mask. In FFP2 masks, the third of six layers glowed the most; again, the central layer trapped a particularly large number of viruses. These findings can now be used to optimize face masks. In addition, the new process can accelerate the development of virus-killing surfaces. "Surfaces with antiviral properties must comply with certain ISO standards, which entail laborious standard tests," Wick explains. The Empa researchers' fluorescence method, on the other hand, could be a simpler, faster and more cost-effective way of determining whether a new type of coating can reliably kill viruses, as a supplement to current standards. This would be of interest both for smooth surfaces, such as on worktops or handles, and for coatings on textiles with a porous surface, such as masks or filter systems. And with the new method, this knowledge could already be integrated into the development process of technical and medical applications at a very early stage. According to Wick, this will speed up the introduction of new products, as only promising candidates will have to undergo the time-consuming and cost-intensive standardization tests.
10.1038/s41598-022-15471-5
Medicine
World's largest autism genome database shines new light on many 'autisms'
Whole genome sequencing resource identifies 18 new candidate genes for autism spectrum disorder, Nature Neuroscience (2017). DOI: 10.1038/nn.4524 Journal information: Nature Neuroscience
http://dx.doi.org/10.1038/nn.4524
https://medicalxpress.com/news/2017-03-world-largest-autism-genome-database.html
Abstract We are performing whole-genome sequencing of families with autism spectrum disorder (ASD) to build a resource (MSSNG) for subcategorizing the phenotypes and underlying genetic factors involved. Here we report sequencing of 5,205 samples from families with ASD, accompanied by clinical information, creating a database accessible on a cloud platform and through a controlled-access internet portal. We found an average of 73.8 de novo single nucleotide variants and 12.6 de novo insertions and deletions or copy number variations per ASD subject. We identified 18 new candidate ASD-risk genes and found that participants bearing mutations in susceptibility genes had significantly lower adaptive ability ( P = 6 × 10 −4 ). In 294 of 2,620 (11.2%) of ASD cases, a molecular basis could be determined and 7.2% of these carried copy number variations and/or chromosomal abnormalities, emphasizing the importance of detecting all forms of genetic variation as diagnostic and therapeutic targets in ASD. Main Autism is a term coined about a century ago, derived from the Greek root referring to 'self', and describes a wide range of human interpersonal behaviors 1 . Autistic tendencies may be recognized in many individuals as part of human variation 2 , but these features can be severe and therefore disabling 3 , 4 , 5 . The most recent Diagnostic and Statistical Manual of Mental Disorders , the fifth edition (DSM-5), uses the single omnibus classification 'autism spectrum disorder' (ASD) to encompass what once were considered several distinct diagnostic entities (such as autistic disorder, Asperger's disorder and pervasive developmental disorder not otherwise specified). The spectrum concept reflects both the diversity among individuals in severity of symptoms, from mild to severe, and the recognition of overlap among a collection of clinically described disorders 6 , 7 , 8 . 
Features of this neurodevelopmental disorder, which has a worldwide population prevalence of ∼ 1%, typically include impaired communication and social interaction, repetitive behaviors and restricted interests; these may also be associated with psychiatric, neurological or physical comorbidities and/or intellectual disability. Despite the unitary diagnostic classification, ASD is a heterogeneous spectrum, both in clinical presentation and in terms of the underlying etiology. Individuals with ASD are increasingly seen in clinical genetics services, and ∼ 10% have an identifiable genetic condition 4 , 9 . In fact, there are over 100 genetic disorders that can exhibit features of ASD (for example, Rett and Fragile X syndromes) 10 . Clearly, ASD is strongly associated with genetic factors. Dozens of susceptibility genes (for example, SHANK - and NRXN -family genes) 11 , 12 and copy number variation (CNV) loci (for example, 16p11.2 deletion and 15q11-q13 duplication) facilitate a molecular diagnosis in ∼ 5–40% of ASD cases. The variation largely depends on the cohort examined (for example, syndromic or idiopathic) and the technology used (i.e., karyotyping, microarray, whole-exome sequencing) 5 , 9 , 13 , 14 , 15 , 16 , 17 . In fact, the genetic predisposition toward ASD may be different for almost every individual 18 , making this a prime candidate for the coming age of precision medicine 6 , 7 , 19 . The first beneficiaries of a genetic diagnosis are young children, in whom formal diagnosis based on early developmental signs can be challenging but who benefit most from earlier behavioral intervention 3 , 8 . Understanding the genetic subtypes of ASD can also potentially inform prognosis, medical management and assessment of familial recurrence risk, and in the future, it may facilitate pharmacologic-intervention trials through stratification based on pathway profiles 14 . 
The vast heterogeneity also means that meticulous approaches are needed to catalog all the genetic factors that contribute to the phenotype and to consider how these interact with one another and with nongenomic elements. To move forward toward the goal of understanding all of the genetic factors involved in ASD, we recognize the need to scan the genome in its entirety using whole genome sequencing (WGS) 14 , 18 , 20 , 21 on thousands of samples, if not tens of thousands (or more) 13 , 22 , 23 , 24 . Risk variants that remain undiscovered to this point are expected to be individually rare 9 , 18 , 22 , possibly involved in complex combinations 18 , 25 and include single nucleotide variants (SNVs), small insertions and deletions (indels), more complex CNVs 14 , 18 , 20 and other structural alterations 15 , 26 . Some will reside in the ∼ 98% noncoding genome largely unexplored by other microarray and exome sequencing studies 21 , 27 , 28 . Abundant genome sequences may help to resolve the role of common variants in ASD 2 , 29 , and integrating these data with those on rare variants will aid understanding of the issues of penetrance, variable expressivity and pleiotropic effects 4 , 6 . Such research brings us to the realm of 'big data': massive sequence datasets from multitudes of individuals, requiring fast and intensive searches for meaningful patterns 30 . This is where cloud-based computing excels. Its capacity for bulk data storage, with efficient processing and built-in security, is ushering in a new model for data sharing, enabling access and collaboration across continents 31 , 32 . In our MSSNG initiative (where omission of letters from the name represents the information about autism that is missing and yet to be uncovered), we are collecting whole genome sequences and detailed phenotypic information from individuals with ASD and their families and making these data widely available to the research community ( Fig. 1 ). 
Here we describe the MSSNG infrastructure, new analyses of the first 5,205 genomes and examples of how to use the resource. Figure 1: Schematic of sample and data processing in MSSNG. An executive committee oversees the project. The parameters for DNA sample selection and (genetic and phenotypic) data are managed by the committee, including consenting and ethics protocols. Coded identifiers for samples selected for WGS are posted as they are identified at MSSNG portal ( ), so the ASD research community can monitor progress. Phenome data include subject information (identity number, year of birth, sex), family code (proband, parent, sibling), results of diagnostic tests (for example, Autism Diagnostic Observation Schedule (ADOS), Autism Diagnostic Interview–Revised (ADI-R), age at diagnosis, functional assessments, intelligence tests, body measurements and dysmorphic features). The database accommodates as much of this information as is available for each sample, although this varies widely. Future plans include incorporation of fields for comorbidities, related conditions, exposures, extended family history, interventions and other parameters as they become apparent. WGS technologies were Complete Genomics and Illumina HiSeq (2000 and X). WGS data are transferred to Google Genomics for data processing through the Google cloud. Ref-blocked genome Variant Call Format files (gVCFs) were generated and stored in Google cloud storage, and they were also processed for de novo mutation detection in the local cluster (for Complete Genomics data using filtering methods) and Google compute engine (for Illumina data using DenovoGear). Ref-blocked gVCFs and de novo mutations were annotated through the Google compute engine (using Annovar), which can be accessed through the BigQuery tables. Quality controls (QC) for the genomic data were performed in the local cluster and the Google cloud. 
The processed genetic data and the phenotypic data are accessible through the MSSNG Portal interface. The MSSNG database is designed to support incremental addition of data without changes in architecture, scaling to at least tens of thousands of genomes. New WGS and phenotype data are continually added to MSSNG as new batches of 1,000 samples are processed. DACO, data access committee; UPD, uniparental disomy; Ti/Tv, Transition to transversion ratio; IBS, identity by state; ID, identity; FASTQ, a text format for storing sequencing data with quality scores; BAM, binary alignment/map; IGV, Integrative Genomics Viewer. Full size image Results Samples and phenotypes Our pilot work 14 , 18 , 21 established four principles guiding the prioritization of the samples selected for WGS ( Table 1 ). (i) DNA from whole blood is preferred for detecting de novo mutations (especially for the proband) rather than DNA from cell lines, which may acquire variants in vitro . (ii) For a comprehensive ASD resource, it is important to sample families with different genetic characteristics in order to delineate the full spectrum of relevant variation (for example, heritable variants may differ from those arising de novo , and ascertainment biases can influence the frequency of genetic variants identified). (iii) Families with extensive phenotype data who are accessible to participate in further study are most informative for ASD. (iv) For an individual's genomic data to be used in ASD genomic research on this scale, consent must be in place, or obtainable, for WGS and for the data to be stored in a cloud-based platform. Table 1 ASD studies contributing samples for WGS Full size table Here we report on the WGS and analysis of 5,205 samples (5,193 unique individuals; 12 individuals were sequenced on two different platforms for technical replication or were from different DNA sources).
From nine collections, these samples included 2,626 samples from 2,620 individuals (2,066 unique families) diagnosed with ASD ( Table 1 ). Of the total, 3,100 samples (3,090 individuals) are from simplex (one child with ASD) and 2,105 samples (2,103 individuals) are from multiplex families (two or more affected siblings); 1,745 samples (1,740 individuals) are from probands and 879 samples (878 individuals) are from affected siblings (with the exception of two affected individuals within this cohort who are the father and paternal grandfather of a proband). The samples from individuals with ASD include 2,067 from males (2,062 individuals) and 559 from females (558 individuals), a 3.7:1 male-to-female ratio. For 339 samples (46 probands and 293 parents) only cell-line DNA was available. Based on self-reports and confirmed with genotypes from WGS or microarrays, the majority (72.6%) of participants are of European ancestry ( Supplementary Table 1 and Supplementary Fig. 1 ). We obtained informed consent for all individuals, as approved by the respective research ethics boards. We have also developed a prospective consent form for WGS in persons with ASD ( Supplementary Note ). An ASD diagnosis was of research quality when it met criteria on one ( n = 437) or both ( n = 1,361) of the diagnostic measures ( Table 2 ), the Autism Diagnostic Interview–Revised and the Autism Diagnostic Observation Schedule; it was considered a clinical diagnosis ( n = 819) when given by an expert clinician according to DSM-IV or -5. Additionally, many participants were assessed with standardized measures of intelligence (IQ), language and general adaptive function. Out of the 1,102 individuals with IQ data available, 216 (19.6%) had scores within the range for intellectual disability (full scale IQ < 70). Physical measurements are also available for some individuals ( n = 1,022). 
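The cohort ratios quoted above follow directly from the reported counts (a quick arithmetic check using only numbers stated in the text):

```python
# Counts reported in the text
ascertained_males, ascertained_females = 2067, 559  # samples from individuals with ASD
iq_available, iq_below_70 = 1102, 216               # IQ data, intellectual-disability range

print(f"male-to-female ratio: {ascertained_males / ascertained_females:.1f}:1")  # 3.7:1
print(f"ID fraction: {100 * iq_below_70 / iq_available:.1f}%")                   # 19.6%
```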
Most samples of affected individuals ( n = 1,658) were genotyped on high-resolution microarrays (see below) and some by karyotyping or gene-specific assays. Table 2 Sample phenotype summary Full size table WGS We used different WGS platforms as they became available and were tested to assess data quality characteristics. We present data generated using Complete Genomics ( n = 1,233), Illumina HiSeq 2000 ( n = 561) and HiSeq X ( n = 3,411). The different WGS approaches and tools used for mapping and variant-calling yield data with different characteristics ( Fig. 2 ), but all platforms reliably called SNVs and smaller indels (up to 100 bp; larger CNVs are described below). Relative to the human reference sequence (hg19), the average coverage across all samples on three platforms was 93%, with an average of 40.4× sequence depth ( Table 1 and Supplementary Table 1 ). On average, we detected 3,654,992 SNVs and 722,816 indels per sample ( Supplementary Table 1 ). Figure 2: Characteristics and quality of WGS from different sequencing platforms. ( a ) Number of SNVs detected per genome. ( b ) Number of indels detected per genome. ( c ) Number of rare coding SNVs detected per genome after quality filtering. ( d ) Number of rare coding indels detected per genome after quality filtering. Orange, genomes sequenced by Complete Genomics pipeline version 2.0; brown, genomes sequenced by Complete Genomics version 2.4; green, genomes sequenced by Complete Genomics version 2.5; blue, genomes sequenced by Illumina HiSeq 2000; purple, genomes sequenced by Illumina HiSeq X. Details of individual sample quality can be found in Supplementary Table 1 . Full size image Systematic detection of sequence-level de novo mutations and candidate ASD-risk genes Identification of multiple de novo mutations occurring in the same gene from unrelated individuals highlighted candidate ASD-risk genes 13 , 22 . 
Modifying our previous approaches 14,18,21 (Online Methods), we studied the 1,239 families (1,627 parent-child trios) for which child and parental WGS data were available (excluding children whose DNA was derived from cell lines). We identified 86.4 spontaneous events per genome (73.8 SNVs and 12.6 indels) (Supplementary Tables 2 and 3), including 1.3 de novo exonic variants per genome 14,18,21. Experimental validation rates for selected de novo SNVs and indels were 88.2% (494 of 560) and 85.1% (103 of 121), respectively. Most (58.3%) of the nonvalidated calls were caused by false-negative detection in the parents. In total, we detected 230 experimentally validated de novo loss-of-function (LOF) mutations (Supplementary Tables 4 and 5). To increase the power for ASD-risk gene identification, we combined our data with the de novo mutations detected in other large-scale systematic whole-exome or WGS studies, which included 2,864 de novo missense mutations and 599 de novo LOF mutations in 4,087 trios 13,23,33,34,35. To identify candidate ASD-risk genes, we initially considered genes likely to be mutation-intolerant based on the ExAC database 36 (probability of being loss-of-function intolerant (pLI) > 0.9 for LOF mutations; missense z-score > 0.2 for missense mutations) that showed a higher-than-expected mutation rate (false discovery rate (FDR) < 15%). This approach yielded 54 putative ASD-risk genes (Fig. 3a).

Figure 3: ASD-susceptibility genes and loci. (a) ASD-risk genes with higher-than-expected mutation rates from MSSNG, integrated with other large-scale, high-throughput sequencing projects. ASD-risk genes are ranked in descending order of the number of mutations found for each gene.
Other LOF mutations, including inherited LOF mutations and LOF mutations with unknown inheritance (where parents were unavailable for testing), as well as CNVs found in the MSSNG cohort, are also indicated (except for genes found by higher-than-expected de novo missense mutation rates). MSSNG data are in green and published data are in yellow. Putative ASD-risk genes identified in this study carry an asterisk. Δ indicates genes with druggable protein domains (Supplementary Table 6). (b) Pathogenic chromosomal abnormalities and CNVs identified, falling into one of four categories: chromosomal abnormalities; DECIPHER loci and other genomic disorders associated with ASD; large rare CNVs between 3 and 10 Mb; and CNVs disrupting ASD candidate genes not described in Figure 3a. Red, deletions; blue, duplications; purple, complex variants; #, CNVs shared between affected siblings; ‡, a CNV carried by an individual with a second pathogenic CNV; †, a CNV shared between individuals within an extended pedigree. Details can be found in Supplementary Table 8. Examples of CNVs affecting the NRXN1 and CHD8 genes, and the PTCHD1-AS noncoding gene, identified from the WGS are shown in Figure 4.

In addition to the de novo LOF mutations, we also analyzed de novo or maternally inherited LOF mutations on the X chromosome in affected males. We identified seven genes (MECP2, AFF2, FAM47A, KIAA2022, NLGN3, NLGN4X and PCDH11X) with multiple LOF mutations and with pLI > 0.65 (Fig. 3a). Taken together, 112 of the 2,620 subjects (4.3%) bear de novo LOF or missense mutations or inherited LOF mutations in the 61 ASD-risk genes identified (Fig. 3a and Supplementary Table 5). Among these, 43 were found to be ASD-risk genes in a previous meta-analysis of exome sequencing 24 or in other CNV studies 10,15,17.
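The higher-than-expected mutation-rate test underlying these gene lists (a binomial comparison of observed versus expected de novo counts, detailed in the Online Methods) can be sketched as follows. The gene counts, trio number and per-trio rate below are invented for illustration; the published model additionally calibrates per-gene mutation rates and rescales by a constant derived from the data.

```python
import math

def binomial_tail(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p), summed without overflow."""
    if k <= 0:
        return 1.0
    # log PMF at i = k via log-gamma, then extend upward by the recurrence
    # pmf(i+1) = pmf(i) * (n-i)/(i+1) * p/(1-p).
    log_pmf = (math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
               + k * math.log(p) + (n - k) * math.log1p(-p))
    term = math.exp(log_pmf)
    total = term
    for i in range(k, n):
        term *= (n - i) / (i + 1) * p / (1 - p)
        total += term
        if term < total * 1e-17:  # remaining tail is negligible
            break
    return min(total, 1.0)

def gene_excess_pvalue(observed, n_trios, per_trio_rate, k_scale=1.0):
    """One-sided P value that a gene carries more de novo mutations than
    expected; k_scale plays the role of the calibration constant k."""
    return binomial_tail(observed, n_trios, min(1.0, per_trio_rate * k_scale))

# Illustrative only: 5 de novo LOF mutations across ~5,300 trios in a gene
# with an assumed expected LOF rate of 1e-5 per trio.
print(gene_excess_pvalue(5, 5300, 1e-5))
```

Genes passing such a test would then be screened with the constraint filters described (pLI > 0.9 for LOF, missense z-score > 0.2) and the P values corrected for multiple testing.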
Detection of CNVs and chromosome abnormalities

We examined CNVs detected from WGS using two calling algorithms for samples sequenced on Illumina platforms or provided by Complete Genomics, and for a subset of samples we examined CNVs using additional microarray data (Online Methods). From the WGS-derived CNVs, we detected 401.4 CNVs (>2 kb) per genome. We validated these using laboratory-based methods and/or WGS read-depth comparisons (Figs. 3b and 4 and Online Methods). We found that 189 of 2,620 (7.2%) subjects carry one or more pathogenic chromosomal abnormalities (n = 21), megabase-scale CNVs (n = 25), CNVs involving genomic disorder loci (n = 69) or CNVs affecting previously reported ASD-risk genes (n = 58), all determined by standard diagnostic reporting criteria 16,17,37 and many associated with known syndromes of which ASD can be a component feature 5,9,10. There were also 22 CNVs that overlapped with the ASD-risk genes found in this study (Fig. 3a). Three of the CNVs were around or less than 10 kb, and were only detectable using WGS; five were noncoding (Fig. 4c).

Figure 4: CNV characterization via WGS reads in the MSSNG Portal. (a-c) Visualization of CNVs in WGS data. (a) A heterozygous 246-kb deletion of three exons of NRXN1 at chromosome 2p16.3 in subject 2-1428-003 (average 50% decrease in sequence read depth); (b) a 31.1-kb duplication within CHD8 at chromosome 14q11.2 in subject 2-1375-003 (average 50% increase in sequence read depth); and (c) a 125-kb deletion of exon 3 of the noncoding gene PTCHD1-AS at Xp22.11 in subject 1-0277-003 (no reads apparent, other than a small stretch of likely misaligned repetitive sequences). Left and right panels show the proximal and distal breakpoints of the CNVs, respectively. Aligned reads viewed from the BAM files in the MSSNG browser are shown, indicating the read depth. Genome coordinates are shown above and impacted genes below.
The predicted CNVs visible from the WGS data and high-resolution microarray are shown as red (deletion) and blue (duplication) bars.

For the 32 CNVs described in Figure 3b plus 17 additional CNVs, we derived a more accurate estimate of the breakpoints by visual inspection of read depth from the BAM files in the MSSNG browser. On average, the difference between the CNV size predicted by microarray data and that estimated from WGS data was 6.9 kb, and for 31 of 49 CNVs (63%) the microarray-predicted CNV was the smaller of the two. For four CNVs, the WGS-resolved breakpoints altered which exons of genes were annotated as deleted or duplicated. In another case, this resulted in a CNV from microarray no longer being classified as pathogenic, as the revised breakpoints no longer included the coding sequence.

Medical genetics and functional properties

Among the 61 ASD-risk genes with sequence-level mutations, 18 had not previously been reported in the literature (CIC, CNOT3, DIP2C, MED13, PAX5, PHF3, SMARCC2, SRSF11, UBN2, DYNC1H1, AGAP2, ADCY3, CLASP1, MYO5A, TAF6, PCDH11X, KIAA2022 and FAM47A). For two of these putative novel ASD-risk genes, mutations were found in at least three families in our data (Supplementary Fig. 2): MED13, which is related to the intellectual disability gene MED13L 38, carried putative damaging mutations in three families, and PHF3, related to PHF2, which is known to be involved in ASD 24, carried mutations in four families. PHF3 encodes a PHD finger protein that regulates transcription by influencing chromatin structure 39, a mechanism increasingly implicated in ASD 17,21,40. Other mutation-intolerant genes implicated in three or more families included PER2 and HECTD4 (Supplementary Fig. 2). While these did not meet the statistical significance criteria applied in Figure 3a, they may still represent interesting functional candidates for ASD or associated complications in these individuals.
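The read-depth evidence used above to visualize CNVs and refine breakpoints (an average ~50% depth drop for a heterozygous deletion, ~50% gain for a duplication; Fig. 4) can be illustrated with a toy window-based classifier. The window size, depths and thresholds here are illustrative, not the parameters of the CNV callers used in the study.

```python
# Toy read-depth CNV classifier: label fixed-size windows by their depth
# ratio relative to the genome-wide diploid average, then merge runs of
# identically labeled windows into segments. Thresholds are illustrative;
# the real callers (ERDS, CNVnator) use statistical segmentation.

def classify_windows(depths, diploid_mean, low=0.65, high=1.35):
    """'del' for ~0.5x depth (heterozygous deletion), 'dup' for ~1.5x."""
    labels = []
    for d in depths:
        ratio = d / diploid_mean
        if ratio <= low:
            labels.append('del')
        elif ratio >= high:
            labels.append('dup')
        else:
            labels.append('normal')
    return labels

def merge_windows(labels, window_size=500):
    """Merge consecutive same-label windows into (start, end, label)."""
    segments, start = [], 0
    for i in range(1, len(labels) + 1):
        if i == len(labels) or labels[i] != labels[start]:
            segments.append((start * window_size, i * window_size, labels[start]))
            start = i
    return segments

depths = [40, 41, 20, 19, 21, 39, 62, 60, 40]   # ~40x diploid coverage
print(merge_windows(classify_windows(depths, diploid_mean=40.0)))
```

A homozygous or hemizygous deletion would instead show depth near zero, as for the PTCHD1-AS deletion in Figure 4c.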
Notably, of these 61 ASD-susceptibility genes, 36 (59%) were associated with known syndromes and/or phenotypes in Online Mendelian Inheritance in Man (OMIM), of which CHD8, SHANK2 and NLGN3 are associated only with ASD. Most (78%) of the known syndromes and phenotypes involved intellectual disability or other related disorders, which may highlight the pleiotropic effects of these genes (Supplementary Table 6). Combining the list of 61 genes with the CNV data identified in the WGS analysis yielded a framework map of ~100 ASD-linked loci or chromosomal abnormalities (all listed in Fig. 3) for molecular diagnostic comparisons, accounting for 11.2% (294 of 2,620) of the subjects included in this study. Consistent with our previous findings 18, ASD-relevant mutations were often different in affected siblings (Supplementary Fig. 2). To assess the functional impact of genotypes, we compared the phenotypes (Table 2) of participants with de novo LOF mutations, participants with mutations in ASD-risk genes, participants with pathogenic CNVs, and participants with no identified mutation in ASD-risk genes or CNVs. Only the differences in Vineland Adaptive Behavior score (FDR = 0.04) and IQ full-scale standard score (FDR = 5 × 10−4) remained significant after multiple-testing correction using the Benjamini-Hochberg approach. Consistent with previous studies 41,42, we found that individuals with pathogenic CNVs had significantly lower IQ (P = 2 × 10−3, median difference: −8.5, 95% CI: −16 to −3; Fig. 5a). Similarly, individuals with mutations in ASD-risk genes showed a trend toward lower IQ (P = 0.02, median difference: −11, 95% CI: −15 to −1.6 × 10−6). More strikingly, however, we found that individuals carrying mutations in ASD-risk genes had significantly lower Vineland adaptive ability scores (P = 6 × 10−4, median difference: −6.5, 95% CI: −10 to −3; Fig. 5b).
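The Benjamini-Hochberg correction applied to these phenotype comparisons works by ranking P values and scaling each by the number of tests over its rank. A minimal sketch follows; the input P values are made up for illustration, not those of the study.

```python
# Benjamini-Hochberg procedure: convert raw P values to FDR-adjusted
# q-values. A test is significant at FDR level alpha if its q <= alpha.

def benjamini_hochberg(pvalues):
    m = len(pvalues)
    # Sort P values ascending, remembering their original positions.
    order = sorted(range(m), key=lambda i: pvalues[i])
    qvalues = [0.0] * m
    prev = 1.0
    # Walk from the largest P value down, enforcing monotone q-values.
    for rank in range(m, 0, -1):
        i = order[rank - 1]
        q = min(prev, pvalues[i] * m / rank)
        qvalues[i] = q
        prev = q
    return qvalues

# Illustrative P values for several phenotype measures (not the study's).
print(benjamini_hochberg([5e-4, 0.04, 0.3, 0.7]))
```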
Given that the Vineland adaptive score captures adaptive functioning better than cognitive ability, this may suggest that the ASD-risk genes identified here are more specific to ASD behavioral traits than to general cognitive deficits 43.

Figure 5: Phenotype comparison for the samples with and without identified mutations. Standard scores in (a) IQ full-scale and (b) Vineland Adaptive Behavior evaluations were compared between samples with pathogenic CNVs, de novo LOF mutations, mutations in ASD-risk genes and other samples without any of these mutations.

Many of the ASD-risk genes identified (80%; 49 of 61) were connected in gene networks (Fig. 6). These genes are enriched in synaptic transmission, transcriptional regulation and RNA processing functions, consistent with previous findings 17,21. We found that genes associated with transcriptional regulation and RNA processing are more often expressed in the brain prenatally, while synaptic-function-related genes are expressed in the brain throughout development 44. Our extended gene network revealed further interactions: candidate ASD-risk genes identified here, such as SRSF11, may closely interact with known ASD-risk genes, such as UPF3B (Fig. 6).

Figure 6: Interaction similarity network of ASD-risk genes. Connections represent gene similarity based on physical protein interactions and pathway interactions. Connection thickness is proportional to the fraction of interaction partners shared by the connected genes. The size of the node for each gene is proportional to the total mutation count (Fig. 3). Circles, genes associated with LOF mutations; diamonds, genes associated with missense mutations. Node colors correspond to BrainSpan brain expression principal component 1 (yellow, prenatal; blue, postnatal; light blue, balanced; gray, undetermined). The labels of ASD-risk genes identified in this study are displayed in red. The network was visualized using Cytoscape.
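The "fraction of interaction partners shared" that sets connection thickness in Figure 6 is, per the Methods, a pairwise weighted Jaccard index over each gene's interaction neighborhood. A minimal sketch, with invented neighborhoods and weights (the gene names and values below are illustrative, not GeneMANIA output):

```python
# Weighted Jaccard similarity between two genes' interaction neighborhoods:
# sum of element-wise minima over sum of element-wise maxima of the
# neighbors' interaction weights.

def weighted_jaccard(neigh_a, neigh_b):
    genes = set(neigh_a) | set(neigh_b)
    num = sum(min(neigh_a.get(g, 0.0), neigh_b.get(g, 0.0)) for g in genes)
    den = sum(max(neigh_a.get(g, 0.0), neigh_b.get(g, 0.0)) for g in genes)
    return num / den if den else 0.0

# Invented neighborhoods (neighbor gene -> interaction weight).
gene_a = {'U2AF1': 0.9, 'SRSF1': 0.7, 'UPF3B': 0.4}
gene_b = {'UPF1': 0.8, 'SRSF1': 0.5, 'UPF3B': 0.6}
print(round(weighted_jaccard(gene_a, gene_b), 3))
```

In the study, the resulting pairwise similarities were thresholded (with the cutoff tuned against hierarchical clustering) before visualization in Cytoscape.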
Data access and processing

All data are available in the MSSNG Google cloud or linked databases, with Autism Speaks as the MSSNG data custodian. A web-based portal was also developed (Figs. 1 and 4 and Supplementary Fig. 3). Example queries include those for retrieving predicted damaging variants for one or more genes of interest, or for retrieving all damaging de novo variants in a subject. In addition, variant annotations, sequence-read pile-ups (using the Integrative Genomics Viewer plugin) and psychometric measurements can be accessed. Researchers receive authorization from the MSSNG data access committee via an online application. Autism Speaks uses the Public Population Project in Genomics and Society to independently recommend access according to guidelines established by Autism Speaks and the Public Population Project in Genomics and Society, based on consents provided by the data donors or on research ethics board-approved waivers of consent for retrospective collections.

Discussion

Considering the breadth of data in our pilot WGS studies 14,18,21 and the global impact of ASDs, it became evident that an autism WGS project, encouraging use of data in a manner as unrestricted as possible for wide-ranging research questions, would be beneficial. We could move quickly because investments had already been made in developing biosample repositories from individuals with ASD and their families who had provided consent for their material to be used for genetic research. The resources generated and managed through MSSNG support ASD research in three related areas: (i) new gene discovery and diagnostics; (ii) genetic disease pathways, mechanisms, and pharmacologic development and trials; and (iii) open-science queries of any type, including exploration of the significant heterogeneity underlying ASD, as well as of the noncoding genome, most of which can only begin to be conceptualized now that the resource exists.
First, using the statistical framework defining genes with higher-than-expected mutation rates, we identified 18 new candidate genes for ASD or associated complications (Fig. 3 highlights ~100 diagnostic loci for ASD). Some of these newly detected mutations could reasonably be considered pathogenic and/or have possible implications for clinical management or genetic counseling for the subject or family members 4,8. Examples include screening for cardiac defects or maturity-onset diabetes of the young in cases with 1q21.1 or 17q12 deletions, respectively; secondary prevention to avoid obesity in those with 16p11.2 microdeletion 4,45; and monitoring the use of growth factors (for example, IGF-1) in PTEN mutation carriers, who may react negatively 8. In numerous other cases, including all instances of CNVs and chromosomal abnormalities, detection of the mutation would lead to prioritization of these individuals for comprehensive clinical assessment and referral for earlier intervention 3,4 and could answer long-standing questions of causation 8,16. The roles of other mutations in ASD need to be closely followed in the literature. Having the data accessible in a browser portal will continue to enable diagnosticians worldwide to remotely perform genotype-phenotype explorations of new testing results against the latest WGS research data. Second, 80% of the 61 ASD-risk genes on our refined list are connected in networks representing potential targets for pharmacologic intervention 19. Sixteen genes contained subdomains that could be targeted by pharmaceutical intervention and seven contained subdomains for which specific drug-gene interactions are known (Supplementary Table 6). For example, mutations in SCN2A identify carriers as potential candidates for drug trials involving allosteric modulators of GABA receptors 46.
By extending the search to genes affected by CNVs and/or to proteins that interact with or regulate these genes, the list of potential targets for modulating the pathways impacted in ASD expands. Additionally, the focus here was on gene products that could be pharmacologically modulated with small molecules, but the use of technologies such as oligonucleotide-based therapeutics or gene therapy further increases the list of potential interventions that could be used in addressing the biological deficits created by loss of function in these genes. Third, efforts at solving the problem of the significant heterogeneity involved in ASDs will be furthered by expanding this initiative, including by partnering with other WGS projects and coordinating all information in a single open-science platform, for which MSSNG provides a foundation. Regarding genotypic heterogeneity, using established criteria 17 , 47 , in this study we were able to resolve a molecular basis in 11.2% of ASD cases; this tally should rise as we acquire more rare variant data to compare against 22 . A notable outcome of our study was validation of the findings that CNV and chromosomal alterations contribute significantly to ASD ( Fig. 3b ). These genetic alterations also often encompass multiple genes ( Fig. 3b ), isoforms of single genes 48 and their regulatory elements 27 , 49 , and they can include noncoding genes such as the known ASD-risk gene PTCHD1-AS ( Fig. 4c ) and combinations of mutations (seven cases have both ASD-relevant SNVs and CNVs), necessitating the use of a comprehensive technology like WGS. Regarding phenotypic heterogeneity, our previous analysis of a subset of multiplex families in the MSSNG resource showed that siblings with discordant mutations tended to demonstrate more between-sibling variability than those who shared a risk variant 18 . 
In this study, with a larger sample size and access to richer phenotypic measures, our data reveal that participants bearing mutations in ASD-susceptibility genes had lower adaptive ability than participants without such identified risk variants. Adaptive functioning as measured here using the Vineland Adaptive Behavior score was composed of estimates of socialization, communication, daily living skills and motor skills 50. This finding needs to be further dissected to determine whether the association with risk variants is specific to one of these subdomains or is more linked to the composite. The same is true for the association with IQ. Large-scale computing for this project can be done from within the MSSNG cloud and/or using the investigator's local resources. Our intent is that researchers will move new code to the data (i.e., access data for analysis with the cloud platform), in particular for massive WGS and phenotypic queries, including performing meta-analyses incorporating their own data. Ultimately, the new information arising should then be broadly shared. MSSNG researchers, for example, can use open-standard tools supported by the Global Alliance for Genomics and Health application programming interfaces 31, so that tools developed by individual groups can be applied to data published elsewhere. This kind of continued interactive participation in shared open-access research will continue to enable a better understanding of ASD and set a course for other genomic initiatives in neuroscience.

Methods

Samples for WGS and data access policy. We collected 5,205 unique samples (5,193 individuals) from 2,066 unique families with children diagnosed with ASD. The cohort consists of 2,618 children with ASD (1,740 probands and 878 affected siblings). Details on the collections from which the samples were drawn are described in Supplementary Table 7. Data collection and analysis were not performed blind to the conditions of the experiments.
We recruited other siblings and members of the family across generations whenever possible. We obtained informed consents, or waivers of consent, which were approved by the Western Institutional Review Board, Montreal Children's Hospital-McGill University Health Centre Research Ethics Board, McMaster University-Hamilton Integrated Research Ethics Board, Eastern Health Research Ethics Board, Holland Bloorview Research Ethics Board and The Hospital for Sick Children Research Ethics Board. According to the consent or waiver-of-consent forms, participants agreed to make their coded genetic and phenotypic information available to researchers to help in the discovery of the DNA alterations underlying ASD and ASD-related disorders. Their coded data were uploaded to the MSSNG Google cloud database. Based on the current approved consent form, genomic and phenotypic data can be submitted to this type of online database, provided that all data are coded, that access to data is controlled and that specific data access policies are in place. The data access policy generated by the legal team at Autism Speaks was modeled on accepted practices in international research consortia, such as the International Cancer Genome Consortium (ICGC). A researcher seeking access to the data, whether to perform analyses in the cloud-based environment or to download the data for use with their own analysis tools, must apply for access following the process outlined in the Data Access Policy (Supplementary Note). Sequencing data are coded, and access to the data is controlled and governed via a research ethics board/IRB-approved data access policy (Western Institutional Review Board for use of AGRE data and other review boards for specific sites contributing data). At the time of writing, 7,214 samples from individuals with ASD or their family members had been analyzed by WGS and were available.
The goal of this project was to collect a large cohort of families to facilitate genetic analysis as previously described 22. No statistical methods were used to predetermine sample sizes. Researchers can access data at multiple stages and levels of analysis: (i) through the MSSNG portal, which provides an interface for searching, filtering and browsing the final, curated variants, annotations and statistics via a web application; (ii) using BigQuery tables (a petabyte-scale distributed data warehousing (storage) and analytics (query) service) under the user's own account to perform custom queries, which allows flexibility for development of new analyses and applications; and (iii) via the user's own Google cloud storage (GCS) bucket on request, for raw sequencing data and the results of primary mapping (BAM files) and variant calling (MasterVar, VCF, gVCF) processes. At the time of writing, 75 researchers from 17 institutions in four countries (Canada, South Korea, UK and USA) were approved for access to MSSNG data.

WGS and data storage. We extracted DNA from whole blood or lymphoblast-derived cell lines (LCLs). We assessed the DNA quality using PicoGreen and gel electrophoresis. We sequenced the 5,205 genomes using Complete Genomics (n = 1,233), Illumina HiSeq 2000 (n = 561) and HiSeq X (n = 3,411) technology. WGS by Complete Genomics (Mountain View, CA) and Illumina HiSeq 2000 was performed as previously described 14,18,21. For WGS by Illumina HiSeq X, we used between 100 ng and 1 μg of genomic DNA for genomic library preparation and WGS. We quantified DNA samples using a Qubit High Sensitivity Assay and checked sample purity using the Nanodrop OD260/280 ratio. We used the Illumina TruSeq Nano DNA Library Prep Kit following the manufacturer's recommended protocol. In brief, we fragmented the DNA to 350-bp average lengths by sonication on a Covaris LE220 instrument.
The fragmented DNA was end-repaired, A-tailed and indexed using TruSeq Illumina adapters with overhang-T added to the DNA. We validated the libraries on a Bioanalyzer DNA High Sensitivity chip to check the size distribution and the absence of primer dimers, and quantified them by qPCR using a Kapa Library Quantification Illumina/ABI Prism Kit protocol (KAPA Biosystems). We pooled the validated libraries in equimolar quantities and sequenced paired-end reads of 150-bp length on an Illumina HiSeq X platform following Illumina's recommended protocol. For samples sequenced on Illumina platforms, raw reads were uploaded to GCS. For samples sequenced by Complete Genomics, only analyzed results from the Complete Genomics pipeline were uploaded to GCS. Results of variant-calling and filtering pipelines were also uploaded to GCS for permanent archiving and sharing with MSSNG researchers, and they were processed into BigQuery tables for access via the portal.

Alignment and variant calling. Alignment and variant calling for genomes sequenced by Complete Genomics were performed as previously described 18. We processed genomes sequenced on Illumina platforms on Google cloud using Google Genomics APIs, with a pipeline that follows the best practices recommended by the Broad Institute 51. The primary inputs were paired FASTQ files (with a few samples processed from binary alignment maps (BAMs)). We aligned the reads to the reference genome (build GRCh37) using the Burrows-Wheeler Aligner (BWA, version 0.7.10). We removed duplicated reads using Picard (version 1.117). We performed local realignment and quality recalibration with the Genome Analysis Toolkit (GATK, version 3.3) on each chromosome. We detected SNVs and indels using the GATK HaplotypeCaller. We extracted nonvariant segments (reference intervals) emitted by HaplotypeCaller using a custom Java program (NonVariantSiteFilter.jar). The output file was generated in the universal variant call format (VCF).
Both the VCF output by this process and the calls from Complete Genomics samples (MasterVar) were converted to separate variants and reference blocks in VCF and saved in GCS. The variants and reference blocks were imported into Google Genomics and then exported to a BigQuery table.

Sample quality controls. We performed quality-control checks for samples using code from the Google Genomics Codelab, following the methodology developed previously 52. The checks covered (i) duplicate samples, (ii) samples per platform, (iii) genome call rate, (iv) missingness rate, (v) singleton rate, (vi) heterozygosity rate, (vii) homozygosity rate, (viii) Ti/Tv ratio, (ix) inbreeding coefficient and (x) sex inference. To reduce batch and cross-platform effects in the analysis, we applied additional quality filters to remove variants caused by technical issues. We required variants to have genotype quality scores (GQ for Illumina; VAF for Complete Genomics) of at least 99. Since our analyses focused on rare variants, we required variants to be found in the population less than 1% of the time. We also required the variant to be called more than 95% of the time as a reference allele, and less than 1% of the time as a variant, in the parents. Batch and cross-platform biases were substantially reduced after filtering (Fig. 2). Detailed procedures and findings can be found in the Supplementary Note.

Detection of de novo SNVs and indels. As described previously 14,18,21, we considered a variant to be a potential de novo mutation when it was inconsistent with Mendelian inheritance (present in the offspring, but not in either parent). For a variant in an autosomal region, we considered it a potential de novo mutation when there was a heterozygous alternative genotype in the offspring and homozygous reference genotypes in both parents.
For a variant in the X chromosome, we considered male and female offspring with different criteria: in sex-specific regions of male offspring, we considered it to be a de novo variant when there was a hemizygous alternative genotype in the offspring and a homozygous reference genotype in the mother. We considered X-linked variants in female offspring and X-linked variants in pseudo-autosomal regions in male offspring as for autosomal regions. We considered a variant in the Y chromosome to be de novo when a hemizygous alternative variant was present in the male offspring but absent at the same position in the father. We performed de novo SNV and indel detection from Complete Genomics data as previously described 18 , except that here we considered both parents with each offspring in the same family as separate trios. We used DenovoGear (version 0.5.4) for de novo SNV and indel detection on genomes sequenced by Illumina platforms, running the program on each chromosome. We also extracted high-quality variants (i.e., those that passed the quality filter) that were inconsistent with Mendelian inheritance based on GATK with allelic frequency among parents less than 1%. We defined a putative de novo SNV as an SNV with a pp_DNM from DenovoGear greater than 0.95 and overlap with GATK calls (GQ of at least 99). We defined a putative de novo indel as an indel found by both DenovoGear and GATK methods with the same start site. In addition, we performed a manual inspection on the quality of variants by inspecting reads from the BAM for variants found to be de novo by DenovoGear or GATK for ASD-risk genes. We used Primer 3 to design primers spanning at least 100 bp upstream and downstream of a putative variant. In designing primers, we avoided regions of repetitive elements, segmental duplication or known SNPs. 
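The inheritance rules above can be condensed into a small genotype check. The genotype encoding and function name below are illustrative, and the quality thresholds the pipeline applies (DenovoGear pp_DNM > 0.95, GATK GQ >= 99) are assumed to have been applied upstream.

```python
# Candidate de novo call from trio genotypes, following the Mendelian-
# inconsistency rules in the text. Simplified genotype strings: '0/0'
# homozygous reference, '0/1' heterozygous alternative, '0'/'1' hemizygous
# reference/alternative. Quality filtering is omitted here.

def is_candidate_de_novo(region, child_sex, child_gt, mother_gt=None, father_gt=None):
    diploid_rule = child_gt == '0/1' and mother_gt == '0/0' and father_gt == '0/0'
    if region == 'autosome':
        return diploid_rule
    if region == 'X':
        if child_sex == 'M':
            # Hemizygous alternative in the son, homozygous reference mother.
            return child_gt == '1' and mother_gt == '0/0'
        return diploid_rule          # daughters treated as autosomal
    if region == 'X_PAR' and child_sex == 'M':
        return diploid_rule          # pseudo-autosomal regions in sons
    if region == 'Y' and child_sex == 'M':
        # Hemizygous alternative in the son, absent in the father.
        return child_gt == '1' and father_gt == '0'
    return False

print(is_candidate_de_novo('autosome', 'F', '0/1', '0/0', '0/0'))
print(is_candidate_de_novo('X', 'M', '1', mother_gt='0/0'))
```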
We randomly selected putative de novo SNVs from the Illumina WGS data of two probands (2-1266-003 and 3-0141-000) in trio families and from the Complete Genomics WGS data of one proband (2-1292-003) in a quartet family for Sanger sequencing (Supplementary Table 3). In addition, we validated all of the de novo LOF SNVs and indels and the reported pathogenic variants from all families by Sanger sequencing, using DNA from whole blood. Candidate regions were amplified by PCR for all families and assayed by Sanger sequencing (Supplementary Table 3). No substantial difference in de novo mutation detection rate (average number of de novo mutations for CG: 88.9; Illumina: 85.2) or distribution (Supplementary Fig. 4) was found between platforms. There was a difference in the validation rate for de novo LOF mutations between the two platforms (CG: 78%; Illumina: 92%), but samples from CG constituted only 23.7% of the total samples (Table 1). We found that 27.9% of total exonic de novo mutations were contributed by CG, which is proportional to the number of samples.

Identification of ASD-risk genes. We performed a meta-analysis of de novo mutations for identification of ASD-risk genes. We concatenated the de novo mutations detected here with those detected in five previous large-scale, systematic whole-exome or WGS studies (from a total of 4,087 trios) 13,23,33,34,35. To avoid sample duplication, we checked through registries to ensure that none of the samples from MSSNG were studied in the previous large-scale exome or WGS studies. Since the raw data for the previous studies were not easily accessible, we could not identify duplicate samples based on genotypes. However, we checked the possibility of duplicated samples based on the de novo mutation profiles given by each study. Focusing on the exonic de novo mutations examined, there were only four pairs of samples sharing the same de novo variant among the 4,087 trios.
Two of these pairs were found within the same studies. While these pairs could be derived from the same samples, they constitute only a small portion (<0.1%) of the cohort. The variants were reannotated using our custom annotation pipeline (see below). There were a total of 2,864 de novo missense mutations and 599 de novo LOF mutations reported. Combining them with the de novo mutations detected in the present study (Supplementary Table 3), we performed a statistical analysis to identify genes with a higher-than-expected mutation rate based on the model framework described previously 47. The observed rate of de novo mutation for each gene was compared with its expected rate using a binomial test. To address potential bias in mutation rate between the observed data and the expected simulation, we rescaled the statistics with a constant, k, derived from the ratio of the overall de novo mutation rate observed to that expected. For LOF mutations, we required genes to have at least two de novo LOF mutations and a probability of being loss-of-function intolerant (pLI) > 0.9. For missense mutations, we required genes to have at least four de novo missense mutations and a missense z-score > 0.2 (derived based on scores from known ASD-risk genes and comparable gene-number distributions with pLI > 0.9). We corrected P values with the Benjamini-Hochberg procedure and defined candidate ASD-risk genes as having a false discovery rate (FDR) < 0.15. We also analyzed X-linked LOF mutations. We defined candidate ASD-risk genes as having at least two LOF mutations found in males or de novo LOF mutations in females, and we required genes to have pLI > 0.65 (since X-linked genes and autosomal genes have different constraints, we derived the threshold for X-linked genes from the score of MECP2: pLI = 0.69).

SNV and indel annotation. We annotated the variants on the Google cloud engine using a custom pipeline based on Annovar, as previously described 14,18,21,53.
The annotation process infrastructure includes a separate internal portal, which automates distribution of annotation jobs in parallel over a dedicated virtual machine (VM)-based computing cluster. The variant annotations were then exported to a BigQuery table. Variant information was downloaded from databases for allele frequency (using the Exome Aggregation Consortium 36 , 1000 Genomes 54 , NHLBI-ESP 55 and internal Complete Genomics control databases), genomic conservation (UCSC PhyloP and phastCons for placental mammals and 100 vertebrates 56 ), variant impact predictors (SIFT 57 , PolyPhen2 58 , Mutation Assessor 59 and CADD 60 ) and implication in human genetic disorders (Human Phenotype Ontology 61 , Human Gene Mutation Database 62 and Clinical Genomics Database 63 ). Detailed descriptions of the annotation effort can be found in the Supplementary Note . Genetic network construction. For each of the 61 ASD-risk genes, we retrieved the top 200 closely interacting gene neighbors using GeneMANIA 64 . We generated an aggregate interaction network in GeneMANIA, based on physical protein interaction and pathways with the 'gene ontology biological process' weighting option. We then computed a pairwise-weighted Jaccard index to model the similarity of the genes' interacting neighborhoods, resulting in the final gene network ( Fig. 6 ). Finally, we performed hierarchical clustering and manually optimized the weighted Jaccard index cutoff for displaying the gene network in Cytoscape 65 , so that the gene clusters suggested by the network layout algorithm were similar to the clusters suggested by hierarchical clustering. CNV analysis. For samples sequenced on Illumina platforms, we detected CNVs from WGS for each sample using two algorithms, CNVnator 66 and ERDS 67 , as previously described 14 , 21 . Algorithms were run using their default parameters. We used 500 bp as the window size for CNVnator.
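The pairwise weighted Jaccard step in the network construction above can be sketched as follows; the neighbor weights are hypothetical stand-ins for GeneMANIA interaction scores, not real data:

```python
def weighted_jaccard(a, b):
    """Weighted Jaccard similarity of two neighborhoods, each a mapping of
    neighbor gene -> interaction weight: sum of element-wise minima over
    sum of element-wise maxima (1.0 = identical, 0.0 = disjoint)."""
    keys = set(a) | set(b)
    num = sum(min(a.get(k, 0.0), b.get(k, 0.0)) for k in keys)
    den = sum(max(a.get(k, 0.0), b.get(k, 0.0)) for k in keys)
    return num / den if den else 0.0

# Hypothetical neighborhoods for two risk genes (weights are made up):
nbhd_a = {"GRIN2A": 0.9, "DLG4": 0.4, "CAMK4": 0.2}
nbhd_b = {"GRIN2A": 0.6, "DLG4": 0.4, "GRIK3": 0.3}
similarity = weighted_jaccard(nbhd_a, nbhd_b)
```

The resulting pairwise similarity matrix is what feeds the hierarchical clustering and the display cutoff described in the text.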
For CNVnator, we removed calls with >50% of q0 (zero mapping quality) reads within the CNV regions (q0 filter), except for the homozygous autosomal deletions or hemizygous X-linked deletions in males (with normalized average read depth (NRD) < 0.03). We defined stringent calls as those that were called by both algorithms (with 50% overlap). For samples sequenced by Complete Genomics, CNV calls were taken as provided, as described previously 18 . Sixty-five samples had a total number of CNVs ≥ 2 s.d. above the average number, including 28 affected individuals. We retained CNVs with size > 2 kb. We defined a rare CNV as one found in less than 1% of parents, in less than 0.1% of the population from microarray data and overlapping with a region that is at least 75% copy-number-stable according to the CNV map of the human genome 68 . We also manually inspected CNV quality by examining reads from the BAM files for confirmation. We also analyzed CNV data for 1,658 affected individuals genotyped on one or more of the following microarrays: Illumina 1M single or duo; Affymetrix CytoScan HD; Affymetrix single-nucleotide polymorphism 6.0; Illumina OMNI 2.5M; Agilent 1M CGH array; Affymetrix GeneChip Human Mapping 500K Array ( Supplementary Table 8 ). We defined rare, stringent CNVs as previously described 17 and also required them to overlap with a region that is at least 75% copy-number-stable according to the CNV map of the human genome 68 . We determined pathogenic CNVs as those resulting in chromosomal abnormalities; large rare CNVs between 3 and 10 Mb in size; genomic disorders with recurrent breakpoints (including all DECIPHER loci and other loci known to be associated with ASD) 10 , 17 and CNVs impacting coding exons of known ASD-risk genes or noncoding exons of PTCHD1-AS or MBD5 . All pathogenic CNVs found by microarray were found by WGS, except CNVs that were filtered out based on quality issues or size differences ( Supplementary Table 8 ).
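The "called by both algorithms (with 50% overlap)" criterion is commonly implemented as a reciprocal-overlap test. A sketch, interpreting the 50% as reciprocal (the text does not state this explicitly) and using made-up intervals:

```python
def reciprocal_overlap(a, b, frac=0.5):
    """True if half-open intervals a=(start, end) and b=(start, end)
    overlap by at least `frac` of the length of each interval."""
    ov = max(0, min(a[1], b[1]) - max(a[0], b[0]))
    return ov >= frac * (a[1] - a[0]) and ov >= frac * (b[1] - b[0])

def stringent_calls(cnvnator_calls, erds_calls, frac=0.5):
    """Keep CNVnator calls supported by at least one ERDS call."""
    return [c for c in cnvnator_calls
            if any(reciprocal_overlap(c, e, frac) for e in erds_calls)]

# Illustrative calls on one chromosome (chromosome handling omitted):
kept = stringent_calls([(10_000, 20_000), (50_000, 52_000)],
                       [(12_000, 22_000), (200_000, 210_000)])
```

In a full pipeline this intersection would run per chromosome and per sample, after the q0 and NRD filters described above.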
Statistical tests. We compared the scores for phenotype tests ( Table 2 ) available for four groups of samples: (i) samples with pathogenic CNVs ( n = 177), (ii) de novo LOF mutations ( n = 170), (iii) mutations in ASD-risk genes ( n = 116) and (iv) other samples without any of these mutations ( n = 2,153). Samples included in each category were mutually exclusive, and there were no replicates (randomization was not applicable). Phenotype tests investigated included (i) Vineland Adaptive Behavior standard score, (ii) Repetitive Behavior Scale–Revised overall score, (iii) Repetitive Behavior Scale overall score, (iv) Social Responsiveness total T -score, (v) Social Communication Questionnaire total score, (vi) OWLS language total standard score, (vii) PLS language total standard score, (viii) IQ full scale standard score and (ix) nonverbal IQ standard score. Data distribution was assumed to be normal, but this was not formally tested. We performed ANOVA on the mean differences among the four groups for each of these tests: (i) degree of freedom (df) = 3, (ii) df = 3, (iii) df = 2, (iv) df = 3, (v) df = 3, (vi) df = 3, (vii) df = 3, (viii) df = 3 and (ix) df = 1. The differences between samples with mutations and samples without mutations were further tested using Wilcoxon signed-rank tests (one-sided), since the scores were not normally distributed. A Supplementary Methods Checklist is available. Gene-based drug targets. The 61 genes listed in Figure 3a were annotated using D.A.V.I.D. 69 for a number of gene-ontology categories and structural elements including PFAM subdomains. The PFAM labels were compared to lists of protein families generally considered to be druggable 70 , 71 . To identify previously validated gene–drug interactions, the 61-gene list was used to search the Drug Gene Interaction Database 72 . Only those results with associated peer-reviewed publications were reported. Data Availability.
Sequence data can be accessed via the European Genome-phenome Archive (EGA) using the accession codes EGAS00001001023 and EGAS00001000850 , and all sequence data can be accessed through the MSSNG database on Google Genomics (for access, see ). Code availability. Code used in the MSSNG database can be found in the Supplementary Note . Other code is available upon reasonable request.
The newest study from the Autism Speaks MSSNG project - the world's largest autism genome sequencing program - identified an additional 18 gene variations that appear to increase the risk of autism. The new report appears this week in the journal Nature Neuroscience. It involved the analysis of 5,205 whole genomes from families affected by autism - making it the largest whole genome study of autism to date. The omitted letters in MSSNG (pronounced 'missing') represent the missing information about autism that the research program seeks to deliver. "It's noteworthy that we're still finding new autism genes, let alone 18 of them, after a decade of intense focus," says study co-author Mathew Pletcher, Ph.D., Autism Speaks' vice president for genomic discovery. "With each new gene discovery, we're able to explain more cases of autism, each with its own set of behavioral effects and many with associated medical concerns." To date, research using the MSSNG genomic database has identified 61 genetic variations that affect autism risk. The research has associated several of these with additional medical conditions that often accompany autism. The goal, Dr. Pletcher says, "is to advance personalized treatments for autism by deepening our understanding of the condition's many subtypes." The findings also illustrate how whole genome sequencing can guide medical care today. For example, at least two of the autism-associated gene changes described in the paper were associated with increased risk for seizures. Another has been linked to increased risk for cardiac defects, and yet another with adult diabetes. The findings illustrate how whole genome sequencing for autism can provide additional medical guidance to individuals, families and their physicians, the investigators say. The researchers also determined that many of the 18 newly identified autism genes affect the operation of a small subset of biological pathways in the brain. 
All of these pathways affect how brain cells develop and communicate with each other. "In all, 80 percent of the 61 gene variations discovered through MSSNG affect biochemical pathways that have clear potential as targets for future medicines," Dr. Pletcher says. Increasingly, autism researchers are predicting that personalized, more effective treatments will come from understanding these common brain pathways - and how different gene variations alter them. "The unprecedented MSSNG database is enabling research into the many 'autisms' that make up the autism spectrum," says the study's senior investigator, Stephen Scherer, Ph.D. For instance, some of the genetic alterations found in the study occurred in families with one person severely affected by autism and others on the milder end of the spectrum, Dr. Scherer notes. "This reinforces the significant neurodiversity involved in this complex condition," he explains. "In addition, the depth of the MSSNG database allowed us to identify resilient individuals who carry autism-associated gene variations without developing autism. We believe that this, too, is an important part of the neurodiversity story." Dr. Scherer is the research director for the MSSNG project and directs The Centre for Applied Genomics at the Hospital for Sick Children (SickKids), in Toronto. MSSNG is a collaboration between the hospital, Autism Speaks and Verily (formerly Google Life Sciences), which hosts the MSSNG database on its cloud platform. Traditional genetic analysis looks for mutations, or "spelling changes," in the 1 percent of our DNA that spells out our genes. By contrast, the MSSNG database allows researchers to analyze the entire 3 billion DNA base pairs that make up each person's genome. In their new study, the investigators went even further - looking beyond DNA "spelling" variations to find other types of genetic changes associated with autism. 
These included copy number variations (repeated or deleted stretches of DNA) and chromosomal abnormalities. Chromosomes are the threadlike cell structures that package and organize our genes. The researchers found copy number variations and chromosomal abnormalities to be particularly common in the genomes of people affected by autism. In addition, many of the copy number variations turned up in areas of the genome once considered "junk DNA." Though this genetic "dark matter" exists outside of our genes, scientists now appreciate that it helps control when and where our genes switch on and off. The precise coordination of genetic activity appears to be particularly crucial to brain development and function. Through its research platform on the Google Cloud, Autism Speaks is making all of MSSNG's fully sequenced genomes directly available to researchers free of charge, along with analytic tools. In the coming weeks, the MSSNG team will be uploading an additional 2,000 fully sequenced autism genomes, bringing the total to over 7,000. Currently, more than 90 investigators at 40 academic and medical institutions are using the MSSNG database to advance autism research around the world.
10.1038/nn.4524
Biology
The bovine heritage of the yak
Whole-genome analysis of introgressive hybridization and characterization of the bovine legacy of Mongolian yaks, nature.com/articles/doi:10.1038/ng.3775 Journal information: Nature Genetics
http://nature.com/articles/doi:10.1038/ng.3775
https://phys.org/news/2017-01-bovine-heritage-yak.html
Abstract The yak is remarkable for its adaptation to high altitude and occupies a central place in the economies of the mountainous regions of Asia. At lower elevations, it is common to hybridize yaks with cattle to combine the yak's hardiness with the productivity of cattle. Hybrid males are sterile, however, preventing the establishment of stable hybrid populations, but not a limited introgression after backcrossing several generations of female hybrids to male yaks. Here we inferred bovine haplotypes in the genomes of 76 Mongolian yaks using high-density SNP genotyping and whole-genome sequencing. These yaks inherited ∼ 1.3% of their genome from bovine ancestors after nearly continuous admixture over at least the last 1,500 years. The introgressed regions are enriched in genes involved in nervous system development and function, and particularly in glutamate metabolism and neurotransmission. We also identified a novel mutation associated with a polled (hornless) phenotype originating from Mongolian Turano cattle. Our results suggest that introgressive hybridization contributed to the improvement of yak management and breeding. Main Hybridization is not unusual in nature. Although interspecific hybrids are rare at the population level, around 10% of animals and 25% of plants are known to occasionally hybridize with other species 1 . Evaluation of the genome-wide magnitude of this phenomenon has only recently become possible. The first results from such evaluations show that limited introgressions of the genome are widespread with a potentially important role in environmental adaptation, as is suggested by incorporation of genetic material from local species into the genomes of colonizing species (for example, incorporation of Neanderthal DNA into the genome of non-African humans 2 , 3 and Zea mays mexicana DNA into the maize genome 4 ). 
Successful analyses have identified genes under selection, but, because of some limitations, they were rarely able to determine precisely the nature of the selective pressure, identify genetic pathways under selection and pinpoint causative polymorphisms. Yak and cattle diverged approximately 4.9 million years ago 5 . Despite anatomical and physiological differences, both species are raised in mixed herds in Central Asia and are maintained using similar husbandry practices. Recent studies have reported several examples of gene flow from cattle to yaks 6 , 7 , 8 , and the existence of polled (hornless) animals in both species that do not carry the previously reported Celtic and Friesian POLLED locus mutations 9 , 10 , 11 , 12 ( Supplementary Table 1 ) raises the possibility of a common origin for this phenotype. For this reason, and because of the large genomic data sets available for cattle, analysis of bovine introgression in Mongolian yaks represents an appealing model to identify exchange of traits of interest between domesticated species. To assess bovine introgression in Mongolian yaks, we initially sequenced two individual genomes (YAK13, homozygous for the POLLED mutation and YAK40, horned) and plotted the frequency of yak and bovine alleles for all positions of yak-specific SNPs in 70-kb sliding windows (Online Methods and Supplementary Fig. 1 ). Considering yak-specific variants to be variants that were (i) homozygous for the alternate allele in the yak reference genome 5 (hereafter referred to as YAKQIU) and (ii) absent from 235 bovine genomes ( Supplementary Table 2 ) (ref. 13 ), we estimated that at least 1.73% and 1.22% of the YAK13 and YAK40 genomes were of bovine origin ( Supplementary Fig. 1 ). To identify introgression in YAKQIU, we used the number of yak-specific SNPs per 70-kb interval as an indicator and estimated a bovine proportion of 1.06% even in the yak reference sequence. 
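The window statistic behind these introgression estimates can be sketched as follows; the 70-kb window matches the text, while the positions, expectation, and cutoff below are illustrative:

```python
import bisect

def introgression_scan(hom_positions, chrom_len, expected_per_window,
                       window=70_000, cutoff=0.2):
    """Flag windows where the count of positions homozygous for
    yak-specific alleles drops well below the genome-wide expectation,
    the signature of bovine introgression described in the text.

    hom_positions must be sorted. Returns (start, end, relative_freq)
    for each window whose relative frequency falls below `cutoff`."""
    hits = []
    for start in range(0, chrom_len, window):
        end = start + window
        n = (bisect.bisect_left(hom_positions, end)
             - bisect.bisect_left(hom_positions, start))
        rel = n / expected_per_window
        if rel < cutoff:
            hits.append((start, end, rel))
    return hits

# Toy chromosome: a yak-specific SNP every 1 kb, except a 70-kb gap.
positions = [p for p in range(0, 210_000, 1_000) if p < 140_000]
hits = introgression_scan(positions, chrom_len=210_000,
                          expected_per_window=70)
```

The actual analysis uses sliding rather than tiled windows and segments the window means (circular binary segmentation, as in the figure legends), but the drop-below-expectation logic is the same.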
To validate our whole-genome sequencing (WGS) approach for inference of local ancestry, two regions suggestive of homozygous cattle introgression in YAKQIU were PCR amplified and sequenced in YAK13 and in 12 species related to yak ( Supplementary Table 3 ). Multiple-sequence alignment and phylogenetic analyses of 2,191 bp of sequence clustered YAK13 with gaur, banteng, bison and wisent, in accordance with the phylogeny of the Bovini tribe 14 , whereas YAKQIU clustered with Mongolian Turano cattle, thus confirming cattle introgression in the yak reference genome ( Fig. 1 and Supplementary Table 4 ). Figure 1: Phylogenetic analyses of sequence data on chromosomes 9 and 25 confirm bovine introgression in the yak that was sequenced to generate the yak reference genome. ( a ) Bovine introgression plots constructed on the basis of WGS data. Blue and red dots show the relative frequencies of homozygous and heterozygous genotypes, respectively, for yak-specific polymorphisms from the sequencing results used to generate the yak reference genome. Each dot represents the number of genotypes in a 70-kb sliding window divided by the expected (genome-wide) number of polymorphisms in a 70-kb window. The background of each interval is shaded according to read depth, the mean number of reads per 70-kb window, ranging from 0 (dark gray) to >40 (white). Genotype frequencies from intervals with a white background are likely to be affected by artifacts from repeat expansions in yaks and those from dark-shaded intervals are likely to be affected by poor mappability or deletions; intervals with a neutral gray background are regarded as more robust. Introgressed intervals are identified by a break in the red line (circular binary segmentation of mean homozygous genotype frequency) and a drop in the frequency of homozygous genotypes for yak-specific alleles. 
Heterozygous genotypes for yak-specific alleles and the yellow line (circular binary segmentation of mean heterozygous genotype frequency) serve as a control to distinguish homozygous from heterozygous cattle introgression in the yak genome. These statistics suggest homozygous cattle introgression in two regions (BTA09:68.495–70.115 Mb and BTA25:17.345–19.995 Mb) in the individual that was sequenced to generate the yak reference genome. ( b ) Magnification of these two regions and details for five exons (blue rectangles) and flanking sequences from four genes ( L3MBTL3 , SAMD3 , ACSM2B and MGC134577 ) that were sequenced in 14 Bovini animals for validation of our introgression analysis based on WGS data ( Supplementary Table 4 ). The regions shown correspond to the intervals highlighted by dashed lines in a . ( c ) Neighbor-joining phylogeny of 14 haplotypes representing yaks, cattle and 10 Bovidae species, supporting homozygous introgression of cattle genes in the reference yak genome 5 . This analysis was based on sequence data from the five regions on chromosomes 9 and 25 presented in a and b , totaling 2,191 nt. The reliability (–) of the tree branches (shown at nodes) was tested by 1,000 bootstrap replicates. Full size image For a systematic analysis of cattle introgression in the Mongolian yak population, we investigated Illumina BovineHD BeadChip genotyping data ( ∼ 777,000 SNPs) for 76 animals originating from different localities ( Supplementary Table 1 ). Analysis of SNPs mapping to mitochondrial DNA ( n = 245) identified two yaks with deviating matrilineal ancestors, whereas analysis of SNPs mapping to the Bos taurus Y chromosome (BTAY; n = 921) showed an absence of bovine Y-chromosome SNPs in this panel ( Supplementary Figs. 2 and 3 ). 
We then applied a robust forward–backward algorithm (RFMix) 2 , 15 to screen for the presence of cattle haplotypes in the autosomal genomes of this panel, excluding the major histocompatibility complex (MHC) locus (for an explanation of why, see the Supplementary Note ), using (i) WGS data from three yaks to determine alleles present in yaks, (ii) a six-Bovini consensus sequence to determine ancestral states for all SNPs, and (iii) additional genotyping data from 384 cattle ( Supplementary Table 5 ) as a reference panel assumed to harbor no yak ancestry ( Supplementary Fig. 4 ). The proportion of the genome inferred to be of cattle ancestry ranged between 0.67% and 2.82% (mean ± standard error (SE) = 1.31 ± 0.36%; false discovery rate (FDR) = 0.05) per animal ( Supplementary Table 6 and Supplementary Note ), a result consistent with a severe restriction of introgression by the culling of most backcross-derived calves and the persistence of hybrid male sterility up to the third or fourth generation of backcross 16 . In total, as much as 33.2% of the bovine genome was recovered from our panel of 76 yaks, with noticeable variations between chromosomes ( Fig. 2a and Supplementary Tables 6 and 7 ). In agreement with the 'large X-effect' on hybrid male sterility (for a review, see Presgraves 17 ), BTAX was one of the least introgressed chromosomes and displayed the lowest median and maximum sizes for introgressed segments. Figure 2: Analysis of the size distribution of introgressed intervals in the yak genome reveals three major introgression events. ( a ) The minimal, maximal, average and median lengths of introgressed intervals on each of the 30 chromosomes are plotted for 76 yaks genotyped with the Illumina BovineHD SNP chip. The genome-wide average and median lengths of the introgressed intervals are represented by green and red dashed lines, respectively.
( b ) Distribution of the lengths of the bovine DNA segments introgressed into the yak genomes as estimated by our RFMix procedure. Absolute counts of bovine-derived fragments observed in (i) all 76 yaks (black curve, white data points); (ii) 26 yaks sampled in Mongolia (black curve); (iii) 50 yaks of Mongolian descent sampled in Europe (blue curve); (iv) simulated three-date admixture with cattle in 76 deintrogressed yaks with a proportion of cattle DNA of 0.0005 at 250, 0.011 at 150, and 0.0045 at 37 generations ago (red curve); and (v) continuous admixture between cattle and yaks with a proportion of cattle DNA of 0.00045 every fifth generation in a period from 40 to 220 generations ago (red curve, white data points) were divided by the number of considered haploids in each of the five groups. The lengths of the detected introgressed segments varied between 108 kb and 24.63 Mb with a median of 601 kb. The 10-Mb interval (chr. 23: 22.0–32.0 Mb) comprising the MHC region was not considered in this distribution. The figure presents intervals up to a maximal length of 5,000 kb. Longer intervals had frequencies of 0% or 1% and are not all shown here for reasons of clarity. Full size image Phylogenetic analysis revealed a close genetic relationship between the admixture source and the Mongolian Turano cattle group (Online Methods and Supplementary Fig. 5 ). Simulation results from one- and multiple-date admixture followed by segment retrieval by RFMix supported nearly continuous admixture throughout the last 1,500 years with a low proportion of cattle gametes (around 1/11,000 per generation; Fig. 2b and Supplementary Note ). While hybridization between yaks and cattle was already a common practice 1,800 years ago 16 , we could not detect admixture older than 1,500 years because of the limitations of the methods. 
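The link between tract length and admixture age exploited by these simulations follows the standard recombination-clock approximation: the expected length of an introgressed tract g generations after a single admixture pulse is roughly 100/g cM. A sketch (the 1 cM ≈ 1 Mb conversion is a rule-of-thumb assumption, not a value from the paper):

```python
def expected_tract_kb(generations, cm_per_mb=1.0):
    """Mean introgressed tract length, in kb, g generations after a
    single admixture pulse, under the exponential-decay approximation
    (mean tract = 100/g cM), converted with an assumed cM-to-Mb ratio."""
    mean_cm = 100.0 / generations
    return mean_cm / cm_per_mb * 1_000.0

# The three simulated admixture dates from the text (generations ago):
mean_lengths = {g: expected_tract_kb(g) for g in (250, 150, 37)}
```

Older pulses leave shorter tracts (here ~400 kb for the 250-generation pulse versus ~2.7 Mb for the 37-generation pulse), which is why a mixture of pulse ages is needed to reproduce the observed segment-length distribution.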
Introgression was more intense during two periods (897–1121 CE and 1695–1828 CE), which coincide with the Medieval Climate Anomaly (900–1200 CE) 18 and the Dzungar–Qing Wars (1687–1758 CE) 19 . These periods of intense introgression are most likely due to increased mortality of livestock during these difficult times that forced yak herders to breed all of the females available to restore their herds, including backcross-derived animals ( Supplementary Note ). To identify phenotypes that have undergone positive selection, we next mined the gene content of 365 intervals defined by the smallest exogenous segment that was shared for each region showing introgression in at least 1% of the investigated haplotypes ( Supplementary Table 8 ). Functional annotation of the resulting 1,311 transcripts using DAVID identified a major enrichment for genes involved in sensory perception, cognition and neurological system processes (Benjamini-corrected P < 1.0 × 10 −8 ; Supplementary Table 9 ), which are known to be key domestication targets 20 , 21 . Furthermore, similar results were obtained with different thresholds for the percentage of introgression and interval size, indicating that selection on these genes, which most likely contributed to taming the ferocious temper of yaks, has been a common and continuous process since the first hybridizations ( Supplementary Table 9 ). In total, we were able to retrieve 443 genes involved in nervous system development and function in 208 intervals after performing complementary gene set enrichment analyses and literature review (Online Methods ). These comprised genes related to nervous system development and function, synaptic transmission, sensory perception and a large variety of disorders affecting learning ability, social behavior, fear response and orientation in space in humans and animals ( Fig. 3a and Supplementary Tables 8–12 ). 
Among these genes, we highlight ITGA9 , a susceptibility gene for bipolar affective disorder 22 , which showed the highest level of introgression in the yak genome with 56% (85/152) bovine alleles. We also highlight the presence of nine genes from the canonical glutamate receptor signaling pathway, including each of the four subtypes of receptor for this molecule, which is the principal excitatory neurotransmitter in the brain 23 ( Fig. 3b,c ). Significantly ( P < 0.01) enriched canonical pathways, according to Ingenuity Pathway Analysis, also included (i) NAD biosynthesis from tryptophan, (ii) lysine degradation II and V, which produce L -glutamate, (iii) the visual cycle involved in sensory transduction of light in the retina, (iv) sphingosine-1-phosphate signaling, which participates in neuromodulation 24 , (v) neuropathic pain signaling in dorsal horn neurons, and (vi) Huntington's disease ( Fig. 3b ). Figure 3: Bovine introgressed segments in yaks show a major enrichment for genes related to nervous system development and function. ( a ) Word cloud illustrating major enrichment in bovine introgressed segments for genes that are related to nervous system development and function, behavior, neurological diseases and psychological disorders in yaks, as shown by Ingenuity Pathway Analysis. We considered a total of 1,311 genes that were associated with 365 intervals showing at least 1% (i.e., 2 alleles) bovine genome introgression in our panel of 76 yaks ( Supplementary Table 8 ) for Ingenuity Pathway Analysis. A unique keyword was attributed to each significantly enriched pathway in “Diseases and Bio Function analysis” according to “Diseases or Functions Annotation” (Online Methods ). Keywords referring to ubiquitous cellular or organismal processes are not represented here to avoid overloading the cloud. Font size is proportional to the number of occurrences of each keyword.
( b ) Venn-like diagram presenting the canonical pathways that are significantly enriched ( P < 0.01) in introgressed segments according to Ingenuity Pathway Analysis and lists of associated genes ( Supplementary Table 11 ). The group of enriched canonical pathways consists of five pathways that are related to nervous system development, function or pathologies and two pathways resulting in the production of L -glutamate, which is the principal excitatory neurotransmitter in the brain. ( c ) Localization at the neuron synapse level of the main proteins involved in the canonical glutamate receptor signaling pathway (adapted from Ingenuity Pathway Analysis). Proteins encoded by the genes listed in b are highlighted in pink. GRIA4, glutamate receptor, ionotropic, AMPA 4; GRIK3, glutamate receptor, ionotropic, kainate 3; GRIN2A, glutamate receptor, ionotropic, NMDA 2A; GRIN3A, glutamate receptor, ionotropic, NMDA 3A; GRIP1, glutamate-receptor-interacting protein 1; GRM4, glutamate receptor, metabotropic 4; CAMK4, calcium/calmodulin-dependent protein kinase IV; DLG4, discs, large homolog 4 ( Drosophila ); NMDA, N -methyl- D -aspartate; AMPA, α-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid; CALM, belonging to the calcium/calmodulin-dependent protein kinase group, and PSD-95, postsynaptic density protein 95, are respectively encoded by CAMK4 and DLG4 ; EPSPs, excitatory postsynaptic potentials. Note the presence of each of the four subtypes of glutamate receptor (ionotropic, AMPA; ionotropic, kainate; ionotropic, NMDA; metabotropic) among these proteins. Full size image At the individual level, each yak carried bovine alleles for numerous genes involved in nervous system development and function (mean ± SE = 33.03 ± 10.05 alleles; Supplementary Table 6 ), although most of them had moderate allele frequencies (median = 0.0461; Supplementary Table 8 ). Moreover, none of the genes we investigated exhibited deleterious mutations ( Supplementary Note ). 
These results are in line with previous studies showing that affective disorders in humans and anxiety behaviors in different animal species have a polygenic basis and rely in part on the same genes (for example, see refs. 25 , 26 , 27 , 28 , 29 ). They further support our assumption that the specific gene enrichment observed in introgressed regions in the yak genome is due to selection on behavioral traits. Finally, with the exception of two regions encompassing ABHD4 and MYO6 , none of the 365 segments introgressed in our panel colocalized with 182 recently reported signatures of domestication in yak 30 , confirming that introgressed segments constitute a source of favorable polymorphism, especially for genes that do not possess similar variants in yak. This is, for example, the case for a KIT duplication causing color-sidedness in cattle 7 , 8 , which segregates in Mongolian yaks ( Supplementary Fig. 6 ), and presumably is the case for a new mutation associated with the polled phenotype. To verify this hypothesis, we modeled polledness as a quantitative trait in our panel ( Supplementary Fig. 7 ) and mapped the locus associated with this phenotype to the beginning of chromosome 1 ( P = 9.7 × 10 −9 ; 95% confidence interval (CI) = 1.88–2.20 Mb; Fig. 4a ) within a bovine introgressed segment ( Fig. 4b and Supplementary Table 8 ). Between positions 1,809,313 and 2,627,891 bp, we identified 1,024 sequence variants that were homozygous in YAK13, which is homozygous polled, and absent from the horned YAK40. Nearly all of these variants were retrieved in the genome of one polled Mongolian Turano cow (TM29), confirming the bovine origin of the polled-associated mutation in yak. Figure 4: Introgression of a novel and complex mutation at the POLLED locus from bovines causes polledness in Mongolian yaks. ( a ) Mapping of the POLLED locus with Illumina BovineHD BeadChip genotyping data from 36 polled and 40 horned animals. Polledness is modeled as a quantitative trait. 
The x axis shows genomic positions, and the y axis presents significance measured using the likelihood-ratio test (LRT) statistic. ( b ) Bovine introgression plot for the first 20 Mb of chromosome 1 based on WGS data. Orange and turquoise dots show the mean frequency of bovine and yak alleles at positions of yak-specific variants in a 70-kb sliding window. The background of each interval is shaded according to read depth, the mean number of reads per 70-kb window, ranging from 0 (dark gray) to >40 (white). Introgressed intervals are identified by a break in the red line (circular binary segmentation of mean allele frequency) and a drop of yak allele frequency below 0.5. Note that YAK13 is homozygous for a bovine introgressed segment encompassing the mapping interval of the POLLED locus ( P = 9.7 × 10 −9 ; 95% CI = 1.88–2.20 Mb). This result is independently supported by a reduction in the divergence of the YAK13 genome sequence from the UMD3.1 bovine reference sequence in the POLLED region (0.28%) between positions 1,809,313 and 2,627,891 bp as compared with the average divergence of 1.08% exhibited by both yaks at the genome level. ( c ) Schematic presenting the nature and location of the three different mutations identified at the POLLED locus in the bovine allele as compared with the wild-type allele. The region displayed ranges from 1,690,000 bp to 2,090,000 bp on chromosome 1 (indicated by dashed lines in b). Red boxes represent segments that are duplicated in the Celtic, Friesian and Mongolian alleles, whereas light and dark gray boxes represent the original segments. Note that none of the three POLLED mutations affect coding regions and that the molecular mechanism underlying polledness remains unknown at the present time. ( d ) Details of the complex POLLED Mongolian mutation that results in duplication of an 11-bp motif that is entirely conserved among Bovidae and well conserved among vertebrates ( Supplementary Figs. 10–13 ). 
The region displayed ranges from 1,975,420 bp to 1,976,530 bp on chromosome 1 (indicated by dashed lines in c). Boxes of different color are used to represent segmental duplications. Full size image Genotyping of 12 indels in 604 animals originating from 2 yak and 21 cattle subpopulations ( Supplementary Tables 1, 13 and 14 ) refined the POLLED locus interval to a 121-kb segment (1,889,854–2,010,574 bp) containing 238 variants. Contrasting these variants with ones found in the genomes of 234 bovines originating from Europe ( Supplementary Table 2 ) (ref. 13 ), one horned Japanese Turano bull 20 and TM29, we excluded all but two variants originating from the same microhomology-mediated break–induced replication event: a complex 219-bp duplication–insertion ( P 219ID ) beginning at 1,976,128 bp and a 7-bp deletion and 6-bp insertion ( P 1ID ) located 621 bp upstream of this position ( Supplementary Figs. 8 and 9 ). This rearrangement results in duplication of an 11-bp motif (5′-AAAGAAGCAAA-3′) that is entirely conserved among Bovidae ( Supplementary Figs. 10–13 ) and that is also duplicated in the 80-kb duplication responsible for Friesian polledness 11 . Finally, genotyping of the P 219ID P 1ID rearrangement in yaks and cattle showed perfect association with polledness of Mongolian Turano origin, thus adding this polymorphism as a third allele contributing to the reported allelic heterogeneity at the POLLED locus ( Fig. 4c,d ) 9 . In conclusion, we present the first characterization of bovine introgression in yaks at the genomic scale. We report (i) that Mongolian yaks inherit on average 1.31% of their genome from bovine ancestors after nearly continuous admixture over at least the last 1,500 years and (ii) that introgressed segments are significantly enriched in genes involved in nervous system development and function, which probably have contributed to the taming of yaks. 
We also show introgression of a new mutation that determines a phenotype of primary interest in bovine and yak husbandry: the genetic absence of horns. This study contributes to the emerging picture of the genes and pathways that have been the most affected by domestication and highlights the beneficial role of introgressive hybridization in transferring favorable polymorphisms from one domestic species to another. Methods Animals. In total, 120 yaks and 1,025 cattle from a wide array of breeds originating from Eurasia and Africa, as well as representatives of nine other bovid species, were considered in at least one of the analyses performed in this study. Briefly, they consisted of animals used for mapping of the POLLED locus in yak and Mongolian Turano cattle ( Supplementary Table 1 and Supplementary Fig. 7 ); sets of whole-genome sequences of yak and cattle ( Supplementary Table 2 ) used for introgression analyses ( Supplementary Fig. 1 ) and filtering of candidate mutations; bovid species used for target sequencing and phylogenetic analyses ( Supplementary Tables 3 and 4 ); and sets of Illumina BovineHD chip genotypes used for admixture and mapping analyses ( Supplementary Table 5 ). A large proportion of genotypes and DNA samples used in this study were collected in the course of previous projects that were published in peer-reviewed journals. These studies cited here included a statement of compliance with ethical regulations. Sampling for this study was approved by the ‘Regierung von Oberbayern’ (the Regional Government of Upper Bavaria, Permission No. 55.2-1-54-2532.3-24-12). Furthermore, experiments reported in this work complied with the ethical guidelines of the French National Institute for Agricultural Research (INRA). All samples and analyzed data were obtained with the permission of the breeders, breeding organizations, artificial insemination centers, zoological institutions and research group providers. 
Horned/polled phenotypes and derived genotypes. The polled phenotype is an autosomal dominant trait in cattle 31 and yaks 12 , readily measurable on any animal older than 6 months. Artificial dehorning of yak and cattle is not practiced in the sampling area in Central Asia. Therefore, any polled yak descending from one polled and one horned parent is necessarily heterozygous, i.e., Pp, at the underlying POLLED locus. One horned offspring with confirmed paternity is sufficient to declare a polled parent as Pp. Animals having two polled parents and ten or more consecutive polled offspring with horned mates are declared as homozygous polled, or PP. Similar animals having fewer than ten offspring (all polled) with horned mates are either PP or Pp and were declared as P•. Finally, all horned animals were declared pp. Derived genotypes of the yak animals used for mapping are presented in Supplementary Figure 7 . Whole-genome sequencing of two Mongolian yaks and one Mongolian Turano cow. The genomes of one heterozygous polled Mongolian Turano cow (TM29), one homozygous polled yak (YAK13) and one horned yak (YAK40) were sequenced with Illumina technology. Paired-end libraries were generated according to the manufacturer's instructions using the Rapid DNA library system (NuGen) for animal TM29 and the NEXTflex PCR-Free DNA Sequencing Kit (Bioscientific) for YAK13 and YAK40. Libraries were quantified using the KAPA Library Quantification Kit (Cliniscience), controlled on a High Sensitivity DNA Chip (Agilent) and sequenced on an Illumina HiSeq 1500 with 2 × 110-bp reads (TM29) or on a HiSeq 2000 with 2 × 101-bp reads (YAK13 and YAK40). The average sequence coverage was 8.7×, 13.4× and 14.9×, respectively. Reads were mapped to the UMD3.1 bovine sequence assembly using BWA 32 . Reads with multiple alignments were removed. SNPs and small indels were called using the SAMtools pileup option 33 .
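The genotype-derivation rules for the dominant POLLED locus described above can be captured in a small decision function. This is a minimal sketch: the function name and argument layout are illustrative, not from the paper, and "P*" stands in for the paper's P• symbol.

```python
def derive_polled_genotype(polled, parent_polled, n_horned_offspring,
                           n_polled_offspring_horned_mates):
    """Derive the POLLED genotype of an animal from its phenotype, its
    parents' phenotypes and its offspring, following the
    dominant-inheritance rules described in the Methods.

    polled: True if the animal is polled, False if horned
    parent_polled: (sire_polled, dam_polled) booleans
    n_horned_offspring: horned offspring with confirmed paternity
    n_polled_offspring_horned_mates: consecutive polled offspring from horned mates
    """
    if not polled:
        return "pp"    # polled is dominant: a horned animal carries no P allele
    if not all(parent_polled):
        return "Pp"    # one horned parent forces heterozygosity
    if n_horned_offspring >= 1:
        return "Pp"    # a single horned offspring proves the parent is Pp
    if n_polled_offspring_horned_mates >= 10:
        return "PP"    # declared homozygous polled
    return "P*"        # PP or Pp cannot be distinguished ('P•' in the paper)
```

These derived classes map directly onto the quantitative coding used later for mapping (pp = 0, Pp = 1, PP = 2, P• = 1.5).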
Only variants with a quality score (QUAL) of ≥30 and a mapping quality (MQ) score of ≥30 were kept. Discovery of larger indels was achieved with Pindel 34 . Variants supported by only one read or found in the homozygous state in the three animals were considered possible artifacts and were eliminated. Detection of copy number variation was performed according to Medugorac et al . 9 by calculating coverage ratios between pairs of individuals in dynamic bin sizes of 5,000 reads. YAK13 and YAK40 were each compared to three different cattle WGS sequences in order to call possible CNVs in introgressed regions. Significant CNV results were kept if they overlapped in all three comparisons of one yak and three cattle WGS sequences. Signals that were caused by mapping of apparently repetitive sequences that had very high coverage (>1,000-fold) were manually removed. The reliability and borders of the retained CNVs were verified using the Integrative Genomics Viewer (IGV) 35 and paired-end information. In the end, only one polymorphism was considered as a true introgressed CNV: the mutation responsible for color-sidedness in bovines that is presented in Supplementary Figure 6 . For this CNV, the log 2 ratios of sequence coverage per 5,000-bp window between the solid colored YAK40 and the color-sided YAK13 were plotted using R and the average ratios were segmented using the circular binary segmentation implementation in the DNAcopy package (v1.14.0) from Bioconductor. Introgression analysis in WGS data of three yaks and one Mongolian Turano cow. The detection of bovine genome segments in WGS data from YAK13 and YAK40 and the reference genome 5 (YAKQIU) was conducted as follows. First, variants that were homozygous for the alternate allele in YAKQIU and absent from TM29 and 234 additional bovine genomes 13 were identified and considered as yak specific. 
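The yak-specific variant definition above (homozygous for the alternate allele in YAKQIU, absent from TM29 and the bovine comparison panel) can be sketched as a simple filter. The data layout is an assumption for illustration: genotypes are coded as alternate-allele counts (0/1/2) keyed by position.

```python
def yak_specific_variants(yakqiu_gt, bovine_gts):
    """Return positions usable as yak-specific markers: homozygous for the
    alternate allele in the yak reference individual (YAKQIU) and absent
    from every bovine genome in the comparison panel.

    yakqiu_gt:  {position: alt_allele_count} for YAKQIU (0, 1 or 2)
    bovine_gts: list of such dicts, one per bovine genome
    """
    specific = []
    for pos, gt in yakqiu_gt.items():
        if gt != 2:                       # must be homozygous alternate in YAKQIU
            continue
        if any(b.get(pos, 0) > 0 for b in bovine_gts):
            continue                      # allele observed in at least one bovine
        specific.append(pos)
    return sorted(specific)
```

The same exclusion logic, with roles reversed, underlies the later filtering of polled-candidate variants against horned genomes.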
Then, the mean frequency of yak and bovine alleles for each of these variants was estimated for sliding windows of 70 kb along the genomes of YAK13, YAK40 and TM29. This window size corresponds to half of the expected mean size of segments that would have been introgressed from the earliest possible hybridization between cattle and yak ( Supplementary Note ). To detect introgression in the yak reference sequence itself, the number of homozygous and heterozygous genotypes for yak-specific polymorphisms was estimated in 70-kb windows and compared to the expected numbers based on genome-wide observations ( Fig. 1 ). Introgressed intervals were identified by circular binary segmentation (CBS) of mean allele frequency and a drop of yak allele frequency below 0.5. CBS is implemented in the R package DNAcopy (v1.14.0) from Bioconductor. Frequencies of yak-specific alleles in both TM29 and YAKQIU served as a control. Conventional Sanger sequencing of target genomic regions with cattle ancestry in the reference yak genome. Two regions suggesting homozygous cattle introgression in the yak reference genome 5 (chr9:68,495,000–70,115,000 and chr25:17,345,000–19,995,000; Fig. 1 and Supplementary Fig. 1 ) were selected to test the reliability of our approach. For each region, two PCR products were amplified in 13 animals representing 12 bovid species ( Supplementary Tables 3 and 4 ). PCR was performed using the Go-Taq Flexi system (Promega) according to the manufacturer's instructions on a Mastercycler pro thermocycler (Eppendorf). Amplicons were purified and bidirectionally sequenced by Eurofins MWG using conventional Sanger sequencing. The resulting sequences were aligned with the corresponding sequences from the yak reference genome using the ClustalW algorithm 36 as implemented in the MEGA software package version 6.06 (ref. 37 ). Then, the sequences were trimmed to equal lengths for most animals and fragments.
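The 70-kb sliding-window allele-frequency scan described above can be sketched as follows. This is a minimal illustration with an assumed data layout; the published analysis additionally smoothed the window means with circular binary segmentation (DNAcopy), which is omitted here.

```python
from collections import defaultdict

def window_yak_allele_freq(variants, window=70_000):
    """Mean yak-allele frequency at yak-specific sites in fixed-size
    windows; windows where the yak allele frequency drops below 0.5 are
    flagged as candidate bovine-introgressed intervals.

    variants: iterable of (position, yak_allele_count, total_allele_count)
              tuples for one chromosome of one sequenced animal.
    """
    acc = defaultdict(lambda: [0.0, 0])          # window index -> [sum, n]
    for pos, yak_count, total in variants:
        w = pos // window
        acc[w][0] += yak_count / total
        acc[w][1] += 1
    freqs = {w: s / n for w, (s, n) in acc.items()}
    candidates = sorted(w for w, f in freqs.items() if f < 0.5)
    return freqs, candidates
```

Frequencies computed this way for the control genomes (TM29 and YAKQIU) provide the expected behavior of non-introgressed and reference material, respectively.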
Finally, phylogeny was inferred based on a total of 2,191 nt of sequence using the neighbor-joining method 38 implemented in MEGA software 37 . The percentage of replicate trees in which the associated taxa clustered together was determined by the bootstrap test 39 (1,000 replicates). A similar approach was used to study the MHC locus and to estimate the false discovery rate of the RFMix analysis, as described in the Supplementary Note . Analysis of Illumina HD genotypes. General information. Illumina BovineHD BeadChip genotypes from 467 Bovidae animals were considered. These consisted of 76 yaks (36 polled and 40 horned; Supplementary Fig. 7 ), a panel of 384 individuals representative of the worldwide diversity of cattle and assumed to harbor no yak ancestry ( Supplementary Note and Supplementary Table 5 ), and representatives of six Bovini species (two gaur, one wood bison, one European bison, one banteng, one water buffalo and one nilgai; Supplementary Table 3 ). Of note, the panel of 384 cattle comprised 11 polled and 14 horned Turano cattle from Mongolian and Yakutian breeds ( Supplementary Tables 1 and 5 ). A total of 697,172 SNP markers were successfully genotyped in three to six Bovini species. Only 42,230 SNPs (5.43%) were informative in yaks. Haplotypes were inferred, and missing genotypes were imputed using hidden Markov models (software package Beagle) 40 and three cohort types: trios (two parents, one offspring), pairs (one parent, one offspring) and unrelated animals. Marker order was based on release UMD3.1 of the Bos taurus genome ( ftp://ftp.cbcb.umd.edu/pub/data/assembly/Bos_taurus/ ). Inferring maternal and paternal phylogenies. To avoid artifacts, only SNPs from the mitochondrial genome and Y chromosome showing high call rates (>99%) and complete homozygosity within each single animal ( n = 245/343 and 921/1,224 markers, respectively) were used.
Moreover, only animals with less than 5% missing genotypes for mitochondrial or Y-chromosome markers were considered. Maternal and paternal phylogenies were constructed with the neighbor-joining method 38 implemented in MEGA software 37 version 6.06. The percentage of replicate trees in which the associated taxa clustered together was determined by the bootstrap test 39 (1,000 replicates). Introgression analysis in 76 Mongolian yaks. Yak-specific alleles were inferred from homozygous SNPs located in genomic regions of YAK13, YAK40 or YAKQIU that were free of cattle ancestry, based on previous analyses of WGS data ( Supplementary Fig. 1 ). Since WGS data did not provide clear introgression status in two specific regions (chr22:31,682,450–31,842,000 and chr23:24,661,105–29,153,851; Supplementary Fig. 1 ), we used the genotypes of 76 yaks to define the major allele (frequency ≥0.90) as yak specific. For all remaining SNPs (<1.00%), the major allele (frequency ≥0.75) in six Bovini species was considered as ancestral and yak specific. Then, a rapid and robust forward–backward algorithm implemented in the software package RFMix 2 , 15 was used to screen for the presence of cattle haplotypes in 76 Mongolian yaks. This algorithm uses designated reference haplotypes to infer local ancestry in designated admixed haplotypes, which requires the inclusion of pure yak and pure cattle in the analysis. Since there is neither genetic nor historical support for introgression of yak genes into the cattle genome, we considered the 384 cattle ( Supplementary Table 5 ) as a reference panel assumed to harbor no yak ancestry. On the other hand, we were not able to find a complete yak genome without cattle ancestry, but we detected complete chromosomes or large chromosomal fragments with pure yak ancestry. These chromosomal regions as well as yak-specific alleles were used to create a synthetic pure yak genome (YAKYAK) that served as reference yak in initial RFMix analyses 2 .
For each chromosome, we carried out two rounds of RFMix analyses. The first round used the 384 cattle genomes as a reference cattle population and only YAKYAK as a reference yak population. The admixed sample consisted of all 76 yak genomes. For each chromosome, initial RFMix analyses detected different subsets of yak haploids as pure. These pure yak chromosomes supplemented the YAKYAK chromosome in the second round of RFMix analyses to produce final results for a specific chromosome. The RFMix program performs forward–backward analyses in non-overlapping windows of predefined size. In some situations, such as for short segments located unfavorably across a window transition, RFMix occasionally detected signatures only in the more informative of the two windows, or in neither. To deal with these problems, we set the window size at 0.2 cM and performed four overlapping RFMix analyses ( Supplementary Fig. 4 ). Source, date and number of admixture events. ChromoPainter 41 was used to decompose the chromosomes of each of the 76 Mongolian yaks as a series of haplotypic chunks inferred to be shared with at least one of the 384 cattle representing 24 breeds. In theory, given a single admixture event, ancestry chunks inherited from each source have an exponential size distribution, resulting in an exponential decay of these co-ancestry curves 41 , 42 . The shape of the decay curve in different groups enables estimation of admixture dates 42 and determination of the recipient and donor groups involved in asymmetric admixture events. Multiple admixture times result in a mixture of exponentials 42 , which can be tested by comparing the fit of a single exponential decay rate to a mixture of rates. Inferences of the haplotypic makeup of admixing source groups as well as of the admixture date(s) were carried out using the GlobeTrotter 42 , 43 method and complemented by simulation studies as described in the Supplementary Note .
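The dating logic behind the exponential decay of co-ancestry curves can be illustrated with a toy example. Under a single-pulse model, chunk lengths from an admixture event g generations ago are approximately exponentially distributed with rate g per Morgan, so g can be estimated as the reciprocal of the mean chunk length in Morgans. This sketch is only an illustration of that principle, not the ChromoPainter/GlobeTrotter implementation.

```python
import random

def estimate_generations(chunk_lengths_cm):
    """MLE of generations since a single admixture pulse: chunk lengths
    ~ Exp(rate = g) in Morgans, so g ≈ 1 / mean(length in Morgans)."""
    mean_morgans = sum(chunk_lengths_cm) / len(chunk_lengths_cm) / 100.0
    return 1.0 / mean_morgans

# Toy check: simulate chunk lengths from an event 100 generations ago
random.seed(1)
g_true = 100
chunks_cm = [random.expovariate(g_true) * 100 for _ in range(20_000)]
g_hat = estimate_generations(chunks_cm)
```

Multiple admixture pulses would make the observed lengths a mixture of exponentials, which is why comparing a single-rate fit to a mixture-of-rates fit tests for repeated admixture.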
Inference of the source of admixture was complemented by phylogenetic analyses of pure and admixed haploids. For 139 chromosomal segments introgressed in 10 or more yaks ( Supplementary Table 8 ), 10 pure and 10 introgressed haploids were randomly selected to constitute two yak groups. Similarly, 20 cattle groups representing 20 breeds with four or more animals genotyped with the BovineHD chip were constructed ( Supplementary Table 5 ). These segments were divided into a total of 3,076 non-overlapping blocks of four SNPs each, with inter-marker distances of less than 25 kb between neighboring SNPs. Each block was considered as a multiallelic marker in phylogenetic analysis to reduce ascertainment bias 44 . The proportion of shared alleles between individuals, PS 45 , was converted to a genetic distance ( D PS = −ln(PS)). A neighbor-joining tree ( Supplementary Fig. 5 ) reflects the averaged individual distances between groups and was constructed with the SplitsTree4 program 46 . Annotation of the gene content of the introgressed segments. For 365 regions showing a minimum of two introgressed segments among the 76 animals studied, we defined the smallest region shared by these segments. To assess the gene content of the resulting intervals, we used the "RefSeq Genes" track from the UCSC Genome Browser as a primary resource. We also used the "Non-cow RefSeq genes," "Cow mRNAs from GenBank" and "Cow ESTs that have been spliced" tracks to recover protein-coding genes that might have been missed during annotation of the UMD3.1 bovine sequence assembly. These consisted of genes annotated in at least human and mouse with no bovine RNA alignments in the orthologous region or genes with bovine RNA alignments corresponding to at least one gene annotated in human or mouse in the orthologous region.
Intervals that did not contain genes were attributed the name of the closest gene in 5′–3′ orientation and located at a maximum of 500 kb downstream of its borders according to the same orientation. Gene set enrichment analyses. Gene set enrichment analyses were carried out with five software packages using different methods and sources of information, i.e., Gene Ontology classes for DAVID 6.7 and PANTHER, bibliographic and experimental data for Genetrail2 and Ingenuity Pathway Analysis, and Mammalian Phenotype ontology (level 3) from Mouse Genome Informatics, for the specific analysis we performed with Enrichr 47 , 48 . Since these analyses produced comparable results, and for the sake of simplicity, we selected only two of them to be presented in this study. To provide a first overview of the over-represented groups of genes and to test their reliability, we performed different Gene Ontology (GO) term enrichment analyses with DAVID using different lists of genes located in chromosomal regions detected as introgressed from cattle to yaks by RFMix analyses (results are presented in Supplementary Table 8 ). Then, we used Ingenuity Pathway Analysis for the precision of its annotations. We focused on "Top Canonical Pathways" and on "Diseases and Bio Functions." Only canonical pathways or annotations with P < 10 −2 were retained. Annotations related to cancer and drug metabolism, which were not relevant for this study, were not considered. In addition to the IPA annotations, we attributed a unique keyword to each significantly enriched pathway according to "Diseases or Functions Annotation" in order to draw a word cloud. Particular attention was paid to attribute keywords related to subcellular portions, cell types and organs rather than to general processes.
Keywords that appeared only once were finally regrouped with higher-order items (for example, a cell type was changed to an organ, or a process was changed to the category defined by IPA) or with the predefined IPA categories (results are presented in Supplementary Table 10 ). Finally, although they are not presented in detail, results from the three other analyses were used to complete the list of genes involved in nervous system development and function presented in Supplementary Tables 8 and 12 . Mapping of the POLLED locus in yaks sampled in Europe and Mongolia. Mapping of the POLLED locus was performed using a combined linkage disequilibrium and linkage analysis (cLDLA) with horn status modeled as a quantitative trait (pp = 0, Pp = 1, PP = 2 and P• = 1.5). A genomic relationship matrix ( G ) 49 was estimated and its inverse ( G −1 ) was used to correct for population structure and possible polygenic effects in the model of the later QTL mapping. Identical-by-descent (IBD) probabilities for pairs of haplotypes 50 were estimated for sliding windows of 40 SNPs and summarized into a diplotype relationship matrix ( D RM ), which is computed in a similar way to the additive genotype relationship matrix ( G RM ) 51 . cLDLA mapping of polledness was carried out with a procedure similar to that reported in Meuwissen et al . 52 , which considers random QTL and polygenic effects. Variance component analysis in the middle of each of the 40-SNP sliding windows was performed with the ASReml package and the mixed linear model y = Xβ + Zu + Zq + e, where y is a vector of horn status, β is a vector of fixed effects (including overall mean μ ) with incidence matrix X, u is a vector of n random polygenic effects for each animal with u ∼ N (0, G σ²u), q is a vector of random additive genetic effects due to the POLLED locus with q ∼ N (0, D RM p σ²q), where D RM p is the diplotype relationship matrix at position p of the putative POLLED locus, e is a vector of random residual effects with e ∼ N (0, I σ²e), where I is an identity matrix, and Z is an incidence matrix relating animals to records.
The random effects u , q and e were assumed to be uncorrelated and normally distributed, and their variances (σ²u, σ²q and σ²e) were simultaneously estimated using ASReml. Using the logarithm of the likelihood estimated by ASReml for the model with (log( L P )) and without (log( L 0 ); corresponding to the null hypothesis) POLLED locus effects, we calculated the likelihood-ratio test statistic LRT = −2(log( L 0 ) − log( L P )), which is known to be χ 2 distributed with 1 degree of freedom 53 . Accordingly, an LRT value higher than 10.8 was considered statistically significant (equivalent to P < 0.001). Fine-mapping and identification of the Mongolian POLLED mutation. The first step consisted of selecting sequence variants that were homozygous in the homozygous polled YAK13, absent from the horned YAK40, and located between positions 1,809,313 and 2,627,891 bp on chromosome 1. This region comprises the 95% confidence interval (1.88–2.20 Mb) obtained with the QTL mapping approach and corresponds to a bovine chromosomal segment introgressed in yak ( Fig. 4b and Supplementary Table 8 ). Then, to narrow down the candidate region, 120 yaks and 484 Eurasian taurine cattle ( Supplementary Table 1 ) were genotyped for 12 indels using standard PCR and agarose gel or capillary (ABI PRISM 377 and 3100 Genetic Analyzer, Applied Biosystems) electrophoresis ( Supplementary Tables 13 and 14 ). Of note, the same animals were also genotyped for the Celtic ( P 202ID ) and Friesian ( P 80kbID ) polled-associated mutations 9 , 10 , 11 , and these were excluded as possible candidates for polledness of Mongolian origin. Genotyping for 12 indels ( Supplementary Tables 13 and 14 ) excluded all but two indels (LMP04 and LMP12) as candidate mutations, and haplotype analyses reduced the POLLED locus interval to a 121-kb segment (1,889,854–2,010,574 bp) containing 238 variants.
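The likelihood-ratio test used for cLDLA mapping above can be checked directly: for 1 degree of freedom, the χ² survival function reduces to erfc(√(LRT/2)), so the threshold of LRT = 10.8 corresponds to P ≈ 0.001. A minimal sketch using only the Python standard library (the likelihood values are hypothetical):

```python
import math

def lrt_pvalue(log_l0, log_lp):
    """P value of the likelihood-ratio test LRT = -2(log L0 - log LP),
    chi-squared distributed with 1 degree of freedom; the survival
    function for 1 df equals erfc(sqrt(LRT / 2))."""
    lrt = -2.0 * (log_l0 - log_lp)
    return math.erfc(math.sqrt(lrt / 2.0))

# A model improving the log-likelihood by 5.4 units gives LRT = 10.8,
# i.e. P close to the 0.001 significance threshold used above
p = lrt_pvalue(-100.0, -100.0 + 5.4)
```

Identical null and alternative likelihoods give LRT = 0 and P = 1, as expected for a window with no POLLED-locus effect.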
Considering that the Mongolian polled-associated mutation occurred in Turano cattle and was absent even in European polled cattle ( Supplementary Note ), these variants were subsequently filtered to retain only those that were heterozygous in the heterozygous polled Mongolian Turano cow TM29 and absent from the genomes of one horned Japanese Turano bull 54 and 234 bovines originating from Europe ( Supplementary Table 2 ) (ref. 13 ). Finally, to ensure that we did not miss any candidate variants for polledness, we performed two independent verifications. We performed a new detection of structural variants in the refined 121-kb polled interval using DELLY 55 , together with a visual examination of the whole-genome sequences of YAK13, YAK40 and TM29 in the same interval using IGV 35 . We did not detect new candidate polymorphisms. Considering that there is no gap in the UMD3.1 bovine genome sequence assembly and in the WGS data for the homozygous polled yak (YAK13) in this interval, we can claim that we did not miss any candidate variant with our approach. Analysis of sequence conservation around the Mongolian POLLED mutation in mammals. Regions orthologous to the segments duplicated in the Mongolian POLLED mutation were retrieved for 34 eutherian mammals using the EPO multiple-sequence alignment from Ensembl. A consensus sequence and a sequence logo were generated using Multalin 56 and WebLogo 57 , respectively. After identification of a well-conserved 11-bp motif, a novel consensus sequence and a novel sequence logo were generated. Details on the 11-bp orthologous sequences are presented in Supplementary Figures 11 and 12 . Analysis of sequence conservation around the Mongolian POLLED mutation in Bovidae. The region encompassing the Mongolian POLLED mutation was PCR amplified from the genomic DNA samples of nine bovid species ( Supplementary Fig. 3 and Supplementary Table 3 ). Two individuals were used for each species.
PCR primers were manually designed in regions that were conserved between the bovine UMD3.1 and sheep Oar_v3.1 genome assemblies ( Supplementary Table 15 ). PCR reactions and Sanger sequencing were performed as previously described. The corresponding regions in cattle and yak were obtained from the bovine UMD3.1 genome assembly and from YAK40 whole-genome sequencing data, respectively. Multispecies alignment was generated with ClustalW software (version 2.0.1) (ref. 36 ). Data availability. Data are deposited in the NCBI Sequence Read Archive (SRA) under project accession PRJNA279385 .
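The column-wise consensus step underlying the conservation analyses above (as produced by tools such as Multalin from a multiple alignment) can be sketched as follows. The majority threshold and the absence of gap handling are simplifying assumptions for illustration.

```python
from collections import Counter

def consensus(aligned_seqs, threshold=0.5):
    """Column-wise consensus of equal-length aligned sequences: emit the
    majority base when its frequency exceeds the threshold, else 'N'."""
    out = []
    for col in zip(*aligned_seqs):
        base, count = Counter(col).most_common(1)[0]
        out.append(base if count / len(aligned_seqs) > threshold else "N")
    return "".join(out)
```

Applied to the orthologous sequences of the duplicated segments, such a consensus makes conserved motifs (like the 11-bp motif reported above) stand out as runs of unambiguous bases.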
Though placid enough to be managed by humans, yaks are robust enough to survive at 4000 meters altitude. Genomic analyses by researchers of Ludwig-Maximilians-Universitaet (LMU) in Munich show that yak domestication began several millennia ago and was promoted by repeated crosses with cattle. The first systematic genome-wide comparison of the genetic heritage of yaks and cattle shows that about 1.5% of the genome of Mongolian yaks is derived from domesticated cattle. While male hybrids are sterile, hybrid females can be backcrossed to male yaks for several generations, which allows for the stable introgression of short regions of bovine chromosomes into the yak genome. The results of the new study suggest that yak hybridization began thousands of years ago. Dr. Ivica Medugorac, who heads a research group in population genomics at the Chair of Animal Genetics and Husbandry at LMU, is the first and corresponding author on the new paper, which appears in the journal Nature Genetics. "Our results indicate that hybridization between yaks and cattle began more than 1500 years ago, and has continued with varying intensity ever since," Medugorac says, and points out that written records also testify to early hybridization of yaks by Mongolian breeders. In collaboration with Dr. Aurélien Capitan of the Université Paris-Saclay, Dr. Stefan Krebs of the Laboratory for Functional Genome Analysis at LMU's Gene Center and colleagues from other European, American and Mongolian institutions, Medugorac has mapped the distribution of cattle genes in the yak genome. "Many of the genetic variants in the yak that can be traced back to cattle are found at gene loci that are known to play roles in the development and function of the nervous system. They have an impact on sensory perception, cognition and social behavior. 
Evidently, over a period of several thousand years, Mongolian breeders succeeded in speeding up the domestication of the yak by crossing them with cattle, which had been domesticated thousands of years before," he explains. Furthermore, the traits that enable yaks to survive at high altitudes, in mountain ranges such as the Altai, the Pamirs and the Himalayas, have obviously been retained during this process. In the course of the study, the researchers identified a gene variant in Mongolian cattle and yaks that is responsible for the loss of horns. "We were able to show that this variant had been introduced into yaks from the domesticated Mongolian Turano cattle long ago," Medugorac says. Lack of horns (known as 'polledness') is, however, only one of the traits with which yak breeders attempted to tame the ferocious temper of the yaks. Interestingly, the polled variant in the Mongolian Turano cattle differs from the mutations known to be responsible for polledness in European cattle, which had previously been molecularly characterized by Medugorac's group in 2012 and 2014. These findings are already being exploited by breeders worldwide to select for polled cattle in order to avoid the painful procedure of dehorning.
nature.com/articles/doi:10.1038/ng.3775
Biology
Flying-foxes' extraordinary mobility creates key challenges for management and conservation
Justin A. Welbergen et al. Extreme mobility of the world's largest flying mammals creates key challenges for management and conservation, BMC Biology (2020). DOI: 10.1186/s12915-020-00829-w Journal information: BMC Biology
http://dx.doi.org/10.1186/s12915-020-00829-w
https://phys.org/news/2020-08-flying-foxes-extraordinary-mobility-key.html
Abstract Background Effective conservation management of highly mobile species depends upon detailed knowledge of movements of individuals across their range; yet, data are rarely available at appropriate spatiotemporal scales. Flying-foxes ( Pteropus spp.) are large bats that forage by night on floral resources and rest by day in arboreal roosts that may contain colonies of many thousands of individuals. They are the largest mammals capable of powered flight, and are highly mobile, which makes them key seed and pollen dispersers in forest ecosystems. However, their mobility also facilitates transmission of zoonotic diseases and brings them in conflict with humans, and so they require a precarious balancing of conservation and management concerns throughout their Old World range. Here, we analyze the Australia-wide movements of 201 satellite-tracked individuals, providing unprecedented detail on the inter-roost movements of three flying-fox species: Pteropus alecto , P . poliocephalus , and P . scapulatus across jurisdictions over up to 5 years. Results Individuals were estimated to travel long distances annually among a network of 755 roosts ( P . alecto , 1427–1887 km; P . poliocephalus , 2268–2564 km; and P . scapulatus , 3782–6073 km), but with little uniformity among their directions of travel. This indicates that flying-fox populations are composed of extremely mobile individuals that move nomadically and at species-specific rates. Individuals of all three species exhibited very low fidelity to roosts locally, resulting in very high estimated daily colony turnover rates ( P . alecto , 11.9 ± 1.3%; P . poliocephalus , 17.5 ± 1.3%; and P . scapulatus , 36.4 ± 6.5%). This indicates that flying-fox roosts form nodes in a vast continental network of highly dynamic “staging posts” through which extremely mobile individuals travel far and wide across their species ranges. 
Conclusions The extreme inter-roost mobility reported here demonstrates the extent of the ecological linkages that nomadic flying-foxes provide across Australia’s contemporary fragmented landscape, with profound implications for the ecosystem services and zoonotic dynamics of flying-fox populations. In addition, the extreme mobility means that impacts from local management actions can readily reverberate across jurisdictions throughout the species ranges; therefore, local management actions need to be assessed with reference to actions elsewhere and hence require national coordination. These findings underscore the need for sound understanding of animal movement dynamics to support evidence-based, transboundary conservation and management policy, tailored to the unique movement ecologies of species. Background Conventional conservation approaches, which typically view species as organized around discrete local populations, are inadequate for highly mobile species [ 1 ], particularly in the context of environmental change [ 2 ]. Highly mobile species often require multiple habitats to obtain different resources at different stages of their life cycles, and their persistence depends on the availability and accessibility of the requisite suite of habitats [ 3 , 4 ]. The unpredictable movements of nomadic species make it particularly difficult to decide where and how to act to mitigate threatening processes [ 5 ]. This can be further complicated when such species cross jurisdictional boundaries within or between countries [ 6 ], making a unified program of conservation management much more difficult to achieve. For effective conservation management, it is essential to have a robust understanding of the movement ecology of highly mobile species, but this can only be accomplished by following numerous individuals within a population, across multiple habitats within the species’ range [ 7 , 8 ]. Australian flying-foxes ( Pteropus spp.) 
are large bats that forage by night on floral resources and rest by day in arboreal roosts that may contain colonies of many thousands of individuals [ 9 ] with a complex social architecture [ 10 , 11 ]. Roost locations can be stable for decades [ 12 ], and while “traditional” sites are mostly occupied seasonally, more recent, urban roosts are occupied permanently [ 13 ], albeit with great seasonal variation in local numbers [ 14 ]. The prevailing assumption is that flying-foxes are organized around local “resident” populations that show (seasonal) fidelity to a particular site [ 13 ]. However, like other large pteropodids elsewhere (e.g., [ 15 , 16 , 17 , 18 , 19 , 20 , 21 , 22 ]), Australian flying-fox individuals can be highly mobile, with movements ranging from small relocations within roosts and foraging sites [ 10 ] to nightly foraging trips of up to 80 km [ 23 , 24 ] and long-distance movements of several thousand kilometers [ 25 , 26 ]. Therefore, how flying-fox populations are locally organized is critically dependent on the extent and seasonal dynamics of movements among roosts. To date, as for the other large pteropodids elsewhere (e.g., [ 15 , 16 , 17 , 18 , 19 , 20 , 21 , 22 ]), movement studies of Australian flying-foxes are limited to small samples of radio- [ 23 , 27 , 28 , 29 ] and satellite-tracked [ 21 , 25 , 26 ] individuals, so the extent and seasonal dynamics of movements among roosts have never been formally assessed, hampering effective conservation and management of these ecologically important species. The mobility of flying-foxes is thought to enable them to exploit Australia’s ephemeral floral resources [ 30 ] and makes them key long-distance pollen and seed dispersers [ 31 , 32 , 33 ]. Long-distance seed and pollen dispersal by all four Australian mainland Pteropus species ( Pteropus alecto , P . poliocephalus , P . scapulatus , and P . 
conspicillatus ) would be of crucial conservation significance as it promotes gene flow between impoverished forest patches and facilitates range shifts of forage trees under climate change [ 34 , 35 ]. Knowledge on the extent and seasonal dynamics of movements among roosts is thus key for understanding the linkages that flying-foxes provide in Australia’s contemporary fragmented landscape. The mobility of flying-foxes is also thought to underpin their role in the ecology of several emerging infectious diseases. In Australia, flying-foxes are the recognized natural hosts for various viral agents that threaten livestock and/or human health, including Australian bat lyssavirus (ABLV) [ 36 ], Hendra virus [ 37 , 38 ], and Menangle virus [ 39 ]. The maintenance of infection in natural host populations depends on a source of infection, a continuous supply of susceptible individuals, and adequate contact between infected and susceptible individuals. Thus, the extent and seasonal dynamics of flying-fox movements are expected to shape infection and transmission dynamics at the roost and metapopulation level; further, they define the spatiotemporal scales of exposure and infection potential for susceptible livestock species and humans [ 40 ]. The mobility of flying-foxes further puts them in frequent conflict with humans. Over the last 20 years, Australian flying-foxes have increasingly exploited urban foraging and roosting resources [ 23 , 41 , 42 ]. Many urban areas in eastern Australia now have permanent flying-fox colonies [ 13 ], and this increased urban presence translates to increased interaction with humans, and can provoke negative community sentiment due to objectionable noise, soiling and smell, and impacts on human health [ 43 , 44 , 45 ]. 
The result is often public demands to local councils and elected members of state and federal electorates for aggressive management of urban flying-fox populations, ranging from roost vegetation modification to colony dispersal. Dispersals in particular are predicated on the notion that resident individuals can learn to avoid locations where they are not wanted; however, if colonies are in fact composed of highly mobile individuals that turnover at high rates, this could explain why dispersal actions are commonly met with very limited success [ 46 ]. In summary, despite their key importance for Australia’s fragmented forest ecosystems, flying-foxes are contentious in terms of zoonosis and human-wildlife conflict and so require a precarious balancing of conservation, animal welfare, and human health and amenity concerns. However, the conservation and management of flying-foxes is complicated by their trans-jurisdictional distributions and by conventional notions that they are organized around discrete local populations (colonies). A comprehensive understanding of the extent and seasonal dynamics of flying-fox movements is thus vital for effective trans-jurisdictional conservation and management of the species. In this study, we capitalize on recent advances in satellite tracking technology to investigate the broad-scale inter-roost movement patterns of an unprecedented 201 flying-foxes in eastern Australia. We describe in detail the nature of the continental-scale movements of P . alecto , P . poliocephalus , and P . scapulatus and the differences between these species in terms of local site fidelity and the spatiotemporal extents of their movements among roosts and local jurisdictions. We discuss the implications of our findings for the ecosystem services and zoonotic dynamics of flying-fox populations and for current practices in flying-fox conservation and management. Results A total of 201 transmitters was deployed on 80 P . alecto , 109 P . 
poliocephalus , and 12 P . scapulatus , and tagged individuals were tracked over a maximum period of 60 months (Additional file 1 : Table S1; see Additional file 2 : Video S1 for the animated movements of all 201 tracked individuals, and for each species separately (Additional file 3 : Video S2, Additional file 4 : Video S3, Additional file 5 : Video S4)). Additional file 2: Video S1. Movements of all satellite-tracked ( n = 201) individuals, color-coded by species. Straight-line movements between recorded fixes are interpolated. The box in the top right shows the month and the year, whereas the box in the top left shows the number of individuals being tracked concurrently for each species. Additional file 3: Video S2. Movements of all satellite-tracked P . alecto ( n = 80) individuals only. Additional file 4: Video S3. Movements of all satellite-tracked P . poliocephalus ( n = 109) individuals only. Additional file 5: Video S4. Movements of all satellite-tracked P . scapulatus ( n = 12) individuals only. 
Roost sites Following release from eight colonies, tracked flying-foxes used a total of 755 roost sites, of which 458 (61%) were previously unrecorded. Of these new sites, 123 (26%) were used by multiple tracked individuals and we thus considered them to accommodate previously unidentified flying-fox “colonies” (see the “ Methods ” section). Roost sites spanned a north-south distance of 2698 km (23.7 degrees of latitude) and an east-west distance of 1099 km. P . alecto was identified roosting at 173 sites, P . poliocephalus at 546 sites, and P . scapulatus at 89 sites. One roost site (Hervey Bay Botanic Gardens) was used by tracked individuals of all three species; 47 roost sites were used by only P . alecto and P . poliocephalus , one roost site was used by only P . poliocephalus and P . scapulatus , and three roost sites were used by only P . alecto and P . scapulatus (Fig. 1 ). Fig. 1 Daytime roost sites used by satellite-tracked individuals. a Pteropus alecto . b P . poliocephalus . c P . scapulatus . Dots are colored to indicate which species of tracked animal used the roost sites. See legend for more details. Insets: Maps with shaded areas indicating the IUCN species range in Australia; lines indicate state boundaries. Jurisdictions Tracked flying-foxes roosted in a total of 101 local government areas (LGAs; also known as “councils”) within 131 state electorates and 74 federal electorates. P . alecto individuals roosted in a total of 36 LGAs (average 12.2 year −1 , range 1–9) within 57 (average 13.2 year −1 , range 1–9) state electorates and 33 (average 12.0 year −1 , range 1–8) federal electorates; P . poliocephalus individuals roosted in a total of 85 LGAs (average 8.1 year −1 , range 1–37) within 109 (average 8.2 year −1 , range 1–32) state electorates and 68 (average 6.7 year −1 , range 1–24) federal electorates; P . 
scapulatus individuals roosted in a total of 21 LGAs (average 23.8 year −1 , range 1–9) within 16 (average 21.1 year −1 , range 1–9) state electorates and 6 (average 16.2 year −1 , range 1–4) federal electorates (Fig. 2 ). Fig. 2 The numbers of satellite-tracked individuals found within Australian jurisdictions. a – c Local government areas. d – f State electorates. g – i Federal electorates. Colors denote species: black: Pteropus alecto ; blue: P . poliocephalus ; red: P . scapulatus . Insets: Maps with shaded areas indicating the IUCN species range in Australia; lines indicate state boundaries. Movements among roost sites There was a significant difference in site fidelity (i.e., the inverse of the probability of moving between roosts) between the three species ( P . alecto vs. P . poliocephalus : p = 0.002; P . alecto vs. P . scapulatus : p < 0.001; P . poliocephalus vs. P . scapulatus : p < 0.001), with the best fitting model including the additive effect of species and days since last daytime fix (Additional file 6 : Table S2). P . scapulatus had the highest daily propensity (and thus the lowest daily site fidelity) for moving between roost sites (0.364 ± 0.065 SE), followed by P . poliocephalus (0.175 ± 0.013) and P . alecto (0.119 ± 0.013) (Fig. 3 ). Fig. 3 The probability that an individual changes roost location after 1 day (± 1 SE) for each species (this provides an estimate of the average daily colony turnover rate for each species, assuming the behavior of tracked individuals was representative of that of all individuals within the species). There was a significant difference in the probability that an individual changed roost location after 1 day between the species ( P . alecto vs. P . poliocephalus : p = 0.002; P . alecto vs. P . scapulatus : p < 0.001; P . poliocephalus vs. P . scapulatus : p < 0.001). Distances moved between roost sites The mean estimated distance moved between roost sites was greatest for P . 
scapulatus at 13.57 ± 1.79 km day −1 SE (range 0–162 km day −1 ), followed by 4.26 ± 0.14 km day −1 for P . poliocephalus (range 0–270 km day −1 ), and 1.68 ± 0.14 km day −1 for P . alecto (range 0–92 km day −1 ) (Additional file 7 : Fig. S1), suggesting that the species travel 4956, 1554, and 612 km on average among roost sites annually, respectively. Nevertheless, some individuals are clearly capable of traveling much greater annual distances among roosts. For example (representing the maximum distances traveled by each species), P . alecto (#112209) covered 1551 km between 38 roosts (within 2 LGAs, 2 state electorates, and 2 federal electorates) across 289 tracking days (5.36 km day −1 , and could be scaled up to 1959 km year −1 ); P . poliocephalus (#114099) covered 12,337 km between 123 roosts (within 37 LGAs, 30 state electorates, and 21 federal electorates) across 1629 tracking days (7.57 km day −1 ; 2764 km year −1 ); and P . scapulatus (#112212) covered 3255 km between 36 roosts (within 9 LGAs, 9 state electorates, and 4 federal electorates) across 194 tracking days (16.78 km day −1 , and could be scaled up to 6124 km year −1 ). In reality, flying-foxes likely traveled much greater distances between roosts than the straight-line distances inferred from tracking data suggest, because fixes were only obtained once every 3–10 days and any roosts visited on these “off days” were missed. To account for such missed intervening roost visits, we modeled the expected daily distances moved between roost sites by taking advantage of the variation in the time elapsed between fixes (see the “ Methods ” section). From this, we derived daily inter-roost movement distances of 13.50 ± 3.138 km (x̅ ± 95% CI) for P . scapulatus (= 311–499 km/month; 3782–6073 km year −1 ), 6.62 ± 0.405 km day −1 for P . poliocephalus (= 186–211 km/month; 2268–2564 km year −1 ), and 4.54 ± 0.630 km day −1 for P . alecto (= 117–155 km/month; 1427–1887 km year −1 ) (Additional file 8 : Fig. S2). 
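The reported monthly and annual ranges follow directly from the modeled daily means and 95% confidence half-widths, scaled by 30 days per month and 365 days per year. A minimal arithmetic cross-check (not part of the original analysis pipeline, which was run in R):

```python
# Cross-check of the reported travel-distance ranges: scale the modeled
# daily inter-roost distances (mean and 95% CI half-width, km/day) by
# 30 days per month and 365 days per year.
daily_km = {  # species: (mean, CI half-width)
    "P. alecto": (4.54, 0.630),
    "P. poliocephalus": (6.62, 0.405),
    "P. scapulatus": (13.50, 3.138),
}

for species, (mean, half) in daily_km.items():
    lo, hi = mean - half, mean + half
    print(f"{species}: {lo * 30:.0f}-{hi * 30:.0f} km/month, "
          f"{lo * 365:.0f}-{hi * 365:.0f} km/year")
```

Running this reproduces the ranges quoted in the text (e.g., 1427–1887 km per year for P. alecto).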
While much of the distance traveled represents movements among nearby roosts, some individuals covered extensive latitudinal distances, (repeatedly) traversing substantial proportions of their entire species range. For example, one P . alecto individual (#117723) covered 4.13 degrees of latitude between 23 roosts (within 8 LGAs, 6 state electorates, and 7 federal electorates) across 260 tracking days (Fig. 4 a); one P . poliocephalus individual (#114111) covered 13.78 degrees of latitude between 182 roosts (within 25 LGAs, 24 state electorates, and 17 federal electorates) across 2093 tracking days (Fig. 4 b); and one P . scapulatus individual (#112212) covered 11.77 degrees of latitude between 36 roosts (within 9 LGAs, 9 state electorates, and 4 federal electorates) across 197 tracking days (Fig. 4 c). Fig. 4 Straight-line connections between successive roost fixes of satellite-tracked individuals. a Pteropus alecto . b P . poliocephalus . c P . scapulatus . Paths highlighted by thick lines indicate the tracks of the single individual of each species covering the greatest latitudinal range: black Pteropus alecto individual (#117723), tracked for 7 months from 25 June 2013 to 12 March 2014; blue P . poliocephalus individual (#114111), tracked for 21 months from 11 May 2012 to 12 November 2014; and red P . scapulatus individual (#112212), tracked for 6.5 months from 03 May 2012 to 16 November 2012. Insets: Maps with shaded areas indicating the IUCN species range in Australia; lines indicate state boundaries. Directional movements Evidence of concerted directional movements of animals of each species was mixed. When monthly directional movements among roosts were examined within species, we found that P . alecto individuals were significantly oriented (in a single direction) in 1 of 10 months; P . poliocephalus were significantly oriented in 19 of 41 months, with a single preferred direction occurring in 9 of those months. P . 
scapulatus were significantly oriented (in a single direction) in both of the months where the sample size exceeded 5 (Additional file 9 : Fig. S3). Despite the lack of uniformity of monthly inter-roost movement directions (see above), P . poliocephalus exhibited a significant seasonal north-south signal in their movements overall (dev = 47.1; df = 12, P < 0.001). No significant seasonal movement was detected for P . alecto (dev = 0; df = 0, P = 1). As data for P . scapulatus were limited to a single year, no test for seasonality could be performed; however, like P . poliocephalus , P . scapulatus tended to spend more time further north on average in winter than in summer (Additional file 10 : Fig. S4). Discussion Fundamentally, movement creates challenges for the conservation and management of species, in part because animal movements may transcend the jurisdictional boundaries of single agencies or countries [ 47 , 48 ]. The extreme mobility of flying-foxes vividly illustrates these challenges and highlights the need for a sound understanding of the mechanisms underpinning movement dynamics to support evidence-based wildlife management policy and infectious disease risk mitigation [ 49 ]. Further, we identified a clear spatiotemporal component of movement, roost occupancy and by extension, resource utilization, requiring conservation management and potential disease risk mitigation to be tailored to the unique movement ecology of each species. The scale and scope of our study provides unprecedented detail on the mobility of P . alecto , P . poliocephalus , and P . scapulatus in eastern Australia over more than 23 degrees of latitude and up to 5 years. These findings extend those of previous, smaller studies [ 21 , 25 , 26 ] by demonstrating that flying-foxes undertake frequent inter-roost movements at a regional level as well as longer-range, and at times seasonal, movements. 
The annual inter-roost distances reported in our study rank all three Pteropus among the most mobile mammals on earth, above large-bodied ungulates and most cetaceans, and in the same range as migratory birds [ 50 ], despite our results necessarily underestimating flying-fox movement distances. Our findings further show that the three Pteropus species are composed of highly dynamic populations of individuals moving among roosts in different directions, at different rates (see Electronic SI 1–4). This extreme inter-roost mobility is consistent with genetic work that shows that the species are panmictic across their ranges [ 51 , 52 ], and has important implications for the ecosystem services and zoonotic dynamics of flying-fox populations and for current management practices in flying-fox conservation and human-wildlife conflict mitigation. Implications for the role of flying-foxes in Australia’s fragmented landscape Flying-foxes are thought to be pivotal to forest ecosystems as pollinators and seed dispersers [ 31 , 53 ], providing linkages between habitat fragments across anthropogenic [ 32 ] and natural barriers [ 21 , 54 ]. Australia has lost approximately 38% of native forests since European settlement [ 33 ], and the number and geographic span of roosts identified in this study, together with the scale of movement among them, graphically illustrates the extent of the linkages that flying-foxes provide in Australia’s contemporary fragmented landscape. In Australia, the spatiotemporal distribution of resources is often unpredictable and animals must either be generalists and survive scarcity without relocating or be highly mobile and track resource availability across large spatial scales [ 30 ]. Our finding of no ( P . alecto ) or weakly ( P . poliocephalus and P . 
scapulatus ) concerted monthly movement directions suggests that, at least at these large spatial scales, flying-foxes do not track resources using environmental cues or memory; rather, individuals appear to move in a quasi-random, or Lévy flight-like, fashion, which is thought to be optimal for searching sparsely and randomly distributed targets in the absence of memory [ 55 ]. In this view, individuals wander freely across the species range but slow down in more attractive or “sticky” areas where foraging resources are temporarily plentiful. Here, they combine with other individuals that encounter the resources from elsewhere, and when local resources are depleted, individuals again diffuse nomadically across the range. While largely speculative at this stage, this scenario could account for the local build-up of individuals during mass flowering events [ 56 ] and for the recent increase in the stability of urban roosts [ 41 ], phenomena for which the mechanisms are currently unexplained (but see [ 57 ]). Implications for infection and transmission dynamics of zoonotic agents The differential movement behavior among species is important for better understanding Hendra virus infection and transmission dynamics, and spillover risk. Hendra virus, associated with around 100 fatal equine cases [ 58 ] and four fatal human cases in QLD and NSW, appears to be primarily excreted by P . alecto and P . conspicillatus [ 59 , 60 , 61 , 62 ]. Virus excretion has not been detected in P . poliocephalus or P . scapulatus to date, although anti-Hendra virus antibodies have been reported in both species [ 62 , 63 ]. One explanation for this is that infection is not maintained in P . poliocephalus or P . scapulatus , but that they are periodically exposed to the virus. Urine is the primary route of Hendra virus excretion in P . alecto [ 43 , 64 ], and co-roosting P . poliocephalus or P . scapulatus will have repeated exposure to P . alecto urine. 
Thus, our findings of extensive movements by P . poliocephalus and P . scapulatus and the co-roosting of both with P . alecto suggest a mechanism for interspecies viral exposure. Further, given the lack of evident Hendra virus excretion in P . poliocephalus and P . scapulatus , our findings illustrate the potential for high-risk roosts (in terms of virus excretion and equine exposure risk) where P . alecto are present and low-risk roosts where only P . scapulatus or P . poliocephalus are present (Fig. 1 ). However, such roost risk profiles are not static and are likely determined by roost species composition and modulated by geographic location or latitudinal factors. Indeed, the reported southern range expansion of P . alecto [ 65 ] suggests the likelihood of higher Hendra virus risk roosts further south in coming years. Roost fidelity of P . alecto was higher than that of the other species, which initially appears inconsistent with its Hendra virus reservoir role; however, P . alecto colonies were still expected to turn over at approximately 12% per day (Fig. 3 ), providing enormous potential for transmission between roosts. Thus, the potential for infection to disseminate across the geographic range of the species is clear and underscored by the geographic occurrence of equine cases [ 58 ]. Implications for conservation management We found that roosting at unknown sites was common (458 out of a total of 755 sites used), and we identified 123 previously unknown sites that hosted multiple tracked individuals (and so were classified as “colonies” by our definition). Currently, changes in the abundance and distribution of P . alecto , P . poliocephalus , and P . scapulatus are estimated through Australia’s National Flying-Fox Monitoring Program [ 66 ], and roosting away from known roosts is identified as the major contributor to uncertainty around flying-fox population trend estimates [ 67 , 68 ]. 
We suggest that the accuracy of the monitoring could thus be substantially improved by the annual inclusion of tracked individuals to help reveal previously unidentified roosts. Our findings have particular relevance for the conservation management of P . poliocephalus as this species used 30% of new roosts and 70% of all roosts. P . poliocephalus is classified as “vulnerable to extinction” in The Action Plan for Australian Bats [ 69 ] and listed as “vulnerable” on the IUCN Red List of Threatened Species. Threats include loss of foraging habitat [ 70 ], extreme temperature events [ 71 ], and human persecution [ 41 ]. None of these threats has abated, and they have recently been compounded by the unprecedented bush fires during 2019–2020 that burnt an estimated 5.8 Mha of temperate broadleaf forest within P . poliocephalus ’ range [ 72 ]. It is clear from the vast spatial extent of inter-roost movements reported here (e.g., Fig. 4 b) that successful conservation management of P . poliocephalus (and other flying-foxes) must be enacted across the entire species range. Implications for human-wildlife conflict mitigation Our findings show that a flying-fox colony comprises a highly fluid subset of highly nomadic individuals from across the species range, and the size of a colony at any given time would thus reflect the net outcome of opposing influx and outflux of such mobile individuals. This contrasts with the conventional portrayal of a roost as being inhabited by flying-foxes with a “strong fidelity” to a roost, and our findings require a reappraisal of the concept of a “local population” in a “single locality” that is used, for example, in the assessment of impacts of management actions on the species [ 73 ]. Flying-fox roost management actions range from roost vegetation modification to colony dispersal [ 74 , 75 ], but these actions often inadvertently exacerbate the human-wildlife conflict they aim to resolve [ 46 ]. 
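The daily colony turnover rates reported in the Results (Fig. 3) can be translated into expected roost residence times. The sketch below treats each day as an independent Bernoulli trial with the species' reported daily switching probability, so stay length is geometric with mean 1/p; the independence assumption is added here for illustration and is not a model from the study:

```python
# Expected roost residence time implied by the reported daily
# probabilities of switching roosts (Fig. 3). Assumes each day is an
# independent Bernoulli trial, giving a geometric stay length with
# mean 1/p -- a simplification added for illustration only.
daily_switch_prob = {
    "P. alecto": 0.119,
    "P. poliocephalus": 0.175,
    "P. scapulatus": 0.364,
}

for species, p in daily_switch_prob.items():
    print(f"{species}: expected stay ~{1 / p:.1f} days")
```

Under this simplification, even the most site-faithful species, P. alecto, would be expected to remain at a given roost for only about eight days.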
“Dispersal” actions implicitly assume that the individuals that are present at the time of active management are those that are “dispersed.” However, our results indicate that locally, individuals in fact turn over at extremely high rates (Fig. 3 ). This explains why repeat “maintenance dispersals” are required in the majority of actions [ 76 ], because naïve individuals continue to arrive at a site without knowledge of previous dispersal activities. Further, flying-foxes tend to arrive at a roost around dawn and are extremely reluctant to cover great distances during daylight hours, possibly due to increased risk of predation [ 77 ] and thermophysiological limitations [ 78 ]; therefore, they have no choice but to attempt to roost in the nearest available site, where they provide a “seed” around which a new “splinter colony” can form. This can explain the local proliferation of human-wildlife conflict that is commonly observed following dispersal actions [ 76 ]. It is thus essential that the extreme mobility of flying-foxes and the highly dynamic nature of their colonies now become integral components of the local management of the species. In Australia, flying-fox management actions are currently implemented locally at the level of councils without adequate coordination at both state and federal levels. Yet, the extreme mobility of tracked flying-foxes among the large number of councils (101; Fig. 2 ) clearly indicates that local management actions are likely to affect, and complicate, the management of flying-foxes by councils elsewhere. Furthermore, councils often enact dispersals in response to top-down pressure from members from state and federal electorates. Yet, the extreme mobility of tracked flying-foxes among the large number of state (131) and federal electorates (74) (Fig. 2 ) clearly indicates that such pressure can have negative implications for flying-fox management across other jurisdictions and so is not without political cost. 
Moreover, current lack of coordinated state and federal oversight means that management actions can be implemented locally by councils without reference to the impacts on the species from management actions elsewhere. Yet, in the case of vulnerable P . poliocephalus , tracked individuals on average visited 8.1 council areas, and 8.2 state and 6.7 federal electorates per year, clearly demonstrating the high potential for cumulative impacts from local management actions on the conservation of this species. Conclusions Our work shows that a flying-fox roost forms a “node” in a network of “staging posts” through which highly nomadic individuals travel far and wide across their species range, which has profound implications for the ecosystem services and zoonotic dynamics of flying-fox populations. In addition, the extreme inter-roost mobility reported here also means that impacts from local management actions can readily reverberate across jurisdictions; hence, local management actions should be formally assessed in light of the impacts of actions undertaken elsewhere, urgently necessitating more holistic coordination at the national scale. As such, our study provides a warning of how management at inappropriate scales can potentially have unforeseen widespread consequences for population processes and ecological functioning in mobile species. Methods Capture and transmitter deployment We deployed transmitters at eight roosts in the Australian states of Queensland (QLD) and New South Wales (NSW) between January 2012 and May 2015, as a component of three discrete studies. In QLD, we caught and released in situ flying-foxes at Boonah (− 28.0° S,152.7° E; n = 56 P . alecto ), Charters Towers (− 20.1° S, 146.3° E; n = 4 P . alecto ), Duaringa (− 23.7° S, 149.7° E; n = 4 P . scapulatus ), Gayndah (− 25.6° S, 151.7° E; n = 4 P . alecto , 8 P . scapulatus ), Loders Creek (− 28.0° S, 153.4° E; n = 4 P . alecto ), Parkinson (− 27.6° S, 153.0° E; n = 10 P . 
poliocephalus ) and Toowoomba (− 27.6° S, 151.9° E; n = 10 P . alecto ), and in NSW at the Royal Botanic Garden, Sydney (− 33.9° S, 151.2° E; n = 2 P . alecto , 100 P . poliocephalus ). We caught flying-foxes returning to roost pre-dawn using mist-nets (12–18 m wide and 2.4–4.8 m deep) hoisted between two 15–20 m masts situated adjacent to the target roost. We continuously attended nets and immediately lowered them when a bat became entangled. The bat was physically restrained and placed in an individual cotton bag [ 79 ]. The criteria for recruitment for transmitter deployment were health (no evident injury or illness) and body mass (> 550 g for P . alecto and P . poliocephalus ; > 350 g for P . scapulatus ). The accepted proportion of bodyweight of the device is 5% or less [ 80 ], and we aimed to minimize the proportion of bodyweight where possible. In NSW, deployment was limited to P . poliocephalus individuals ≥ 650 g. We sequentially anesthetized all captured bats using the inhalation agent isoflurane [ 81 ] and estimated age (juvenile or adult) from dentition [ 82 ] and the presence or absence of secondary sexual characteristics [ 43 , 83 , 84 ]. Bats meeting the criteria were fitted with collar-mounted transmitters immediately prior to recovery from anesthetic. All bats were recovered from anesthesia, offered fruit juice, and released at their capture location within 5 h of capture. Platform terminal transmitter specifications, application, and operation Microwave Telemetry 9.5 g ( n = 150) and GeoTrak 12 g ( n = 52) solar platform terminal transmitter (PTT) units were mounted on lightweight flexible collars. The QLD collar was a modified nylon webbing proprietary small dog collar whose overlapping ends were secured with an absorbable suture material, allowing the collar to drop off after an estimated 4–6 months. The NSW collar was neoprene–lined leather whose overlapping ends were secured by a ferrous rivet, providing extended deployment time. 
The combined transmitter/collar weight was < 20 g, translating to < 3.7% of the minimum recruited body mass for P . alecto and P . poliocephalus , and < 5.7% for P . scapulatus . The majority of PTTs had a duty cycle of 72 h off and 10 h on, providing multiple positional fixes every fourth day. Initial QLD deployments also trialed 48 h off, 10 h on, and 96 h off, 10 h on. The PTTs fitted to male P . poliocephalus in NSW had the longest duty cycle of 254 h off, 10 h on. A sparse duty cycle was chosen to maximize battery recharge and transmitter functionality based on the outcomes of previous studies [ 26 , 85 ]. During on periods, the PTTs transmitted locational data to orbiting NOAA satellites, which relayed the data via ARGOS. Data handling and analysis We analyzed all data in the R environment for statistical computing [ 86 ]. We managed data from deployed PTTs in a standardized format in Movebank ( ). Prior to analysis, we examined the datasets for inconsistencies; fixes with ARGOS code Z, along with implausible fixes falling outside eastern Australia (longitudes < 140 or latitudes > 0), were removed. We used daytime fixes (between 10 am and 4 pm) to assign animals to a “roost site” (as mainland Australian flying-foxes do not forage during the day). If high resolution (ARGOS location code 3) daytime fixes occurred within 3.5 km of a “known colony” [ 66 , 87 ], we assumed animals were roosting at that site. Where accurate daytime fixes were more than 3.5 km from a known roost location, we manually assigned animals to a new “roost site” located at the center of the cluster of fixes. If multiple tracked individuals roosted at the same location, this new roost site was confidently considered to be a previously unidentified “colony” of flying-foxes. Jurisdictions There are three levels of government in Australia: local, state, and federal, each with their own elected decision-making bodies and responsibilities [ 88 ], and each with different implications for flying-fox management (see Discussion). 
The local level of government is usually called the city council or shire council (local council) headed by a Mayor or Shire President. The state level of government is subdivided into “state electorates” with elected representatives known as “Members” of the Legislative Assembly; the federal level of government is subdivided into “federal electorates” with elected representatives known as “Members” of the House of Representatives. To examine the movements of tracked flying-foxes among local councils, and state and federal electorates, we used roost locations to extract jurisdictional boundary data from shapefiles representing local government areas (LGAs), and state and federal electorates, using the R package “sp” [ 89 ]. Shapefiles were downloaded from the Australian Bureau of Statistics website ( ). Movements between roost sites To test whether there were differences in roost site fidelity (i.e., the inverse of the probability of moving between roosts) between species, we constructed candidate generalized linear mixed effects models [ 90 ], including individual identity as a random effect. The global model had a binary response variable of 1 if an animal switched roosts between successive positional fixes and included the interaction between species and time between daytime fixes (in days) as explanatory variables. The variation in time between fixes was caused by differences in duty cycle, missed fixes, or a lack of positional fixes during daylight hours. The best fitting model was selected on the basis of AICc [ 91 ]. Distance moved between roost sites To test whether there were differences in the distance moved between roosts for the different species, we constructed candidate linear mixed effects models [ 90 ] with individual identity as a random factor.
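The binary response for the roost-fidelity models (1 if an animal switched roosts between successive daytime fixes) can be constructed as sketched below. The actual models were generalized linear mixed models fitted in R with individual identity as a random effect, so this illustrates only the data preparation; the function name is our own:

```python
def switch_records(roost_sequence):
    """Build model-response records for one individual from a time-ordered
    list of (day, roost_id) daytime assignments:
    switched = 1 if the roost changed between successive fixes,
    days = time elapsed between those fixes (varies with duty cycle
    and missed fixes)."""
    return [
        {"switched": int(r1 != r0), "days": d1 - d0}
        for (d0, r0), (d1, r1) in zip(roost_sequence, roost_sequence[1:])
    ]
```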
The global model had the natural log of the distance between fixes as the response variable and the interaction between species and the natural log of time (in days) between daytime fixes as explanatory variables. The best fitting model was selected on the basis of AICc and included a significant interaction between species and time between daytime fixes (days) (Additional file 11 : Table S3). We took the coefficient “ p ” from the best fitting model for each species separately and used this to estimate the constant “ a ” to model the distance moved between daytime fixes using a power function [ f ( x ) = ax p ]. This was necessary as when time between successive daytime fixes was longer, it was more likely that roost locations were missed and therefore that the observed straight-line distance between fixes was shorter than the actual straight-line distance moved between roost locations. We used this to model the expected average distance between roosts that individuals from each species would be likely to move in a single day. Directional movements To test whether animals coincided in the direction of their movement on a monthly basis, the bearing between each individual’s first and last monthly location was determined. These monthly bearings were plotted for each species separately. These data were used to examine whether they fell into one or more “preferred directions” using the Hermans-Rasson test [ 92 ]. The Bonferroni correction was used to account for the number of individual tests performed (i.e., by dividing the standard 0.05 significance level α by the number of tests performed for each species [ 93 ]). In months when a departure from uniformity was detected by the Hermans-Rasson test, a Rayleigh test [ 94 ] was also applied to examine whether the departure from uniformity consisted of a single peak, i.e., whether individuals of each species were significantly oriented (in the same direction) each month. 
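The power-function model above can be illustrated with a log-log least-squares fit: the slope recovers the exponent p and the intercept the constant a of f(x) = ax^p, so that f(1) = a gives the modeled average single-day distance. This is a deliberately simplified OLS stand-in for the paper's mixed-effects fit:

```python
import math

def fit_power(days, distances_km):
    """Fit f(x) = a * x**p by ordinary least squares on the log-log
    scale (a simplification of the linear mixed model described above)."""
    xs = [math.log(t) for t in days]
    ys = [math.log(d) for d in distances_km]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    p = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
        sum((x - xbar) ** 2 for x in xs)
    a = math.exp(ybar - p * xbar)
    return a, p

# f(1) = a: the expected average distance moved between roosts in one day.
```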
To test whether the three species performed seasonal north-south movements, a daily mean latitude (relative to capture location) was calculated, and a rolling average was calculated over a 30-day window for each species separately. Where animals were tracked for multiple years ( P . alecto and P . poliocephalus ), we calculated the mean monthly relative latitude of roosting locations and used the “ets” function of the R package forecast [ 95 ] to test whether seasonality was present in the dataset. Availability of data and materials The datasets analyzed during the current study are available in the Dryad database; [ 96 ].
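The 30-day smoothing of daily mean relative latitude can be sketched as a trailing rolling average (the handling of the window at the start of the series is our own choice; the seasonality test itself used the “ets” function of the R forecast package):

```python
def rolling_mean(series, window=30):
    """Trailing rolling average: each value is the mean of the current
    observation and up to window-1 preceding ones."""
    out = []
    for i in range(len(series)):
        chunk = series[max(0, i - window + 1): i + 1]
        out.append(sum(chunk) / len(chunk))
    return out
```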
New research led by scientists at Western Sydney University and published in BMC Biology shows that flying-foxes are always on the move among a vast network of roosts, creating key challenges for their management and conservation in Australia. Australia's flying-foxes are amongst the largest mammals in the world that are capable of powered flight. They are highly mobile, which is thought to make them key seed and pollen dispersers in Australia's fragmented forest ecosystems. However, their mobility also facilitates transmission of disease and often brings them into conflict with humans, and so they require a precarious balancing of conservation and management concerns. To gain a detailed understanding of flying-fox mobility at the landscape scale, the researchers analyzed the movements of 201 satellite-tracked individuals across eastern Australia, for up to five years. Three species were monitored including 109 gray-headed flying-foxes (Pteropus poliocephalus), 80 black flying-foxes (P. alecto) and 12 little red flying-foxes (P. scapulatus). The tracked flying-foxes used a total of 755 roost sites between them, of which more than half were previously unrecorded. One roost site, the Hervey Bay Botanic Gardens, was visited by tracked individuals from all three species. Individuals traveled thousands of kilometers among many of those roosts each year, with one gray-headed flying-fox covering at least 12,337 km between 123 roosts in 37 local government areas over 1,629 tracking days. Associate Professor Justin Welbergen, the lead author, said: "Our findings indicate that flying-fox roosts are better viewed as parts of a network of 'staging posts' that provide temporary shelters to extremely mobile individuals that wander nomadically throughout much of eastern Australia. This contrasts with the conventional portrayal of a roost as being home to a resident population made up of the same individuals. 
It has long been recognized that flying-foxes have the capacity to travel long distances; however, the vast scale of the movements among roosts shown by our study indicates that nomadism is in fact a fundamental aspect of flying-fox biology. This necessitates a re-evaluation of how these fascinating animals are managed and conserved." While the extreme mobility has profound implications for the roles of flying-foxes as seed and pollen dispersers, it also means that negative impacts from localized management actions can easily reverberate throughout the species' ranges. Coordinated management and conservation efforts should therefore be implemented across Australia to protect these ecologically important species. The research is published in BMC Biology.
10.1186/s12915-020-00829-w
Medicine
Research finds consistent link between the seaside and better health
Sandra J. Geiger et al, Coastal proximity and visits are associated with better health but may not buffer health inequalities, Communications Earth & Environment (2023). DOI: 10.1038/s43247-023-00818-1 Journal information: Communications Earth & Environment
https://dx.doi.org/10.1038/s43247-023-00818-1
https://medicalxpress.com/news/2023-05-link-seaside-health.html
Abstract Societies value the marine environment for its health-promoting potential. In this preregistered study, we used cross-sectional, secondary data from the Seas, Oceans, and Public Health In Europe (SOPHIE) and Australia (SOPHIA) surveys to investigate: (a) relationships of self-reported home coastal proximity and coastal visits with self-reported general health; (b) the potential of both to buffer income-related health inequalities; and (c) the generalizability of these propositions across 15 countries ( n = 11,916–14,702). We find broad cross-country generalizability that living nearer to the coast and visiting it more often are associated with better self-reported general health. These results suggest that coastal access may be a viable and generalized route to promote public health across Europe and Australia. However, the relationships are not strongest among individuals with low household incomes, thereby challenging widespread assumptions of equigenesis that access to coastal environments can buffer income-related health inequalities. Introduction Societies value the marine environment for various reasons, including its health-promoting potential. Five single-country studies from the United Kingdom 1 , 2 , 3 , Belgium 4 , and Spain 5 (for details, see Supplementary Notes 1 ) found that individuals who lived nearer to the coast reported better health than those living further away. Supporting this cross-sectional research, a longitudinal study in England 6 showed that individuals reported better general health in years when they lived within 5 km of the coast compared to years when they lived further away. 
The relationship between living nearer to the coast and better health may result from reduced exposure to some environmental hazards (e.g., air pollution, but the evidence is mixed 4 ), more physical activity (e.g., walking 1 , 7 ), and/or opportunities for indirect contact (e.g., views from home) which are associated with lower psychological distress 8 . Moreover, living nearer to the coast is associated with more frequent coastal visits 9 . Coastal visit frequency may benefit health because it promotes longer bouts of and/or higher intensity forms of physical activity 10 , social interactions, and psychological restoration from stress 11 , which can help reduce allostatic load 6 . Given that coastal visit frequency drops exponentially as a function of home distance, these benefits are likely to show diminishing marginal returns with increasing home distance 4 . Despite these studies, a systematic review in 2017 7 concluded that the evidence for the relationship between exposure (especially visits) to blue spaces (including coastal environments) and a range of different health metrics was insufficient, in part because it was based on relatively few single-country studies with inconsistent health outcomes. Furthermore, previous research into marine settings and health has mainly focused on the direct relationship between nature contact and health, while its role as a potential modifier is under-researched 12 . One modifier of interest relates to the equigenesis hypothesis 13 , which posits that contact with nature more generally may mitigate or buffer adverse relationships between health risk factors (e.g., low area or household income) and health outcomes 12 , 14 . Buffering and mitigating in this article do not imply causality but mean that a relationship (e.g., between income and health) is weaker under certain circumstances (e.g., with more frequent nature contact). 
Consistent with the equigenesis hypothesis, a recent systematic review 15 and several studies 13 found that green spaces seemed to buffer the relationship of socioeconomic status and income with health outcomes. Other studies showed no effect 15 or a reverse effect, such that green spaces were associated with poorer health in low-income suburban areas 16 . Regarding coastal contact, two studies from England suggest that living nearer to the coast mitigates the relationship between income deprivation and both self-reported general 3 and mental 17 health. In this study, we investigated: (a) relationships of both self-reported home coastal proximity and coastal visits with self-reported general health; (b) the potential of both to buffer income-related health inequalities; and (c) the generalizability of these propositions across 15 countries, using a Bayesian approach to quantify the relative support for or against any relationships. We analyzed cross-sectional, secondary data from the Seas, Oceans, and Public Health In Europe (SOPHIE) and Australia (SOPHIA) surveys which collected samples representative in terms of age, sex, and region from 14 European countries and Australia, respectively. We expected that both living nearer to the coast (Hypothesis 1) and visiting it more often (Hypothesis 2) would predict better self-reported general health. We also expected that living nearer to the coast (Hypothesis 3) and visiting it more often (Hypothesis 4) would mitigate any adverse relationship between household income and health, such that this relationship would be weaker when individuals live nearer to the coast or visit it more often. We additionally examined generalization across countries in terms of both the proposed relationships (Research Question 1) and the potential modifier (Research Question 2). The present work advances research in four key ways. 
First, this study goes beyond the relationship of self-reported general health with home coastal proximity to explore its relationship with direct coastal contact (coastal visits). Second, this study investigates the potential modifying role of coastal proximity and visits on the relationship between household income and health. Investigating coastal contact as a potential modifier is important because it may provide “leverage points for intervention” 12 to reduce income-related health inequalities, in addition to policies that focus on reducing income inequality directly. Third, in contrast to previous single-country studies, this study includes samples from 14 European countries and Australia, representative in terms of age, sex, and region. This allows us to test the generalizability of any relationships found across countries. Lastly, we use a Bayesian analytical approach which allows for quantifying the relative support for or against any relationships, provides rich information about the strength of evidence, and is valid for every sample size, including large samples 18 . This approach paves the way for future studies to use the current findings (posterior distributions) as prior knowledge (prior distributions) for an informed and accumulated estimation of the effects 19 . This way, the present study provides a robust, high-quality test of the relationships between two types of coastal contact, proximity and visits, and health. Results Descriptive statistics for self-reported general health depending on (a) self-reported home coastal proximity and (b) self-reported coastal visits are listed in Supplementary Notes 2 . 
Relationship of home coastal proximity (Hypothesis 1) and coastal visits (Hypothesis 2) with health We investigated whether living nearer to the coast (Hypothesis 1; n = 13,620; 14 countries excluding Czechia) and visiting it more often (Hypothesis 2; n = 14,702; 15 countries) predict better self-reported general health and whether these relationships generalize across countries (Research Question 1). The model with monotonic proximity/visits and a random slope (Model 4) provides the best predictive ability, although differences in LOOIC are small (Supplementary Notes 3 ). Controlling for age, sex, and household income, we find very strong evidence for Hypothesis 1 that living nearer to the coast predicts better self-reported health within countries (BF +- = 82.33, b = 0.02, SE = 0.01, 90% CrI [0.01, 0.03]). A Bayes factor of 82.33 implies that the data are 82.33 times more likely under the hypothesis that the proximity-health relationship is positive (H + ) rather than negative (H − ). Due to the coding of home coastal proximity (lower values =living nearer to the coast) and health (lower values = better health), a positive association means that living nearer to the coast is associated with better health. A slope of 0.02 means that, on average, self-reported general health increases by 0.02 SD with every unit of living nearer to the coast within a country. Importantly, the largest improvement in health when living nearer to the coast, namely 37.9% (90% CrI [7.7%, 64.7%]) of the total improvement, happens between <1 km and 1–2 km. For the other adjacent coastal distance categories, the improvement in health ranges from 6.7% (90% CrI [0.3%, 20.4%]) for 2–5 km and 5–10 km to 18.2% (90% CrI [1.6%, 42.2%]) for 50–100 km to >100 km. 
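The step percentages above reflect a monotonic-effects parameterization of ordinal predictors (as implemented, e.g., in the brms R package): a total effect b·D is split over the D adjacent category steps according to a simplex of shares ζ, with b the average per-step effect. A minimal sketch with an illustrative simplex (the ζ values below are hypothetical, not the paper's estimates):

```python
def monotonic_effect(b, zeta, category):
    """Cumulative effect of a monotonic ordinal predictor after `category`
    steps from the reference category: b * D * sum(zeta[:category]),
    where b is the average per-step effect and zeta a simplex
    (non-negative shares summing to 1) with D = len(zeta) steps."""
    assert abs(sum(zeta) - 1.0) < 1e-9, "zeta must sum to 1"
    assert all(z >= 0 for z in zeta), "zeta shares must be non-negative"
    return b * len(zeta) * sum(zeta[:category])
```

With b = 0.02 and a simplex whose first share is 0.379, 37.9% of the full effect accrues over the first distance step, mirroring the decomposition reported above.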
Given this model and the prior distributions as well as keeping all covariates constant, individuals who live within 1 km of the coast are 1.22 and 1.06 times more likely to report very good (10.4%) or good (45.7%) health compared to those who live more than 100 km from the coast (very good: 8.5%, good: 43.2%). Reporting fair, bad, or very bad health is 1.07, 1.19, and 1.31 times more common among individuals who live more than 100 km from the coast (fair: 36.7%, bad: 9.2%, very bad: 2.4%) compared to those who live within 1 km of the coast (fair: 34.3%, bad: 7.7%, very bad: 1.8%; Fig. 1 a). Fig. 1: Marginal effects of home coastal proximity and coastal visits on health. Note . a Marginal effects of home coastal proximity on health. b Marginal effects of coastal visits on health. Probability ( y -axis) refers to the probability of choosing each health category. These probabilities sum up to 1 per category of a home coastal proximity and b visits. Points in a , b represent the posterior mean of the probability of self-reported general health in five categories from 1 ‘Very good’ to 5 ‘Very bad’ (indicated by the five colors), depending on a home coastal proximity and b coastal visits. The error bars indicate the 90% credible interval, including the random effects variance across countries. The evidence for this hypothesis varies across countries, from BF +- = 2.24 in Italy to BF +- = 341.86 in Norway, but always supports a positive relationship. Overall, half of the countries (Australia, Belgium, Bulgaria, Greece, Ireland, Norway, and Poland) show at least strong evidence of an association, while the other half (France, Germany, Italy, the Netherlands, Portugal, Spain, and the United Kingdom) show insufficient to moderate evidence. Nevertheless, the magnitude of this relationship is similar across countries, as indicated by the overlapping credible intervals (Fig. 2 a). Overall, the proximity-health relationship generalizes across countries. Fig. 
2: Posterior distributions by country for the average monotonic effect of home coastal proximity and coastal visits on health. Note . a Posterior distributions by country for the average monotonic effect of home coastal proximity on health. b Posterior distributions by country for the average monotonic effect of coastal visits on health. The blue area represents the posterior distribution, and the black dot the average monotonic effect of a home coastal proximity and b coastal visits on self-reported health per country. The error bars indicate the 90% credible interval. For example, in a , a value of 0.04 for Greece means that, on average, self-reported general health improves by 0.04 SD with every one-unit decrease in home coastal distance in Greece. In line with Hypothesis 2, more coastal visits predict better self-reported health within countries (BF +- → ∞, b = 0.11, SE = 0.02, 90% CrI [0.08, 0.13]) when controlling for age, sex, and income. The slope of 0.11 indicates that, on average, self-reported general health increases by 0.11 SD with every one-unit increase in coastal visits. Of note, 13.3% (90% CrI [4.4%, 22.5%]) of the total increase in health when visiting the coast more often happens between the first two categories (‘once a week or more often’ and ‘once every 2 or 3 weeks’), whereas 53.4% (90% CrI [44.5%, 61.8%]) happens between the last two categories (‘once or twice a year’ and ‘never’). Based on this model and the prior distributions as well as keeping all covariates constant, individuals who visit the coast at least once a week are 2.60 and 1.36 times more likely to report very good (12.4%) or good (47.8%) health than those who never visit it (very good: 4.8%, good: 35.3%). 
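Because self-reported health is an ordinal outcome, the category probabilities and likelihood ratios reported above arise from a cumulative model. A minimal sketch of how a shift in the linear predictor redistributes probability mass across the five health categories (the thresholds and predictor values below are hypothetical, not the fitted ones):

```python
import math

def category_probs(eta, thresholds):
    """Category probabilities under a cumulative (ordered) logit model:
    P(Y <= k) = logistic(tau_k - eta). A larger eta shifts probability
    mass toward higher categories."""
    logistic = lambda x: 1.0 / (1.0 + math.exp(-x))
    cum = [logistic(t - eta) for t in thresholds] + [1.0]
    return [c - p for c, p in zip(cum, [0.0] + cum[:-1])]
```

Comparing the probability vectors for two values of eta (e.g., frequent vs. never visitors) yields exactly the kind of "x times more likely" ratios quoted above.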
Reporting fair, bad, or very bad health is 1.30, 2.13, and 3.29 times more common among individuals who never visit the coast (fair: 41.7%, bad: 13.7%, very bad: 4.5%) compared to those who visit it at least once a week (fair: 32.0%, bad: 6.4%, very bad: 1.4%; Fig. 1 b). In all countries except Italy, there is at least strong evidence for a positive relationship between coastal visits and self-reported health. However, the magnitude of this relationship differs between countries, as indicated by non-overlapping credible intervals (Fig. 2 b). The positive relationship between visits and health is strongest in Ireland (BF +- → ∞, b = 0.18, SE = 0.03, 90% CrI [0.13, 0.24]) and more positive than in Belgium, France, Italy, the Netherlands, and Spain. The relationship is second strongest in Greece (BF +- = 11,999.00, b = 0.16, SE = 0.04, 90% CrI [0.09, 0.23]), which is different from the one in Italy. Overall, the visits-health relationship seems to generalize across countries in terms of presence but not magnitude. Home coastal proximity (Hypothesis 3) and coastal visits (Hypothesis 4) as a potential ‘buffer’ on the relationship between household income and health We investigated whether home coastal proximity (Hypothesis 3; n = 11,916; 14 countries excluding Czechia) and coastal visits (Hypothesis 4; n = 12,790; 15 countries) moderate the relationship between household income and self-reported health, such that this relationship is weaker when individuals live nearer to the coast or visit it more often. We also tested whether these propositions generalize across countries (Research Question 2). The model with a random slope for both proximity/visits and income (Model 5) has the best predictive abilities (Supplementary Notes 3 ). Controlling for age and sex, we find extremely strong support that individuals with a higher rather than lower household income report better general health (BF +- → ∞, b = −0.17, SE = 0.03, 90% CrI [−0.21, −0.13]). 
Contrary to Hypothesis 3, there is very strong evidence that the income-health relationship is stronger when living nearer to the coast (BF +- = 39.68, b = 0.01, SE = 0.00, 90% CrI [0.00, 0.01]), though this effect is very small and most likely negligible (Fig. 3 a). Notably, the effect is also compatible with a null effect (i.e., the income-health relationship is similar regardless of how far individuals live from the coast), as indicated by the credible interval that includes zero. This finding generalizes across countries, as the model with a fixed interaction (i.e., identical effects across all countries) has the best predictive abilities. Fig. 3: Marginal effects of household income and home coastal proximity as well as household income and coastal visits on health. Note . a Marginal effects of household income and home coastal proximity on health. b Marginal effects of household income and coastal visits on health. Points represent the posterior mean of self-reported health from 1 ‘Very good’ to 5 ‘Very bad’ (treated as continuous for better understanding) depending on a both household income in quintiles (indicated by the five colors) and home coastal proximity as well as b both household income in quintiles (indicated by the five colors) and coastal visits. The error bars indicate the 90% credible interval, including the random effects variance across countries. The non-simplified graphs with health treated as an ordinal outcome are in Supplementary Notes 9 . Controlling for age and sex, we find insufficient evidence for Hypothesis 4 that the income-health relationship is weaker when visiting the coast more often (BF +- = 1.08, b = 0.00, SE = 0.00, 90% CrI [−0.01, 0.01]). Testing whether the effect is zero (vs. non-zero; not preregistered) reveals extremely strong evidence that lower household income is associated with poorer health regardless of how often individuals visit the coast (BF 01 = 2,348.96; Fig. 3 b). 
This finding once again generalizes across countries, as the model with a fixed interaction has the best predictive abilities. Sensitivity analyses We conducted several sensitivity analyses to check the robustness of all findings. The sensitivity analyses (a) with a narrower prior, α, b ~ Normal(0, 5), (b) with data from Czechia, (c) without speeders (i.e., respondents who completed the survey faster than 5 min; all Australian respondents were kept as completion time was not recorded in the Australian survey), and (d) with additional covariates (i.e., education, work status, and political orientation) yielded similar results to the main analyses, suggesting that the results are mostly robust to the choice of prior, inclusion of data from Czechia, exclusion of speeders, and inclusion of covariates (Supplementary Notes 4 ). Discussion We investigated: (a) relationships of self-reported home coastal proximity and coastal visits with self-reported general health; (b) their potential to buffer income-related health inequalities; and (c) the generalizability of these propositions across 15 countries, using a Bayesian approach. Living nearer to the coast predicts better self-reported health within countries across Europe and Australia, with a similar yet rather small magnitude (Table 1 ). Importantly, 37.9% of the total improvement in health when living nearer to the coast happens between <1 km and 1–2 km, suggesting that most health benefits arise from living very close to the coast. Smaller changes (6.7–18.2%) occur between the other adjacent distance categories. These findings are consistent with Wheeler et al. 3 that, for example, self-reported health is better for those living 20–50 km compared to more than 50 km from the coast. Overall, the coast provides people with a wide range of health-promoting opportunities (e.g., physical activity 1 , 7 ). Living very close to the coast may make people more likely to take advantage of these opportunities 6 . 
Table 1 Summary of the results. Visiting the coast more often also predicts better self-reported health within countries (Table 1 ). Notably, 53.4% of this total effect happens between visiting ‘once or twice a year’ and visiting ‘never’. This does not necessarily mean that the largest health benefits arise when visiting the coast once a year compared to never visiting it. Rather, we recognize the possibility of reverse causality here, such that individuals who never visit the coast may differ from other groups (e.g., by having a chronic illness), which limits their visit opportunities/mobility. The visits-health relationship varies across countries in terms of magnitude, with the strongest relationship in Ireland and the weakest in Italy, potentially because high coastal tourism 20 and longer travel distances in Italy (percentage of respondents who live >20 km from the coast in Ireland: 34.4% vs. Italy: 57.3%) may limit the value of the coast for locals. Moreover, the coast may be less accessible in Italy compared to other countries due to high coastal privatization 21 . Supporting previous single-country studies 1 , 2 , 3 , 4 , 5 , 6 , 10 , 11 , 22 , 23 , 24 , the findings strengthen the evidence that living nearer to the coast and visiting it more often are associated with better self-reported health. Access to coastal environments may thus represent a viable and unified route to public health promotion across Europe and Australia. Besides these direct effects, this study provides very and extremely strong evidence against the ‘buffering’ role of coastal contact on the income-health relationship. Lower household income is more strongly associated with poorer health when living nearer to the coast, thereby potentially reinforcing existing income-related health inequalities. 
However, we recognize that this effect is very small, and the results are also compatible with a null effect that the income-health relationship is similar regardless of how far individuals live from the coast. This finding is in direct contrast to previous research that found a ‘buffering’ effect of coastal proximity for both general health, using the same outcome variable as here 3 , and mental health 17 . However, it aligns with a previous finding that more green spaces are associated with poorer health in low-income areas, potentially because green spaces in these areas are of poorer quality 16 . Coastal quality may also explain the current findings. When individuals with lower household incomes live near the coast, they may be more likely to live in areas of poor aesthetic and environmental quality because housing prices in such areas tend to be lower (e.g., water quality and housing prices 25 ). The current study also finds that lower household income is associated with poorer health regardless of coastal visits. In other words, visit frequency is positively associated with better health, irrespective of income. Although the same number of visits does not reduce health inequalities, increasing the frequency of coastal visits may still provide “leverage points for intervention” 12 to improve health for people of all incomes; and when targeting groups with lower incomes, such interventions can ultimately reduce income-related health inequalities. Although this study contributes to understanding the value of coastal contact for health, we recognize several limitations. First, as the data are cross-sectional, we cannot rule out that healthier individuals are more likely to live near the coast and visit it more often. Nevertheless, the data are consistent with longitudinal and intervention studies that suggest that exposure to the coast is causally associated with improvements ranging from momentary moods to longer-term health effects 26 . 
Hence, the current results cannot be dismissed on selective migration grounds alone. Second, the data are limited to middle- and high-income countries. Future research should aim to investigate whether the coastal contact-health relationships also hold for low-income countries. As individuals in low-income countries disproportionately experience threats to their health from marine environments (e.g., due to marine pollution, poor water quality, parasites, and risk of drowning 27 ), the coast may be seen as a health risk rather than a health-protective factor. Third, the surveys were internet-based, which appear to have, for example, under-sampled individuals with low incomes 28 . The findings regarding the equigenesis hypothesis should, therefore, be treated with caution, and future research should attempt to collect samples that are representative of the population in terms of household income. Relatedly, the samples were representative in terms of age, sex, and region at the national but not sub-national level. Future studies may, therefore, aim to collect larger samples that are representative at the sub-national level and assess respondents’ ethnic background, to better understand whether the current findings hold for individuals living in different regions and environments (e.g., urban vs. rural areas) as well as for individuals with different ethnic backgrounds. Fourth, the surveys were limited to rather general self-reported measures. For policy recommendations, it would be important to test the current findings’ robustness using objective health measures (e.g., hospitalizations, health service utilization). Future studies may also include (self-reported) travel time to the coast as an additional indicator of perceived coastal accessibility that may be an ecologically more valid indicator than distance per se. 
In addition, the self-reported visits measure may be subject to memory biases; for example, reported visit frequency in the past 12 months may be higher at the beginning of fall when thinking back to summer compared to spring when thinking back to winter. Although the number of reported visits (in the past four weeks) remains relatively frequent even in fall and winter 29 , future studies may aim to collect data several times a year to smooth out potential seasonality biases. Lastly, the surveys did not monitor coastal quality. Previous research shows that higher objective and perceived quality of green spaces are associated with better self-reported general health 30 . As this may also apply to coastal environments, we recommend that future research considers measures of coastal quality at national and regional levels. Conclusion Living nearer to the coast and visiting it more often are associated with better self-reported health within countries across Europe and Australia. Although direct coastal access may represent a viable route to public health promotion, the current data suggest that the relationships of coastal living and visits with health are not strongest among individuals with low household incomes. These findings challenge widespread assumptions that access to coastal environments can reduce income-related health inequalities. For policymakers, these results suggest that public access to coastal environments may provide clear benefits of coastal contact for public health. However, policymakers should not necessarily expect coastal access to reduce existing inequalities unless they specifically target low-income groups. Promoting and facilitating coastal contact with healthy marine environments in a fair and equitable way should be a guiding principle for policymaking as countries develop their maritime spatial plans, consider future housing needs, and develop public transportation links. 
Methods We used cross-sectional, secondary data from the Seas, Oceans, and Public Health In Europe (SOPHIE) 30 and Australia (SOPHIA) 31 surveys. The hypotheses and analyses of this work were preregistered on the Open Science Framework (OSF). Deviations from the preregistration are reported in Supplementary Notes 7 . This research was performed in accordance with the Declaration of Helsinki. Specifically, ethical approval for the original data collection was obtained from the Ethics Committee at the University of Exeter, Medical School (reference number: Nov18/B/171), and all participants provided informed consent at the beginning of the survey. For the current analysis, we only accessed deidentified participant data. The surveys were primarily focused on public beliefs about how marine-related issues and activities affect human health and well-being. They were translated and administered via online panels coordinated by an international polling company in 14 European countries (Belgium, Bulgaria, Czechia, France, Germany, Greece, Ireland, Italy, the Netherlands, Norway, Poland, Portugal, Spain, and the United Kingdom) from March to April 2019 and in Australia in September 2019 (spring in both settings). Samples from each country were stratified by age, sex, and region. The median survey completion time was 18.2 min. Participants The overall sample includes 15,179 respondents ( n ≈ 1000 per country; Supplementary Notes 2 ) between 18 and 99 years (M = 46.20, SD = 15.81). In all, 51.3% are female, and 48.3% hold a university degree. When using 2019 country-based (rather than sample-based) income quintiles, individuals in lower income quintiles are underrepresented (lowest quintile: 14.2%; highest quintile: 21.9%; for details, see Supplementary Notes 2 ).
Measures The analyses focus on selected self-reported measures, including general health as the outcome, home coastal proximity, coastal visits, and household income as the main predictors and effect modifier, as well as basic demographic information (age and sex) as potential confounders, alongside country of residence (for all measures, see Supplementary Notes 5 ). Self-reported general health (outcome) Self-reported general health was measured with the first item from the short-form health survey (SF1): "How is your health in general?" (1 'very good' to 5 'very bad' ). The SF1 has been widely used in research on nature contact 1 , 32 as well as in the British and Irish censuses and is associated with objective health outcomes (e.g., future health service utilization 33 ). Self-reported coastal contact (predictor and effect modifier) Self-reported home coastal proximity was assessed with one item: "Approximately how far do you live from the coast in miles/km?" (1 'up to 1 km/0.5 miles' to 8 'more than 100 km/62 miles' ). Although subjective, this measure has an advantage over the Euclidean straight-line distances from home (or neighborhood centroids) used in previous studies, in that it allows respondents to factor in personally relevant network traveling distances. Self-reported coastal visits were assessed as follows: "Thinking now about the last 12 months in particular, which of these statements best describes how often, if ever, you visit the coast or the sea?" (1 'once a week or more often' to 6 'never' ). Household income (predictor) Household income was assessed in deciles adapted to each country for 2019 by the polling company (Supplementary Notes 5 ). For the analysis, household income was collapsed into country-level relative income (rather than sample-level) quintiles and a missing category. This approach reduced model complexity, maintained larger samples, and ensured comparability with previous research 4 , 17 .
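The decile-to-quintile recoding described above can be sketched in a few lines (a minimal Python illustration, not the authors' R code; the 1-10 decile coding and the use of None for 'don't know'/'prefer not to answer' responses are assumptions):

```python
def income_quintile(decile):
    """Map a country-specific income decile (1-10) to a quintile label,
    routing non-responses into an explicit 'missing' category."""
    if decile is None:
        return "missing"
    if not 1 <= decile <= 10:
        raise ValueError("decile must be 1-10 or None")
    # Deciles (1, 2) -> Q1, (3, 4) -> Q2, ..., (9, 10) -> Q5
    return f"Q{(decile + 1) // 2}"
```

Keeping the missing category as its own level, rather than dropping those respondents, is what lets the models retain the larger analytical samples mentioned above.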
All four main variables included additional 'don't know' and 'prefer not to answer' options. Potential confounders Age (continuous), sex (male vs. female), and household income (in models where it was not an effect modifier) may be confounders. We controlled for these potential confounders if they were associated (Bayes factor of the alternative against the null hypothesis; BF10 ≥ 3) with any of the predictors or the outcome in the respective model. Data analysis Data exclusions Respondents were excluded listwise if they had missing values on any of the key variables (but not on the confounders) in each model. The variables age and sex had no missing values, as these data were collected directly by the polling company. In addition, respondents from Czechia were excluded from the models with coastal proximity (Hypotheses 1 and 3). This resulted in analytical samples of 11,916 to 14,702 (for flow diagrams, see Supplementary Notes 6 ). The sampling weights, which ensure that the data are representative of the population (in terms of age, sex, and region), were rescaled to sum to the new sample size. Bayesian multilevel cumulative probit regressions We used R (Version 4.1.0) 34 , brms (Version 2.16.3) 35 , and RStan (Version 2.21.3) 36 to fit Bayesian multilevel cumulative probit regressions, with respondents at level 1 and countries at level 2. Unlike frequentist approaches, Bayesian analyses allow for conclusions about the relative support for or against any relationships, provide rich information about the strength of evidence, and are valid for large samples 18 . Home coastal proximity and coastal visits were modeled as categorical (reference category: <1 km and 'once a week or more often') and monotonic (i.e., the change in health between consecutive categories of proximity/visits is consistently non-increasing or non-decreasing 37 ) to account for the unequal distance between categories.
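To make the model family concrete, the following sketch shows how a cumulative probit model with a monotonic ordinal predictor (the construction brms uses for monotonic terms) turns a predictor category into probabilities over the five health categories. This is an illustrative Python re-implementation under simplifying assumptions (fixed cutpoints, no random effects), not the authors' code:

```python
import math

def phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def health_category_probs(level, b, simplex, cutpoints):
    """Probabilities over the ordered health categories for a respondent
    in predictor category `level` (0 = reference, e.g. living <1 km from
    the coast). The monotonic effect accumulates simplex-weighted steps,
    so moving through consecutive categories always shifts health in the
    same direction, while step sizes may differ."""
    eta = b * sum(simplex[:level])  # cumulative effect up to this category
    cdf = [phi(c - eta) for c in cutpoints] + [1.0]
    return [cdf[0]] + [cdf[k] - cdf[k - 1] for k in range(1, len(cdf))]
```

With health coded 1 'very good' to 5 'very bad', a positive b shifts probability mass toward worse self-reported health as the predictor category increases; the simplex weights are the ξ parameters given a uniform Dirichlet prior in the paper.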
To test Hypotheses 1 and 2, we deviated from the preregistration (for rationale, see Supplementary Notes 7 ) due to multi-collinearity between proximity and visits (Kendall's τb = 0.60) and fitted four models per hypothesis, each with a random intercept: Model 1 with proximity/visits as a categorical predictor, Model 2 with additional random slopes for categorical proximity/visits (i.e., the proximity/visits-health relationship is allowed to vary across countries), Model 3 with proximity/visits as a monotonic predictor, and Model 4 with additional random slopes for monotonic proximity/visits. To test Hypotheses 3 and 4, we used the best-performing model and included the fixed interaction between household income and proximity/visits (Models 1 and 4), then the random slope for income (Models 2 and 5), and last the random interaction (Models 3 and 6). Conclusions regarding the hypotheses were based on the model with the best predictive abilities (i.e., the lowest leave-one-out cross-validation information criterion; LOOIC), the Bayes factor (BF; Supplementary Notes 8 ), and the posterior distribution. The BF of the alternative against the null hypothesis (BF10) and the BF of the null against the alternative hypothesis (BF01) indicate that two hypotheses were tested (two-sided) against each other: the alternative hypothesis that the relationship differs from zero and the null hypothesis that the relationship is effectively zero. For example, BF10 = 10 would indicate that the data are 10 times more likely under the alternative compared to the null hypothesis; BF01 = 10 would indicate the opposite. In contrast, BF+- and BF-+ indicate that the hypotheses were tested one-sided: the relationship is positive (H+) versus the relationship is negative (H−). BF+- = 10 would, therefore, mean that the data are 10 times more likely under the hypothesis that the relationship is positive rather than negative, whereas BF-+ = 10 would, once again, indicate the opposite.
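The reciprocal relationships among these Bayes factors can be verified with a line of arithmetic (a Python sketch with hypothetical marginal likelihoods; the numbers are not values from the study):

```python
def bayes_factor(marginal_lik_h1, marginal_lik_h0):
    """BF10: how many times more likely the data are under H1 than H0.
    BF01 is its reciprocal; the same relationship links BF+- and BF-+."""
    return marginal_lik_h1 / marginal_lik_h0

bf10 = bayes_factor(0.050, 0.005)  # hypothetical marginal likelihoods
bf01 = 1.0 / bf10                  # evidence for the null over the alternative
```

So a single model comparison yields both directions of evidence: BF10 = 10 and BF01 = 10 describe mirror-image situations, which is why the text reports whichever direction the data support.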
Conclusions regarding generalizability across countries were based on the overlap of the credible intervals (CrIs). Priors were weakly informative, including α, b ~ Normal(0, 10) for the intercepts α and slope coefficients b , σ_country ~ HalfCauchy(0, 10) for the variation around the intercept per country, and a uniform Dirichlet distribution on the simplex parameters ξ (difference between consecutive categories; only for monotonic models). Models were fitted with four independent Markov chain Monte Carlo chains and 4000 iterations, of which 1000 per chain served as a warm-up. The chains converged and were well-mixed, effective sample sizes indicated stable estimates, no divergent transitions occurred, and posterior predictive checks indicated that the models fit the data. Reporting summary Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article. Data availability The deidentified participant data from Europe (SOPHIE) and the corresponding codebook are available after registration via the UK Data Service. The cleaned datasets (SOPHIE and SOPHIA) for the analyses in this article are available as csv files on the OSF. Code availability The R code to conduct the analyses is publicly available on the OSF. Other additional documents are available on the OSF as well, including the preregistration, the statistical analysis plan, and an overview of all measures in the surveys. We followed the STROBE checklist for reporting cross-sectional studies. We have preregistered Hypotheses 1 and 2, Research Question 1, and the data analyses on the OSF (December 13, 2021). After accessing the data (December 16, 2021) and inspecting the income-health correlation, we added an addendum for Hypotheses 3 and 4, Research Question 2, and the respective analyses (March 07, 2022).
Seaside residents and holidaymakers have felt it for centuries, but scientists have only recently started to investigate possible health benefits of the coast. Using data from 15 countries, new research led by Sandra Geiger from the Environmental Psychology Group at the University of Vienna confirms public intuition: Living near—but especially visiting—the seaside is associated with better health regardless of country or personal income. The idea that being near the ocean may boost health is not new. As early as 1660, doctors in England began promoting sea bathing and coastal walks for health benefits. By the mid-1800s, taking "the waters" or "sea air" were widely promoted as health treatments among wealthier European citizens. Technological advances in medicine in the early 20th century led to the decline in such practices, which are only recently gaining popularity again among the medical profession. As part of the project "Seas, Oceans, and Public Health In Europe," led by Professor Lora Fleming, Geiger and colleagues from the Universities of Vienna, Exeter, and Birmingham, as well as Seascape Belgium and the European Marine Board, surveyed over 15,000 participants across 14 European countries (Belgium, Bulgaria, Czechia, France, Germany, Greece, Ireland, Italy, the Netherlands, Norway, Poland, Portugal, Spain, the United Kingdom) and Australia about their opinions on various marine-related activities and their own health. The findings, published in the journal Communications Earth & Environment, surprised the team. Lead author Geiger said, "It is striking to see such consistent and clear patterns across all 15 countries. We also now demonstrate that everybody seems to benefit from being near the seaside, not just the wealthy. Although the associations are relatively small, living near and especially visiting the coast can still have substantial effects on population health." 
Understanding the potential benefits of coastal access for all members of society is key for policymaking. Dr. Paula Kellett from the European Marine Board said, "The substantial health benefits of equal and sustainable access to our coasts should be considered when countries develop their marine spatial plans, consider future housing needs, and develop public transportation links." But what does this mean for landlocked residents like Geiger and her colleagues in Austria? "Austrians and other central Europeans visit the coasts in their millions during the summer months, so they too get to experience some of these benefits. Besides, we are also starting to appreciate the similar health benefits offered by inland waters such as lakes and natural pools."
10.1038/s43247-023-00818-1
Biology
A missing genetic switch at the origin of embryonic malformations
Cell-specific alterations in Pitx1 regulatory landscape activation caused by the loss of a single enhancer, Nature Communications (2021). DOI: 10.1038/s41467-021-27492-1 Journal information: Nature Communications
http://dx.doi.org/10.1038/s41467-021-27492-1
https://phys.org/news/2021-12-genetic-embryonic-malformations.html
Abstract Developmental genes are frequently controlled by multiple enhancers sharing similar specificities. As a result, deletions of such regulatory elements have often failed to reveal their full function. Here, we use the Pitx1 testbed locus to characterize in detail the regulatory and cellular identity alterations following the deletion of one of its enhancers ( Pen ). By combining single cell transcriptomics and an in-embryo cell tracing approach, we observe an increased fraction of Pitx1 non/low-expressing cells and a decreased fraction of Pitx1 high-expressing cells. We find that the over-representation of Pitx1 non/low-expressing cells originates from a failure of the Pitx1 locus to coordinate enhancer activities and 3D chromatin changes. This locus mis-activation induces a localized heterochrony and a concurrent loss of irregular connective tissue, eventually leading to a clubfoot phenotype. This data suggests that, in some cases, redundant enhancers may be used to locally enforce a robust activation of their host regulatory landscapes. Introduction Alteration in the enhancer composition of regulatory landscapes at developmental genes can lead to pathologies by modifying the dosage and/or distribution of gene transcription 1 . Indeed, over the past years, losses of single regulatory units within complex and partially redundant regulatory landscapes were shown to have clear phenotypical outcomes despite inducing only partial decreases in average transcription 2 , 3 , 4 , 5 . As the alterations in the regulatory mechanisms following enhancer deletion have mostly been characterized using bulk tissue analysis, it has been difficult to determine the cell-specific variability behind the loss of expression that accounts for phenotypes. In order to understand the precise molecular origin of these phenotypes, it is therefore essential to characterize how a single enhancer contributes to the activation of entire regulatory landscapes in specific cell populations.
An effective model system to address these unsolved questions is the limb bud, where organogenesis requires a tight control of gene transcription to achieve correct patterning 6 . Critical to this is Pitx1 , a transcription factor coding gene that is normally expressed in developing hindlimb buds, but not forelimbs, which channels the limb development program to differentiate into a leg 7 , 8 , 9 . Consequently, forelimb Pitx1 gain-of-function can induce an arm-to-leg transformation, featured by the appearance of an ectopic patella as well as complex changes in muscle and tendon wiring 10 , 11 . In contrast, Pitx1 knockout has been shown to induce partial leg-to-arm transformations with the disappearance of the patella as well as long bone dysplasia and polydactyly 10 , 12 , 13 . Unexpectedly, bulk transcriptomics strategies have only revealed marginal downstream gene expression changes upon Pitx1 loss, suggesting that an interplay between these changes and the growth rate of limb cell subpopulations collectively results in the various phenotypes 8 , 10 , 11 , 13 , 14 . As for many developmental genes, several enhancers coordinate Pitx1 expression in hindlimbs and other tissues. So far, four enhancers have been identified in mammals: PelB , which drives a distal reporter pattern in hindlimbs, PDE , which drives expression in the mandibular arch, RA4 , which can drive reporters in a subset of fore- and hindlimb cells, and finally, Pen , a mesenchymal enhancer that drives expression in both fore- and hindlimbs 11 , 15 , 16 . Only the activity of Pen has so far been shown to strongly contribute to Pitx1 function in the hindlimb, as its deletion leads to a 35–50% reduction of Pitx1 expression 11 . The deletion of Pen has no impact on bone length or digit numbers but induces a partially penetrant clubfoot phenotype, similar to the one observed in mice and humans upon Pitx1 haploinsufficiency 11 , 14 .
One particularity of the Pitx1 locus is that it establishes fundamentally different 3D chromatin conformations in transcriptionally active hindlimbs and inactive forelimbs. In active hindlimbs, Pitx1 forms chromatin interactions with cognate cis-regulatory regions spread over 400 kb, including Pen as well as PDE, RA4 , and PelB . In contrast, in inactive forelimbs these interactions are absent and the Pitx1 gene forms a contact with the polycomb-repressed gene Neurog1 11 . In this work, we use a combination of single cell transcriptomics (scRNA-seq), a fluorescent cell-tracing approach and genomic technologies to define the contribution of a single enhancer ( Pen ) in establishing the epigenetically- and structurally-active Pitx1 regulatory landscape. Moreover, we investigate whether changes in enhancer activities or 3D structure fundamentally associate with transcription or if those can be functionally disconnected from the transcriptional process. Finally, we assess if Pitx1 expression is homogeneous across limb cell populations and if distinct expression levels rely on different enhancer repertoires or, alternatively, on progressive changes in cis-regulatory landscape activities. Results Two approaches to track Pitx1 activities suggest a bimodal cis-regulatory behavior In order to characterise transcriptional, chromatin and structural changes following the Pen enhancer deletion, we combined genetic manipulation of the Pitx1 locus with scRNA-seq and chromatin analysis of sorted limb cell populations. Both approaches enabled characterization of complementary features of gene transcriptional regulation following alterations of the cis-regulatory landscape.
First, to define the hindlimb cell types that are expressing Pitx1 and to assess how the Pen enhancer regulates its expression in these cells, we generated single-cell preparations from wildtype ( Pitx1 +/+ ) fore- and hindlimb buds as well as Pen enhancer deleted ( Pitx1 Pen−/Pen− ) or Pitx1 knocked-out ( Pitx1 −/− ) hindlimbs (Fig. 1A ). We performed 10x Genomics in duplicate from E12.5 limb buds as these correspond to a transition stage between patterning and cell-differentiation phases. By performing unsupervised clustering of all the wildtype and mutant single cell transcriptomic datasets, we identified five clusters, to which all the datasets contributed, corresponding to the main populations of the limb: one mesenchymal cluster ( Prrx1 +, Prrx2 +, Twist1 +; 89% of the cells) and four non-mesenchymal satellite clusters including muscle ( Myod1 +, Ttn +, Myh3 +; 4% of the cells), epithelium ( Wnt6 +, Krt14 +; 5% of the cells), endothelium ( Cdh5 +, Cldn5 +; 1% of the cells) and one immune cell cluster ( C1qa +, Ccr1 +; 1% of the cells) (Fig. 1B , Fig. S1 , Supplementary Dataset S1 ). Yet, as Pitx1 is mostly expressed in the hindlimb mesenchymal cluster, further analyses were performed only in these cells (Fig. 1C ). Fig. 1: Experimental setup, single cell clustering and regulatory sensor. A Pitx1 +/+ , Pitx1 Pen−/Pen− , and Pitx1 −/− transgenic E12.5 embryos were obtained by tetraploid complementation and single cell transcriptomic analyses were produced from fore- and hindlimbs. B UMAP clustering of wildtype and mutant fore- and hindlimbs shows one mesenchymal as well as four satellite clusters. C UMAP colored according to Pitx1 expression in wildtype hindlimbs (levels represented by the red color scale) shows expression mostly in the mesenchyme cluster. D A cassette containing a minimal β-globin promoter ( mP ) and an EGFP reporter gene is integrated upstream of Pitx1 . A secondary round of CRISPR/Cas9 targeting is then used to delete the Pen enhancer.
E Conventional and light-sheet microscopy reveal that Pitx1 GFP embryos display EGFP expression domains corresponding to those of Pitx1 ( N = 3), scale bars = 2 mm. F RNA-seq and H3K27ac tracks of sorted hindlimb cells show that the sensor approach can separate Pitx1 active (GFP+) and inactive (GFP−) regulatory landscapes. G C-HiC of the Pitx1 locus in GFP+ and GFP− hindlimb cells. Darker red or blue bins represent more frequent contacts as represented by scaled bars on the left. GFP+ cells bear chromatin interactions between Pitx1 and its associated enhancers (see green arrows). GFP− cells do not display these interactions but a strong contact between Pitx1 and Neurog1 (see red arrow). The lower map is a subtraction of the two above where GFP+ preferential interactions are displayed in red and GFP− preferential interactions in blue. Full size image In parallel, we devised a fluorescent reporter system to track the regulatory activities of the Pitx1 locus in hindlimbs (Fig. 1D ). Specifically, we first established a reporter line ( Pitx1 GFP ) by homozygously integrating a regulatory sensor cassette, consisting of a minimal β-globin promoter and an EGFP reporter gene, 2 kb upstream of the Pitx1 promoter in mouse embryonic stem cells (mESCs). These cells were re-targeted to obtain a homozygous deletion of the Pen enhancer ( Pitx1 GFP;ΔPen ). Embryos were then derived from the mESCs via tetraploid complementation 17 . Conventional and light sheet imaging of Pitx1 GFP embryos showed that the reporter was expressed in all Pitx1 expression domains including the pituitary gland, the mandible, the genital tubercle and the hindlimbs (Figs. 1E , S2A , Supplementary Video S1 ) 13 , 18 , 19 . In order to investigate potential alterations of gene expression following the EGFP transgene integration, we produced E12.5 bulk hindlimb transcriptomes in both Pitx1 +/+ and Pitx1 GFP .
Here, we did not observe a change in Pitx1 expression, suggesting that the insertion of the EGFP transgene did not alter Pitx1 regulation (Supplementary Dataset S2 ). We then FACS sorted GFP+ and GFP− cells from E12.5 Pitx1 GFP hindlimbs and processed cells for RNA-seq, ChIP-seq and Capture-HiC (C-HiC) (Figs. 1F–G , S2B, C ). We found that 8% of the cells in Pitx1 GFP hindlimbs displayed no EGFP signal, thereby suggesting that the majority of hindlimb cells possess an active Pitx1 regulatory landscape. We next compared the transcriptome of GFP+ and GFP− cells and observed a 40-fold enrichment for Pitx1 expression in GFP+ cells, validating the Pitx1 GFP allele to track the Pitx1 regulatory landscape activities (Figs. 1F , S3A , Supplementary Dataset S3 ). As expected from our scRNA-seq analyses, we found that GFP+/ Pitx1 + cells were enriched for limb mesenchymal derivative markers ( Prrx1 , Prrx2, Twist1, Sox9 , Col2a1, Col3a1, Lum ) and that GFP−/ Pitx1 − cells were enriched for markers of non-mesenchymal satellite clusters including muscle ( Myod1, Ttn ), epithelium ( Wnt6 , Krt15 ), endothelium ( Cdh5 , Cldn5 ) and immune cells ( C1qa , Ccr1 ) (Fig. S3B , Supplementary Dataset S3 ). Yet, the enrichment of these cell types does not preclude a fraction of GFP−/ Pitx1 − cells from being of mesenchymal origin, as we found a weak but clear expression of some mesenchymal markers such as Prrx1 or Twist1 in this population (Fig. S3C ). Conversely, we found weak expression of muscle ( Myh3 ) and ectodermal ( Krt14 ) markers in GFP+/ Pitx1 + cells (Fig. S3C ). Finally, as Pitx1 was previously associated with tissue outgrowth, we also assayed proliferative and apoptotic behaviors of GFP+ and GFP− cells 20 . As suspected, we found that GFP+/ Pitx1 + cells are slightly more proliferative and contain fewer apoptotic cells (Fig. S4A, B ).
We then assayed the cis-regulatory activities in GFP−/Pitx1− and GFP+/Pitx1+ hindlimb cells using the H3K27ac chromatin mark as a proxy for enhancer activities and C-HiC to determine the locus chromatin architecture 21 . In GFP−/Pitx1− cells, neither the Pitx1 promoter nor its various enhancers, including Pen , was found enriched for H3K27ac (Fig. 1F ). Moreover, the locus 3D structure is in a repressed state where Pitx1 displays a strong interaction with the repressed Neurog1 gene and no interaction with its cognate enhancers (Figs. 1G and S5 ). This data shows that GFP−/Pitx1− hindlimb cells display a complete absence of active regulatory landscape features. In contrast, in GFP+/Pitx1+ cells all known Pitx1 enhancers as well as its promoter are strongly enriched in H3K27ac chromatin marks. Furthermore, in these cells Pitx1 establishes strong contacts with its enhancers PelB , PDE, RA4 , and Pen (Figs. 1F, G and S5 ). In summary, this data shows that within the hindlimb, classically considered a Pitx1 active tissue, 8% of cells, of mesenchymal, immune, endothelial, muscle and epithelial origin, display an inactive Pitx1 cis-regulatory landscape and 3D architecture. Moreover, it suggests a bimodal regulatory behavior, where the Pitx1 promoter, its associated enhancers and the locus 3D structure either all display an active mode or none of them do. We then further characterised Pitx1 expression specificities within the hindlimb mesenchyme. Hindlimb proximal cell clusters express Pitx1 at higher levels To characterize Pitx1 transcription within mesenchymal subpopulations, we first re-clustered mesenchymal cells from all datasets. From this analysis, we could define nine clusters (Fig. 2A ). We first observed that their distribution in the UMAP space is strongly influenced by the limb proximo-distal axis, as illustrated by Shox2 (proximal marker) and Hoxd13 (distal marker) transcript distributions (Fig. 2B ).
We further annotated the clusters according to the expression of known marker genes (Supplementary Dataset S1 ). In the proximal limb section, we identified four clusters. First, we found an undifferentiated Proximal Proliferative Progenitors cluster which is characterized by high expression of proliferative marker genes, where most cells were found in G2 and S phase and that expresses markers linked to previously identified limb mesenchymal progenitor (LMP) cells ( PPP : Irx5 +, Alx4 +, Tbx2/3 +, Shox2 +, Hist1h1d +, Top2a +) (Figs. 2C, D and S6A, B ) 22 . We then identified a Tendon Progenitor cluster ( TP : Shox2 +; Osr1 +; Scx +) and an Irregular Connective Tissue cluster which includes muscle connective tissue and ultimately patterns tendons and muscles ( ICT : Shox2 +; Osr1 +, Dcn +, Lum +, Kera +, Col3a1 +) (Fig. 2C, D ) 23 . Finally, in the proximal limb we observed a single cluster of Proximal Condensations, which already displays late chondrogenic markers and will give rise to proximal limb bones ( PC : Tbx15 +; Sox9 +; Col2a1 +, Col9a3 +, Acan +) (Figs. 2C, D and S6B ) 22 . In the distal limb, we observed the presence of two undifferentiated distal mesenchyme ( Msx1 +) clusters that also relate to previously identified LMPs: one that we classified as Distal Proliferative Progenitors ( DPP : Tbx2/3 +, Jag1 +, Hoxd13 +; Msx1 +; Hist1h1d +) as it displays a strong expression of proliferation markers, while the other is defined as Distal Progenitors ( DP : Tbx2/3 +, Jag1 +, Hoxd13 +; Msx1 +) (Figs. 2C, D and S6B ). In both of these clusters a majority of the cells appear to be either in G2 or S phase, indicative of their high proliferative rate (Fig. S6A ).
Also, in the distal limb, we identified two more differentiated clusters: Early Digit Condensations ( EDC : Hoxd13 +; Sox9 +, Col2a1 −, Col9a3 −), which are a type of distal osteochondrogenic progenitors, and Late Digit Condensations ( LDC : Irx1 +, Col2a1 +, Col9a3 +), which are more differentiated chondrocytes 22 . Finally, in-between the proximal and distal regions ( Shox2 + and Hoxd13 +), we found a cluster of chondrocytic cells that we considered to be the Mesopodium ( Ms : Sox9 +; Foxc1 +, Gdf5 +, Col2a1 +), thus corresponding to ankles or wrists (Fig. 2C, D , Supplementary Dataset S1 ). Fig. 2: Pitx1 expression in wildtype hindlimbs. A UMAP of re-clustered mesenchymal cells from all datasets. B UMAP distribution of Shox2 (proximal) and Hoxd13 (distal) markers. Red to blue heatmap color scale represents levels of expression of Shox2 or Hoxd13 , respectively. C Representative marker genes for each cluster. The dot size corresponds to the percentage of cells that express a given marker in the hindlimb Pitx1 +/+ dataset. D UMAP expression distribution of selected marker genes. Red color scales represent selected marker gene levels of expression. E RNA-velocity analysis of hindlimb wildtype mesenchymal clusters. Note that the differentiated chondrocytic cell clusters (upper part) are predicted to derive from proximal and distal progenitors' clusters (bottom part). F Pitx1 expression density plot in the proximal (red line) and distal clusters (blue line) in the hindlimb wildtype dataset. Definition of the three types of Pitx1 -expressing cells: non/low- (<= 0.3 Pitx1 expression levels), intermediate- (>0.3; <= 1.45), high-expressing (>1.45). The expression scale corresponds to logE(expression_value+1). G Hindlimb wildtype cell distribution across clusters in the UMAP space based on Pitx1 expression levels. H Hindlimb wildtype cell proportions according to Pitx1 expression levels across mesenchymal clusters.
Full size image To better understand the links between the different clusters, we ran an RNA velocity analysis in the hindlimb dataset, which predicts cell lineage differentiation based on the dynamics of spliced (mature) versus unspliced (immature) mRNAs (Fig. 2E ) 24 , 25 . We found that in the proximal limb a set of Irx5 -expressing cells located within the PPP and ICT clusters are progenitors for the more differentiated proximal clusters such as TP and PC (Fig. 2D, E ) 26 . In the distal limb, DP and DPP clusters appear to be progenitors for EDC and then LDC. The Ms cluster originates from both proximal (PPP-ICT) and distal (DP-DPP) progenitor clusters, confirming its proximo-distal origin (Fig. 2E ). We then assessed whether Pitx1 is differentially expressed among clusters in Pitx1 +/+ hindlimbs. Overall, we found Pitx1 expressed in all mesenchymal clusters, yet with a proximal preference (Fig. 2D, F ). We then classified mesenchymal Pitx1 -expressing cells in three categories: non/low-expressing (transcription values <= 0.3, 21% of the hindlimb wildtype cells), intermediate-expressing (transcription > 0.3; <= 1.45, 40% of wildtype cells), and high-expressing (transcription > 1.45, 39% of wildtype cells) (Fig. 2F ). As expected, we found that a majority of high-expressing cells are located in proximal clusters (PPP, TP, ICT, PC) and a majority of intermediate-expressing cells in distal clusters (DP, DPP, EDC, LDC) (Fig. 2G, H ). We also observed that the Ms cluster, previously identified as a cluster originating from the proximal and distal cell types, is formed by a similar distribution of high-expressing (proximal) and intermediate-expressing (distal) cells, in line with a proximo-distal origin (Fig. 2H ). Pitx1 expression levels associate with global change in regulatory landscape acetylation Next, we explored how cells can achieve distinct Pitx1 transcriptional outputs.
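The three-way classification above can be written as a simple binning rule (a Python sketch; applying the cutoffs on the logE(expression_value + 1) scale follows the figure legend, while taking a raw expression value as input is an assumption):

```python
import math

def classify_pitx1(expression_value):
    """Bin a cell by Pitx1 expression using the paper's cutoffs
    (<= 0.3 non/low, <= 1.45 intermediate, otherwise high) on the
    natural-log scale logE(expression_value + 1)."""
    level = math.log(expression_value + 1.0)
    if level <= 0.3:
        return "non/low"
    if level <= 1.45:
        return "intermediate"
    return "high"
```

Counting cells per bin in each genotype is then enough to reproduce summaries like the 21%/40%/39% wildtype split reported above, given the same underlying expression matrix.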
Practically, we asked whether high- and intermediate-expressing cells use a distinct Pitx1 enhancer repertoire to account for the different expression levels. We sorted the two cell populations from Pitx1 GFP hindlimbs by GFP intensities: GFP+− (intermediate-expressing) and GFP++ (high-expressing) and performed RNA-seq as well as H3K27ac ChIP-seq on the two positive populations (Fig. 3A ). On average, we found three times more Pitx1 transcripts in GFP++ cells than in GFP+− cells as well as an enrichment for several known Pitx1 target genes including Tbx4 (Fig. 3B , Fig. S7A, B , Supplementary Dataset S4 ) 8 . Moreover, as expected from the single-cell analysis, high-expressing GFP++ cells were mostly enriched for proximal limb markers ( Shox2 , Gsc , Tbx18 , Tbx3 , and Hoxa11 ) and showed higher expression of ICT marker genes ( Kera and Lum ) (Figs. 3C and S7C , Supplementary Dataset S4 ). In contrast, intermediate-expressing GFP+− cells were enriched for distal cell markers ( Hoxa13 , Hoxd13, Wnt5a , Lhx2 and Msx1 ) (Figs. 3C , S7C , Supplementary Dataset S4 ). Fig. 3: High- and intermediate-expressing Pitx1 regulatory landscape activities. A FACS sorting of Pitx1 GFPs/GFPs forelimbs and hindlimbs. Note the EGFP high- (GFP++, dark green) and intermediate-expressing (GFP+−, light green) populations. B Normalised counts of Pitx1 expression in GFP−, GFP+− and GFP++ cells. Averages are represented by a horizontal bar. Adjusted p -values (padj) of differential gene expression are computed using the Wald test and Benjamini-Hochberg multiple testing correction as implemented in the Deseq2 tool (Supplementary Dataset S4 and Source data). C Expression of selected marker genes in GFP− (satellite cell types), GFP+− (distal) and GFP++ (proximal and ICT) cells. D H3K27ac ChIP-seq at the Pitx1 locus. Note that enhancers are active in both intermediate- and high-expressing cells, yet with a few regions marked only in high-expressing cells.
E H3K27ac profile at the Pitx1 gene body, Pitx1 Proximal Promoter Region (PPPR, see black arrow), region A (regA), PDE and Pen enhancer. In both intermediate- and high-expressing cells, the previously characterized Pitx1 enhancer repertoire (PelB, PDE, RA4, and Pen) was marked by H3K27ac. Yet, in high-expressing (GFP++) cells, stronger H3K27ac signal was found at these elements, concomitantly with a strong increase at two specific regions: the Pitx1 proximal promoter region (PPPR) and region A (regA). We also observed a few regions upstream of Pen that were strongly enriched for H3K27ac in GFP++ cells (Fig. 3D, E) 11. However, those sequences do not seem to be important for Pitx1 expression, as the deletion of the entire region between Pitx1 and Pen, including Pen but not those regions, fully recapitulates the Pitx1 hindlimb knockout phenotype 11. Altogether, these data show that Pitx1 regional expression differences across hindlimbs associate with a progressive increase in the activity of its cis-regulatory landscape rather than with the usage of different enhancer repertoires. These results further reinforce the idea that the fundamental unit of Pitx1 regulation is the landscape as a whole rather than individual enhancers.

Pen deletion increases the proportion of Pitx1 non/low-expressing cells in hindlimbs

Having observed that the regulatory units of the locus act in coordination to modulate gene expression, we sought to test how the deletion of one of them influences the overall unity of the locus. Therefore, we took advantage of the Pitx1 EGFP sensor and of scRNA-seq to track how the homozygous deletion of the Pen enhancer affects Pitx1 locus activity in hindlimbs. First, the removal of Pen within the Pitx1 GFP background ( Pitx1 GFP;ΔPen ) induced a shift in the expression of the GFP reporter gene in hindlimbs (Fig. 4A, B).
Specifically, the proportion of GFP− cells rose from 8% in Pitx1 GFP to 16% in Pitx1 GFP;ΔPen at E12.5, and from 12% to 29% at E13.5 (Fig. S8A, B). To confirm that this effect is not due to a difference in the distribution of EGFP fluorescence during cell sorting, we compared EGFP transcription in Pitx1 GFP;ΔPen and Pitx1 GFP GFP− cells and did not observe a difference (Fig. S8C). Fig. 4: Influence of the Pen deletion on Pitx1 expression in hindlimb cell populations. A EGFP expression pattern in Pitx1 GFP and Pitx1 GFP;ΔPen E12.5 embryos (N = 3), scale bar = 2 mm. B FACS profiles of Pitx1 GFP (red) and Pitx1 GFP;ΔPen (cyan) hindlimbs show an increased number of EGFP non/low-expressing cells as well as a decrease of EGFP high-expressing cells in Pitx1 GFP;ΔPen hindlimbs. C Pitx1 expression distributions in Pitx1 +/+ (red) and Pitx1 Pen−/Pen− (cyan) hindlimb cells show an increased proportion of non/low-expressing cells and a decreased proportion of high-expressing cells in Pitx1 Pen−/Pen− hindlimbs. The dotted lines indicate the threshold between non/low-expressing and intermediate-expressing cells (0.3) as well as between intermediate-expressing and high-expressing cells (1.45). The expression scale is ln(expression value + 1). D Pitx1 expression across all clusters in Pitx1 +/+ and Pitx1 Pen−/Pen− hindlimbs. Averages are represented by a horizontal bar. Adjusted p-values (p) shown in the figure were calculated by the Wilcoxon rank-sum test using the FindMarkers function from the Seurat R package (Supplementary Dataset S5). The dotted lines indicate the threshold between non/low-expressing and intermediate-expressing cells (0.3). The fold change in non/low-expressing cell number between Pitx1 +/+ and Pitx1 Pen−/Pen− is shown at the bottom of the violin plots. Note the strong loss of expression and the accumulation of non/low-expressing cells in the ICT and PPP clusters. E Distribution of Pitx1 expression in proximal and distal cells of Pitx1 +/+ and Pitx1 Pen−/Pen− hindlimbs.
Note the strong increase in the proximal non/low-expressing cell fraction. F Proportion of non/low-, intermediate- and high-expressing Pitx1 cells across conditions. Second, we compared the Pitx1 +/+ and Pitx1 Pen−/Pen− scRNA-seq datasets and found a similar effect, as the Pen deletion induces a significant 29% loss of Pitx1 expression (adjusted p-value = 1.75e−96, Wilcoxon rank-sum test), characterised by a decrease in Pitx1 high-expressing cells and a strong increase in non/low-expressing cells (Fig. 4C). Across hindlimb mesenchymal cells, the proportion of non/low-expressing cells indeed rose from 21% in Pitx1 +/+ to 35% in Pitx1 Pen−/Pen−. In summary, the two approaches show that behind the modest average loss of Pitx1 expression, a strong increase of non/low-expressing cells in mutant hindlimbs could account for the clubfoot phenotype seen in these animals 11. We further quantified within the scRNA-seq dataset whether this alteration in expression was equally distributed among the various hindlimb cell types or whether some populations were more specifically affected. All clusters, with the exception of Ms and LDC, showed a significant loss of Pitx1 expression ranging from 24 to 39% (Fig. 4D). With respect to the proportion of non/low-expressing cells, proximal cells showed a preferential 2.1-fold enrichment of non/low-expressing cells (13 to 28%) in comparison with distal cells (1.6-fold, 29 to 45%) (Fig. 4E, F). We then computed the increase of non/low-expressing Pitx1 cells in each cluster and saw that two proximal clusters, ICT and PPP, showed particularly strong 3.5- and 2-fold increases in Pitx1 non/low-expressing cells, respectively (Fig. 4D). It is important to note that in both clusters the vast majority of cells usually express Pitx1 at a high level (Figs. 2H, S9). Other clusters showed 1.5- to 1.8-fold increases in Pitx1 non/low-expressing cells.
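The per-cluster expression comparisons above rely on the Wilcoxon rank-sum test (Seurat's FindMarkers). A minimal standard-library sketch of that test is shown below; it uses the normal approximation and omits the tie and continuity corrections that production implementations such as Seurat apply, so it is an illustrative stand-in rather than the paper's actual statistic.

```python
import math

def rank_sum_pvalue(a, b):
    """Two-sided Wilcoxon rank-sum p-value via the normal approximation.
    Simplified stand-in for Seurat's FindMarkers test: average ranks are
    used for ties, but no tie-variance or continuity correction is applied."""
    combined = sorted((v, 0 if i < len(a) else 1)
                      for i, v in enumerate(list(a) + list(b)))
    # assign average ranks to tied values
    ranks = [0.0] * len(combined)
    i = 0
    while i < len(combined):
        j = i
        while j < len(combined) and combined[j][0] == combined[i][0]:
            j += 1
        avg = (i + j + 1) / 2  # mean of 1-based ranks i+1 .. j
        for k in range(i, j):
            ranks[k] = avg
        i = j
    r1 = sum(r for r, (_, grp) in zip(ranks, combined) if grp == 0)
    n1, n2 = len(a), len(b)
    u1 = r1 - n1 * (n1 + 1) / 2
    mean = n1 * n2 / 2
    sd = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (u1 - mean) / sd
    # two-sided p from the standard normal CDF
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
```

Fed with the per-cell Pitx1 expression vectors of two conditions, this returns small p-values for shifted distributions and p = 1 for identical ones.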
In conclusion, we found that proximal, high-expressing clusters are more affected by the enhancer deletion than distal, intermediate-expressing clusters. We subsequently investigated whether this differential alteration of Pitx1 expression among hindlimb cell populations affected the proportion of cells within the clusters.

Pen deletion delays the formation of irregular connective tissue

As a positive control for the effect of Pitx1 loss-of-function in Pitx1 Pen−/Pen− embryos, we took advantage of two datasets that do not express Pitx1: Pitx1 +/+ forelimbs and Pitx1 −/− hindlimbs. As a proxy for the functional impact of Pitx1 transcriptional change on limb development, we measured the relative proportions of the different cell clusters in the different datasets. First, we did not observe changes in the proportions of the non-mesenchymal satellite cell clusters in any of the conditions (Fig. S10). We then measured the proportions of the different mesenchymal sub-clusters (Fig. 5A, B). Comparing wildtype fore- and hindlimbs, we did not observe any significant change in the proportion of cell types, suggesting that fore- and hindlimbs are similarly populated despite the obvious structural differences between arms and legs. In contrast, Pitx1 −/− hindlimbs display a heterochronic phenotype, featuring an increase in progenitor cells in both the proximal and distal regions of hindlimbs (PPP and DPP cell clusters) with a concurrent decrease in several differentiated cell types in proximal and distal hindlimbs (ICT, PC, Ms, and LDC) (Fig. 5A, B). Remarkably, the loss of the Pen enhancer resulted in a similar effect that was only significant in the proximal limb cell clusters (Fig. 5A, B). Specifically, the proportion of PPP cells increased in Pitx1 Pen−/Pen− hindlimbs while the proportion of ICT cells decreased. This alteration correlates with the strong loss of Pitx1 transcription seen in both clusters (Fig. 4D).
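Shifts in cluster proportions of the kind quantified above can be tested by permutation, in the spirit of the cited differential proportion analysis. The sketch below is a simplified, illustrative stand-in and not the R implementation used in the paper; cluster labels and cell numbers are invented.

```python
import random

def diff_proportion_pvalue(labels_a, labels_b, cluster, n_perm=1000, seed=0):
    """Permutation test for a change in one cluster's proportion between
    two conditions. labels_a / labels_b are per-cell cluster labels.
    Simplified stand-in for the cited differential proportion analysis."""
    rng = random.Random(seed)
    prop = lambda labels: sum(1 for l in labels if l == cluster) / len(labels)
    observed = abs(prop(labels_a) - prop(labels_b))
    pooled = list(labels_a) + list(labels_b)
    n_a = len(labels_a)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # random reassignment of cells to conditions
        if abs(prop(pooled[:n_a]) - prop(pooled[n_a:])) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)  # add-one to avoid a zero p-value
```

A large shift (e.g. a PPP cluster growing from 50% to 80% of cells) gives a small p-value, while identical label distributions give p close to 1.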
The increase in PPP cells is further supported by the upregulation of its markers in Pitx1 Pen−/Pen− hindlimbs (Hist1h genes, Top2a, and others; Supplementary Dataset S5). Fig. 5: Influence of the Pen deletion on limb cell populations. A UMAP of mesenchymal cell type proportions across conditions; (+) and (−) symbols indicate an increase or decrease in cell proportions in comparison to wildtype hindlimbs. Abbreviations are described in Fig. 2A. B Quantification of cell type proportions across conditions; p-values < 0.01 are marked with an asterisk (see Source data). P-values were calculated pairwise using differential proportion analysis in R 60. Abbreviations are described in Fig. 2A. C Velocity analyses of proximal clusters in all datasets. Arrows represent the direction toward the predicted fate territories. Note in Pitx1 Pen−/Pen− and Pitx1 −/− the velocity-predicted cell movements between PPP and ICT, which likely represent ongoing differentiation that is not predicted in wildtype hindlimbs or forelimbs. D Velocity analyses of distal clusters in all datasets. Note in Pitx1 −/− hindlimbs the loss of late digit condensations (black arrows) at the end of the velocity-predicted cell movements. To test whether these effects could be explained by a delayed differentiation of progenitor cells, we performed a velocity analysis on Pitx1 −/−, Pitx1 Pen−/Pen− and Pitx1 +/+ limbs in proximal and distal cell clusters separately. In the proximal part of the hindlimb, we found in both mutants a predicted connection from PPP to ICT cells, suggesting an ongoing differentiation process (Fig. 5C). This connection was not present in Pitx1 +/+ fore- and hindlimbs, suggesting that the differentiation process was completed in these tissues.
These findings are further supported by an increase of ICT marker genes (Lum, Dcn, and Kera) in PPP cells of both mutants, suggesting that those cells have only partially adopted an ICT identity and have not yet fully differentiated (Fig. S11). In contrast, the velocity analysis of distal clusters did not show any changes in Pitx1 Pen−/Pen− hindlimbs, in agreement with the unaltered proportions of distal cell clusters shown above (Fig. 5B, D). Finally, we observed in Pitx1 −/− hindlimbs an accumulation of distal progenitor cells and a loss of differentiated LDC cells, suggesting a slower distal differentiation process in Pitx1 −/− hindlimbs. Together, these findings support a form of heterochrony that affects only the proximal part of Pitx1 Pen−/Pen− hindlimbs and that is characterised by a delayed differentiation of PPP into ICT. As Pitx1 has been shown to have both indirect and direct downstream effects, we further investigated differentially expressed genes in Pitx1 loss-of-function hindlimbs that could induce these effects. In particular, it has been shown that Tbx4, a known downstream target gene of Pitx1, mediates the Pitx1 effect on hindlimb bud growth rate 8,20,27. As anticipated, we found a downregulation of Tbx4 in all clusters aside from PC, Ms, and LDC in both Pitx1 −/− and Pitx1 Pen−/Pen− hindlimbs, with the strongest effect in the ICT and PPP clusters (Fig. S12A–D, Supplementary Dataset S5). To further determine the origin of the Tbx4 loss, we assessed the expression of caudal Hox genes, which have been suggested to control Tbx4 along with Pitx1, in Pitx1 Pen−/Pen− hindlimb clusters 28. Here, we did not find an alteration in Hox expression levels that correlates with the Tbx4 loss, suggesting that the Tbx4 decrease is instead a direct effect of the Pitx1 loss-of-expression (Supplementary Dataset S5).
Finally, we measured whether the Tbx4 expression loss was sufficient to alter cell proliferation and apoptosis, and thereby change the hindlimb cell type composition 20. Overall, we did not observe changes in either, suggesting that the observed loss of Tbx4 is not sufficient to alter cell proliferation and apoptosis, and that the induced phenotype arises from an independent mechanism (Fig. S13A, B). Moreover, aside from Tbx4, we found numerous dysregulated genes in Pitx1 Pen−/Pen− hindlimbs which might contribute to the observed phenotypes (Supplementary Dataset S5). This is the case for Dcn, an ICT marker gene previously described to be involved in tendon elasticity in mice, as well as the Six1 and Six2 genes, which are expressed in connective tissue and necessary for skeletal muscle development 23,29,30,31,32.

The Pen enhancer contributes to Pitx1 regulatory landscape activation

The establishment of the active Pitx1 chromatin landscape includes changes in 3D conformation and the acetylation of specific cis-regulatory elements. Therefore, we asked whether the Pen enhancer itself is required to establish these features, and specifically whether its deletion would impact them. In GFP+ and GFP− cells from Pitx1 GFP;ΔPen hindlimb buds, we used RNA-seq to assess whether we could observe changes in cellular identity upon Pen enhancer loss similar to those previously described using scRNA-seq. As expected, we observed in GFP− cells the accumulation of mesenchymal markers (Prrx1, Twist1), with a particular enrichment for ICT markers (Col3a1, Col1a1, Col1a2, Lum) (Fig. 6A, Supplementary Dataset S6). As a consequence of the accumulation of Pitx1 non/low-expressing mesenchymal cells, we also observed a dilution of non-mesenchymal clusters, marked by a decrease of epithelium (Wnt6, Krt14) and muscle (Ttn) markers.
In GFP+ cells, we did not observe a clear change in identity markers, indicating that the cell type composition is similar between Pitx1 GFP and Pitx1 GFP;ΔPen high-expressing cells (Supplementary Dataset S7). This suggests that these high-expressing cells, which escape a loss of expression following the deletion of Pen, must display an adaptive mechanism to accommodate the Pen enhancer loss. Fig. 6: Single enhancer deletion results in inefficient regulatory landscape activation. A Log2 fold change and RPKM of mesenchymal (red) and satellite (dark green) marker genes in Pitx1 GFP and Pitx1 GFP;ΔPen GFP− hindlimb cells. Note the decrease in satellite markers and the increase in mesenchymal markers in Pitx1 GFP;ΔPen GFP− cells. B H3K27ac ChIP-seq and RNA-seq tracks at the Pitx1 locus in GFP+ cells of Pitx1 GFP and Pitx1 GFP;ΔPen hindlimbs. Note the loss of the Pen enhancer region (black arrow). C H3K27ac ChIP-seq and RNA-seq at the Pitx1 locus in GFP− cells of Pitx1 GFP and Pitx1 GFP;ΔPen hindlimbs. Note the acetylation of the Pitx1 promoter and enhancers (blue arrows) and the weak Pitx1 transcription. D C-HiC subtraction map between GFP− and GFP+ Pitx1 GFP;ΔPen hindlimb cells. GFP+ preferential interactions are displayed in red and GFP− preferential interactions in blue. E C-HiC subtraction maps between Pitx1 GFP and Pitx1 GFP;ΔPen GFP+ hindlimb cells. Note the loss of interaction between Pitx1 and the Pen-deleted region (green arrow). Pitx1 GFP;ΔPen GFP+ preferential interactions are displayed in red and Pitx1 GFP GFP+ preferential interactions in blue. F Subtraction track of virtual 4C between Pitx1 GFP (blue) and Pitx1 GFP;ΔPen (red) GFP+ hindlimb cells with the Pitx1 promoter as viewpoint. Note the partial loss of interactions between Pitx1 and its telomeric enhancers (PDE, RA4, and Pen; green arrows). G C-HiC subtraction maps between Pitx1 GFP and Pitx1 GFP;ΔPen GFP− hindlimb cells.
Pitx1 GFP;ΔPen GFP− preferential interactions are displayed in red and Pitx1 GFP GFP− preferential interactions in blue. We then performed H3K27ac ChIP-seq in the escaping GFP+ cells and in the increased fraction of GFP− cells. In Pitx1 GFP;ΔPen GFP+ cells, we observed a distribution of H3K27ac over the landscape that was virtually identical to that of Pitx1 GFP GFP+ hindlimb cells, with the exception of the Pen enhancer itself (Fig. 6B). This result suggests that the Pitx1-expressing cells in the Pen deletion background use the same enhancer repertoire as the Pitx1 GFP expressing cells and thus do not use an alternative regulatory landscape. Moreover, we observed the same average Pitx1 expression level in Pitx1 GFP and Pitx1 GFP;ΔPen GFP+ cells (Supplementary Dataset S7). In GFP− cells deleted for Pen, in contrast to Pitx1 GFP cells, we observed ectopic acetylation of the Pitx1 promoter as well as of the RA4 and PelB enhancers (Fig. 6C). These activities are likely caused by the relocation, into the GFP− fraction, of cells that would normally express Pitx1 but fail to establish a fully active landscape in the absence of Pen. In these cells, we observed a marginal increase in Pitx1 expression (FC = 1.6, padj = 0.0026), suggesting that the locus is less repressed than in wildtype GFP− cells (Fig. 6C, Supplementary Dataset S6). We then measured how the lack of Pen affects the 3D structure dynamics of the locus in Pitx1 GFP;ΔPen hindlimbs. First, GFP+ and GFP− Pitx1 GFP;ΔPen hindlimb cells displayed differences similar to their Pitx1 GFP active and inactive counterparts (Figs. 1F, 6D, S14A). This suggests that escaping high-expressing hindlimb Pitx1 GFP;ΔPen cells do not require Pen to establish an active 3D conformation. We then asked whether these cells bear an alternative chromatin structure to compensate for the loss of Pen. Comparing Pitx1 GFP and Pitx1 GFP;ΔPen GFP+ cells, we saw no major differences (Figs. 6E, S14B).
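A virtual 4C comparison of the kind shown in the promoter-viewpoint track of Fig. 6F can be sketched from a binned contact matrix as follows. This is an illustrative reconstruction under simple assumptions (a symmetric matrix of raw counts, sum-normalised profiles), not the C-HiC processing pipeline used in the paper.

```python
def virtual_4c(contact_matrix, viewpoint):
    """Virtual 4C profile: the viewpoint bin's row of a binned C-HiC
    contact matrix, normalised to sum to 1 so conditions are comparable."""
    row = contact_matrix[viewpoint]
    total = sum(row)
    return [v / total for v in row]

def subtraction_track(matrix_a, matrix_b, viewpoint):
    """Per-bin difference of two virtual 4C profiles (condition A minus B).
    Positive bins contact the viewpoint preferentially in A, negative in B."""
    a = virtual_4c(matrix_a, viewpoint)
    b = virtual_4c(matrix_b, viewpoint)
    return [x - y for x, y in zip(a, b)]
```

With the Pitx1 promoter bin as viewpoint, negative values over the PDE/RA4 bins in a wildtype-minus-mutant comparison would correspond to the contact reduction described in the text.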
Yet, using virtual 4C, we saw a slight reduction of contacts between the Pitx1 promoter and PDE/RA4 in GFP+ cells (Figs. 6F, S14D). This suggests that the remaining high-expressing cells do not necessarily undergo a strong adaptive structural response to the loss of Pen to ensure high Pitx1 expression. Finally, we asked whether the relocated Pitx1 GFP;ΔPen GFP− cells, which bear ectopic promoter and enhancer acetylation, display features of an active 3D structure (Figs. 6G, S14C, D). However, we did not observe any changes in the Pitx1 locus conformation in these cells in comparison to Pitx1 GFP GFP− cells. This shows that despite some remaining regulatory activity (evidenced by low-level H3K27ac; arrows in Fig. 6C), the locus is unable to assume its active 3D structure and therefore to efficiently transcribe Pitx1 (Fig. 6G). In conclusion, the Pen enhancer is necessary to ensure that all the cells with active enhancers at the Pitx1 locus undergo a robust transition toward a structurally and transcriptionally active landscape (Fig. 7). Fig. 7: Model. In wildtype hindlimb tissues (left panel), 8% of the nuclei, mostly of non-mesenchymal origin, display a completely repressed Pitx1 locus, characterised by inactive enhancers (white ovals), a polycomb-repressed Pitx1 gene (red rectangle) and an inactive 3D chromatin structure (Pitx1 does not contact its enhancers but contacts the repressed Neurog1 gene). In active nuclei, the situation is inverted, with active enhancers (green ovals), an active 3D chromatin structure (Pitx1 contacts its enhancers) and strong Pitx1 transcription. In contrast, in hindlimbs lacking the Pen enhancer (right panel), 16% of the cells lack Pitx1 transcription. Among these cells, some display a partially active regulatory landscape. These latter cells, which have failed to establish an active 3D structure and strong Pitx1 transcription, are of mesenchymal origin, in particular of ICT and PPP types.
The remaining active cells in mutant hindlimbs appear to display wildtype expression levels. Phenotypically, the effect of the enhancer deletion is a disharmonious outgrowth of cell populations, featuring a gain of PPP and a decrease of ICT cells. This cellular phenotype is likely at the origin of the clubfoot phenotype.

Discussion

In this work we have shown that hindlimb cells display several states of Pitx1 regulatory activity. In active cells, all enhancers are marked with the active H3K27ac chromatin modification and contact the Pitx1 promoter. In contrast, in inactive cells, we could not observe partial regulatory activities, i.e. neither enhancer acetylation nor enhancer-promoter interactions. This shows that the locus follows a bimodal behavior whereby the regulatory landscape as a whole acts on Pitx1 transcription. Indeed, a common set of coordinated enhancers is active in both proximal Pitx1 high-expressing and distal Pitx1 low-expressing cells. In fact, the Pitx1 regulatory landscape acts here similarly to what was previously defined as a holo-enhancer, where the whole region works as a coherent regulatory ensemble 33. In this perspective, Pitx1 expression levels are adjusted by the entire landscape. This is what we observed in high Pitx1-expressing proximal cells, where the same enhancer set as in distal cells displays a higher enrichment for the active H3K27ac chromatin mark, along with a few proximal-specific regions that are more enriched for H3K27ac. This suggests that proximal transcription factors or signaling cues control the landscape either by binding simultaneously at several Pitx1 cis-regulatory regions or by specifically modifying other parameters of locus activity, such as the frequency of active chromatin interactions or the proximity to the repressive nuclear lamina.
Here we have tested how the loss of one of the regulatory elements, the Pen enhancer, which is conserved among all tetrapods and required for hindlimb identity, affects the establishment of the Pitx1 active landscape 11,34. Some escaping cells can induce Pitx1 regulatory landscape activation without Pen, suggesting that the other cis-regulatory modules (PelB, PDE, and RA4) provide a form of compensation. These modules are likely activated by a similar gene-regulatory network in wildtype and mutant hindlimbs, as we could not observe a clear shift in the cell identity of GFP/Pitx1-expressing cells. Alternatively, a cumulative effect of marginal transcriptional changes in cell identity, along with specific non-transcriptional identity differences, could maintain the capacity of cells to generate a high Pitx1 expression level despite the absence of Pen. Simultaneously, many Pitx1 low/non-expressing cells that bear an enrichment of H3K27ac at the Pitx1 promoter and at several of its enhancers accumulate in hindlimbs (Fig. 7). Despite the presence of this active modification, the Pitx1 locus does not adopt an active 3D chromatin folding but maintains the hallmarks of its inactive configuration. In fact, these accumulated low/non-expressing cells are seemingly stuck in a limbo between activity and repression, and show the importance of the coordinated action of enhancer activity and 3D chromatin changes to achieve sufficient transcriptional strength. Therefore, we hypothesize that the role of Pen is not to act as a pattern-defining enhancer but rather as a support enhancer that ensures a robust transition of cells towards a fully active Pitx1 landscape and therefore strong Pitx1 transcription. Here, other enhancers, such as RA4 and PelB, as well as other yet-to-be-defined enhancers, might bear this pattern-defining role.
In fact, Pen is a good model for understanding the fundamental role of the many enhancers that have been characterised with an activity diverging from that of the gene they control 35,36,37. This “class” of enhancers would therefore govern the cooperativity of a locus's regulatory landscape without by itself defining its expression specificity. Changes in the number of cells that express Pitx1 in the hindlimb have strong phenotypic consequences. In fact, the complete loss of Pitx1 induces an increase in proximal and distal progenitor cells concomitantly with a loss of differentiated cell types, overall altering the proportion of specific cell clusters in hindlimbs. The global increase in progenitors indicates a heterochrony in limb development that ultimately results in a reduction of limb size and the loss of some limb structures, such as the patella. In the case of the Pen enhancer deletion, we saw an enrichment of Pitx1 low/non-expressing cells in the PPP and ICT clusters, resulting in a delayed differentiation of PPP into ICT. Although the reduction of Pitx1 transcription induces a decrease of its direct target Tbx4, which is involved in limb bud outgrowth and cell proliferation, this was not sufficient to alter cell proliferation or apoptotic rates at E12.5, suggesting that the clubfoot phenotype builds on a Tbx4-independent differentiation problem 20,27. Here, the particularly strong effect of the Pen deletion on the ICT cell proportion pinpoints these cells as the origin of the clubfoot phenotype seen in mice lacking the enhancer. In fact, ICT has repeatedly been reported to function in a non-cell-autonomous way during limb development and to act as an important driver of muscle patterning 23,38,39,40,41,42,43. We therefore suspect that the loss of ICT in hindlimbs leads to a muscle patterning defect, which would be at the base of the clubfoot phenotype.
Moreover, the observed heterochrony in several of the mesenchymal cell populations could collectively cause the clubfoot, since coordinated expansion of, and interactions among, the different mesenchymal cell populations are required for normal limb morphogenesis. Finally, despite also lacking Pitx1 expression, forelimb cell clusters are present in the same proportions as hindlimb ones. This suggests that the role of Pitx1 in hindlimbs is mirrored by other genes in forelimbs, such as Tbx5, that account for a harmonious outgrowth of the various cell populations. Indeed, Tbx5 loss of expression in the ICT population alters muscle and tendon patterning, causing the mice to hold the paw in a supine position and to walk on the edge or dorsal surface of the paw, resembling a clubfoot phenotype 23. Our characterization of a single enhancer loss-of-function mutant at a cell subpopulation level opens the way to studying the effect of other regulatory mutations at the same resolution and, in particular, of gain-of-function mutations. Such approaches will make it possible to select particular cell subpopulations that show ectopic transcription in comparison to neighboring cells that bear the same mutation but no ectopic expression. This will facilitate a precise definition of the features that are permissive for transcriptional gain-of-function and will be an important tool to further investigate the relationship between 3D structure, chromatin modifications, and gene transcriptional activation.

Methods

Cell culture, mice and tissue processing

Animal procedures

All animal procedures were in accordance with institutional, state, and government regulations (Canton de Genève authorisation: GE/89/19).

CRISPR/Cas9 engineered alleles

Genetically engineered alleles were generated using CRISPR/Cas9 editing according to ref. 44. Briefly, sgRNAs were designed using the online software Benchling and were chosen based on predicted off-target and on-target scores.
All sgRNAs and target genomic locations for CRISPR–Cas9 can be found in Supplementary Table S1. sgRNAs were then sub-cloned into the pX459 plasmid from Addgene, and 8 μg of each vector was used for mESC transfection. mESC culture and genetic editing followed standard procedures 45. To construct the Pitx1 GFP mESC clone, the LacZ sensor from ref. 11 was adapted by exchanging LacZ for an EGFP cassette. The sgRNA was designed to target CRISPR–Cas9 to chr13:55935371-55935390 (Supplementary Table S1). Cells were transfected with 4 μg of EGFP cassette and 8 μg of pX459 vector containing the sgRNA. Transgenic G4 ESC clones can be obtained upon request.

Aggregation of mESCs

Embryos were generated by tetraploid complementation from G4 male ESCs obtained from the Nagy laboratory 17,46. The desired mESCs were thawed, seeded on male and female CD1 feeders, and grown for 2 days before the aggregation procedure. Donor tetraploid embryos were obtained by in vitro fertilisation using C57BL/6J × B6D2F1 backgrounds. Aggregated embryos were transferred into CD1 foster females. All animals were obtained from Janvier laboratories.

Single-cell RNA-seq dissociation

Two replicates of fore- and hindlimb buds of E12.5 wildtype embryos and hindlimb buds of mutant embryos ( Pitx1 Pen−/Pen− , Pitx1 −/− ) were micro-dissected and incubated for 12 min in 400 μl trypsin-EDTA 0.25% (Thermo Fisher Scientific, 25300062) supplemented with 40 μl of 5% BSA. Tissues were disrupted by pipetting after 6 min of incubation and again at the end of the 12 min. Trypsin was then inactivated by adding 2× volume of 5% BSA, and a single-cell suspension was obtained by passing cells through a 40 μm cell strainer. Cells were then spun at 250 × g for 5 min at 4 °C and resuspended in 1% BSA in PBS. Cells were then counted using an automated cell counter and a suspension of 700 cells/μl in 1% BSA was prepared. 10 μl of this suspension was used as input for the 10× Genomics library preparation.
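The dilution step behind the 700 cells/μl loading suspension is simple volume arithmetic, sketched below. The counted concentration and final volume in the example are invented for illustration; only the 700 cells/μl target and the 10 μl input come from the text.

```python
def dilution_volumes(counted_conc, target_conc, final_volume):
    """Volumes of cell suspension and diluent (1% BSA) needed to reach a
    target concentration. Units: cells/ul for concentrations, ul for volumes.
    Illustrative helper; not part of the original protocol text."""
    if counted_conc < target_conc:
        raise ValueError("suspension is already below the target concentration")
    cell_vol = target_conc * final_volume / counted_conc  # C1*V1 = C2*V2
    return cell_vol, final_volume - cell_vol

# Hypothetical count of 3500 cells/ul, diluted to 700 cells/ul in 100 ul:
cells_ul, bsa_ul = dilution_volumes(3500, 700, 100)  # 20 ul cells + 80 ul BSA
# The 10 ul input used for library preparation then carries:
loaded_cells = 700 * 10  # 7000 cells
```

This matches the average of 7000 cells loaded on the Chromium Chip reported in the library preparation section.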
Single-cell library preparation

Single-cell libraries were prepared using the Chromium Single Cell 3′ GEM, Library & Gel Bead Kit v3 following the manufacturer's protocol (10× Genomics, PN-1000075). Briefly, Gel Beads-in-EMulsion (GEMs) are generated by combining Single Cell 3′ v3 Gel Beads, a Master Mix containing cells, and Partitioning Oil on Chromium Chip B. Incubation of the GEMs produces barcoded, full-length cDNA from the poly-adenylated mRNA. Gel beads are then dissolved and the cDNA is amplified via PCR, followed by library construction and sequencing. Libraries were paired-end sequenced on a HiSeq 4000. On average, 7000 cells were loaded on the Chromium Chip and between 25,000 and 35,000 mean reads per cell were obtained.

Whole-mount in situ hybridization (WISH)

Pitx1 WISH was performed on 40–45 somite stage mouse embryos (E12.5) using a digoxigenin-labeled Pitx1 antisense riboprobe transcribed from a cloned Pitx1 probe (PCR DIG Probe Synthesis Kit, Roche), as previously described in ref. 11.

Light sheet microscopy imaging

E12 embryos were post-fixed overnight in 4% PFA. Tissue was cleared using a passive CLARITY-based clearing method. Briefly, tissue was incubated in a Bis-free X-CLARITY™ Hydrogel Solution Kit (C1310X, Logos Biosystems) for 3 days at 4 °C, allowing diffusion of the hydrogel solution into the tissue. Polymerization of the solution was carried out in a Logos Polymerization system (C20001, Logos Biosystems) at 37 °C for 3 h. (SDS clearing solution: for 2 L of 4% SDS solution, use 24.73 g of boric acid (Sigma B7660 or Thermofisher B3750) and 80 g of sodium dodecyl sulfate (Brunschwig 45900-0010, Acros 419530010 or Sigma L3771) in dH₂O; final solution pH 8.5.) After two washes of 30 min in PBS, samples were immersed in the SDS-based clearing solution and left at 37 °C for 48 h.
Once cleared, tissue was washed twice in PBS-Triton X 0.1% and then placed in a Histodenz-based refractive index-matching solution (Histodenz, Sigma D22158; PB + Tween + NaN₃ pH 7.5 solution: 0.1% Tween-20, 0.01% NaN₃ in 0.02 M phosphate buffer, final solution pH 7.5). Imaging was performed with a home-built mesoscale single-plane illumination microscope; a complete description of the mesoSPIM microscope is available in ref. 47 (Voigt et al.). Briefly, using one of the two excitation paths, the sample was excited with 488 and 561 nm lasers. The beam waist was scanned using electrically tunable lenses (ETL, Optotune EL-16-40-5D-TC-L) synchronized with the rolling shutter of the sCMOS camera. This produced a uniform axial resolution of 5 μm across the field-of-view (FOV). The GFP autofluorescence signal was filtered with 530/43 nm and 593/40 nm bandpass filters (BrightLine HC, AHF). Z-stacks were acquired at 5 μm spacing with the zoom set at ×1.25, resulting in an in-plane pixel size of 5.26 μm. Images were pre-processed to subtract the background and autofluorescence signal using the 561 nm excitation channel, and subsequent normalization and filtering of the images were performed with the Amira 2019.4 software. 3D videos and images were captured using the Imaris 9.5 software.

Tissue collection and cell preparation for FACS sorting

Forelimb and hindlimb buds from embryos with 40–45 somites (E12.5) were dissected in cold PBS solution. After PBS removal, a single-cell suspension was achieved by incubating the limb buds in 400 μl Trypsin-EDTA (Thermo Fisher Scientific, 25300062) for 12′ at 37 °C in a Thermomixer, with a resuspension step at the 6′ mark. After blocking with one volume of 5% BSA (Sigma Aldrich, A7906-100G), cells were passed through a 40 μm cell strainer for further tissue disruption, and another volume of 5% BSA was added to the cell strainer to pass leftover cells.
Cells were then centrifuged at 400 × g for 5′ at 4 °C and, after discarding the supernatant, resuspended in 1% BSA for cell sorting. 5 mM NaButyrate was added to the BSA when planning for subsequent fixation for H3K27Ac ChIP. Proliferation and apoptosis analyses After tissue collection and cell dissociation, apoptotic cells were identified through Annexin V staining (Invitrogen, R37177). Following the manufacturer's instructions, two replicates of 2 × 10⁵ cells were resuspended in the kit's binding buffer, one drop of Annexin V stain was added per 1 × 10⁵ cells, and the suspension was incubated at room temperature for 15′. Apoptotic phases were then determined by flow cytometry analysis. For cell cycle analysis we used two replicates of 2 × 10⁵ cells, in which DNA was stained with Hoechst-33342 dye (Abcam, ab228551) at a final concentration of 5 μg/ml in 1% BSA media. Cells were incubated for 30′ in a 37 °C water bath. Flow cytometry was then used to determine cell cycle stage. Both experiments were performed on a BD LSRFortessa analyser and data were then processed using the FlowJo™ Software (10.6.1). Cell sorting Cell populations were isolated by fluorescence-activated cell sorting (FACS) using the Beckman Coulter MoFlo Astrios with a GFP laser (excitation wavelength 488 nm). The initial FSC/SSC gate was set between 30/40 and 210/240 to exclude debris. After removal of dead cells with Draq7 dye and removal of doublets, following standard protocol, cells were gated for sorting as shown in Fig. S1A. As a control, a non-GFP-expressing tissue (forelimbs isolated from the same E12.5 embryos) was used to determine the gating of the GFP− fraction of the samples to sort. When multiple cell sortings were needed, gating was done in accordance with previous samples to ensure consistent GFP intensity thresholds. Flow cytometry analysis to obtain GFP histograms was performed with the FlowJo™ Software (version 10.6.1).
Cell processing for ChIP-seq and Capture-HiC After sorting, cells were centrifuged for 5′ at 400 × g at 4 °C and the supernatant was discarded. Cells for ChIP-seq and Capture-HiC were resuspended in 10% FCS/PBS and fixed at room temperature in 1% formaldehyde for ChIP and 2% for Capture-HiC. The fixation was quenched by the addition of 1.25 M glycine; cells were isolated by centrifugation (1000 × g, at 4 °C for 8′), resuspended in cold lysis buffer (10 mM Tris, pH 7.5, 10 mM NaCl, 5 mM MgCl₂, 0.1 mM EGTA, Protease Inhibitor (Roche, 04693159001)) and incubated on ice for 10′ to isolate the cell nuclei. The nuclei were pelleted by centrifugation (1000 × g, at 4 °C for 3′), washed in cold 1× PBS, centrifuged again (1000 × g, at 4 °C for 1′) and stored frozen at −80 °C after removal of the PBS supernatant. RNA-seq Cell processing and library preparation After sorting, cells were centrifuged for 5′ at 400 × g at 4 °C, the supernatant was discarded and cells were frozen at −80 °C. At least two biological replicates of 1.5 × 10⁵ cells each were used to extract total RNA using the RNeasy Micro Kit (QIAGEN, ID:74004) following the manufacturer's instructions, and the RNA was then stored frozen at −80 °C. Total RNA was quantified with a Qubit fluorimeter (Life Technologies) and RNA integrity was assessed with a Bioanalyzer (Agilent Technologies). The SMART-Seq v4 kit from Clontech was used for the reverse transcription and cDNA amplification according to the manufacturer's specifications, starting with 5 ng of total RNA as input. 200 pg of cDNA were used for library preparation using the Nextera XT kit from Illumina. Library molarity and quality were assessed with the Qubit and Tapestation using a DNA High Sensitivity chip (Agilent Technologies). Libraries were pooled at 2 nM and loaded for clustering on a single-read Illumina flow cell for an average of 35 million reads/library. Reads of 50 bases were generated using the TruSeq SBS chemistry on an Illumina HiSeq 4000 sequencer.
ChIP-seq and library preparation 5 × 10⁵ fixed nuclei were sonicated to a 200–500 bp fragment length with the Bioruptor Pico sonicator (Diagenode). H3K27Ac ChIP (Diagenode C15410174) was performed as previously described 48,49, using a 1/500 dilution of the antibody, with the addition of 5 mM Na-Butyrate to all buffers. Libraries were prepared starting from <10 ng of ChIP-enriched DNA and processed with the Illumina TruSeq ChIP kit according to the manufacturer's specifications. Libraries were validated on a Tapestation 2200 (Agilent) and a Qubit fluorimeter (Invitrogen – Thermofisher Scientific). Libraries were pooled at 2 nM and loaded for clustering on a single-read Illumina flow cell, and reads of 50 bases were generated using the TruSeq SBS chemistry on an Illumina HiSeq 4000 sequencer. Capture-HiC and library preparation 3C libraries were prepared as previously described 48. Briefly, at least 1 × 10⁶ fixed cells were digested using the DpnII restriction enzyme (NEB, R0543M). Chromatin was re-ligated with T4 ligase (Thermo Fisher Scientific), de-crosslinked and precipitated. To check the validity of the experiment, 500 ng of re-ligated DNA were loaded on a 1% gel along with undigested and digested controls. 3C libraries were sheared and adapters were ligated to the libraries according to the manufacturer's instructions for Illumina sequencing (Agilent). Pre-amplified libraries were hybridized to the custom-designed SureSelect beads (chr13: 54,000,001–57,300,000) 11 and indexed for sequencing (50–100 bp paired-end) following the manufacturer's instructions (Agilent). Enriched libraries were pooled at 2 nM and loaded for clustering on a paired-end Illumina flow cell for an average of 215 million reads/library.
Reads of 100 bases were generated using the TruSeq SBS chemistry on an Illumina HiSeq 4000 sequencer. ChIP-seq, RNA-seq and Capture-HiC data analyses ChIP-seq Single-end reads were mapped to the reference genome NCBI37/mm9 using Bowtie2 version 2.3.4.2 50, filtered for mapping quality q ≥ 25, and duplicates were removed with SAMtools 1.9. Reads were extended to 250 bp and scaled (1 million/total of unique reads) to produce coverage tracks using genomecov from BEDTools/2.28.0-fecbf4e3. BigWig files were produced using bedGraphToBigWig version 4 and visualized in the UCSC genome browser. RNA-seq Single-end reads were mapped to the mm9 reference genome using the STAR mapper version 2.5.2a with default settings. Further processing was done according to ref. 48. BigWig files were visualized in the UCSC genome browser. Counting was done using R version 3.6.2 and differential expression was analyzed with the DESeq2 R package (version 3.14). The DESeq2 package was also used to produce heatmaps by subtracting from each gene value per condition, given by vst, the mean value of all conditions. Genes were selected according to adjusted p-value, all being significantly differentially expressed between conditions. Pitx1 fold enrichment between wildtype GFP+ and GFP−, and between the GFP−, GFP+ and GFP++ populations, was calculated using DESeq2's normalization by size factor, with the addition of a 0.5 pseudocount to aid data visualization. The Wald test was used to examine differential expression across samples. The p-values were adjusted for multiple testing with the FDR/Benjamini–Hochberg (BH) method and each analysis was performed with at least two biological replicates. Expression heatmaps were generated for non-mesenchymal satellite and mesenchymal markers as defined in Supplementary Dataset S1.
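DESeq2's size-factor normalization mentioned above uses the median-of-ratios method. A minimal numpy sketch of that idea follows; this is an illustration of the method, not the DESeq2 implementation, and the toy counts are made up:

```python
import numpy as np

def size_factors(counts):
    """Median-of-ratios size factors (DESeq2-style).

    counts: genes x samples array of raw counts. Genes with a zero
    count in any sample are excluded from the geometric-mean reference,
    as in the original method.
    """
    counts = np.asarray(counts, dtype=float)
    keep = np.all(counts > 0, axis=1)            # genes detected in all samples
    logc = np.log(counts[keep])
    log_geomean = logc.mean(axis=1)              # per-gene geometric mean (in log space)
    # per-sample median of log-ratios to the reference, back-transformed
    return np.exp(np.median(logc - log_geomean[:, None], axis=0))

# toy example: sample 2 sequenced twice as deeply as sample 1
counts = np.array([[10, 20],
                   [50, 100],
                   [200, 400]])
sf = size_factors(counts)
```

Dividing counts by `sf` (and, as in the text, adding a 0.5 pseudocount before plotting) puts the samples on a comparable scale.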
For visualization reasons, Ccr5, Cldn5, and Col2a1 were added as sub-cluster markers (immune, endothelium and condensation, respectively) and the forelimb-specific marker Tbx5 was removed from the marker list. Moreover, genes with expression less than or equal to 1 RPKM in all 8 samples (GFP+ wildtype: replicates 1 and 2; GFP− wildtype: replicates 1 and 2; GFP+ mutant: replicates 1 and 2; GFP− mutant: replicates 1 and 2) were removed from the analysis. For the GFP−-specific heatmap, we additionally removed all genes with less than or equal to 1 RPKM in all 4 GFP− samples. The color of the expression heatmap corresponds to the z-score-transformed RPKM values, using the mean and standard deviation per gene based on all 8 samples. Log2FC was calculated by averaging replicate RPKMs for each dataset and taking the log2 ratio of the Pitx1 GFP to Pitx1 GFP;ΔPen values. Capture-HiC and virtual 4C Paired-end reads from sequencing were mapped to the reference genome NCBI37/mm9 with Bowtie2 version 2.3.4.2 50 and further filtered and deduplicated using HiCUP version 0.6.1. When replicates were available, these were pooled through concatenation (cat in Python 2.7.11) before HiCUP analysis. Valid and unique di-tags were filtered and further processed with Juicer tools version 1.9.9 to produce binned contact maps from valid read pairs with MAPQ ≥ 30, and maps were normalized using Knight and Ruiz matrix balancing, considering only the genomic region chr13: 54,000,001–57,300,000 51,52,53. After KR normalization, maps were exported at 5 kb resolution. Subtraction maps were produced from the KR-normalized maps and scaled together across their subdiagonals. C-HiC maps were visualized as heatmaps, where contacts above the 99th percentile were truncated for visualization purposes. Further details about data processing can be accessed at ref. 11. Virtual 4C profiles were generated from the filtered hicup.bam files used also for Capture-HiC analysis.
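The z-score transform and Log2FC computation described above reduce to a few lines of numpy. The sketch below uses an illustrative genes × 8-samples matrix, not the paper's data, and the wt/mut column split is an assumed layout:

```python
import numpy as np

rpkm = np.array([  # genes x 8 samples (toy values)
    [1.0, 1.2, 0.9, 1.1, 4.0, 4.2, 3.9, 4.1],
    [6.0, 5.8, 6.2, 6.0, 2.0, 2.1, 1.9, 2.0],
])

# z-score per gene, using mean and standard deviation across all 8 samples
z = (rpkm - rpkm.mean(axis=1, keepdims=True)) / rpkm.std(axis=1, keepdims=True)

# Log2FC: average replicate RPKMs per dataset, then take the log2 ratio
wt = rpkm[:, :4].mean(axis=1)      # e.g. Pitx1-GFP samples (assumed columns)
mut = rpkm[:, 4:].mean(axis=1)     # e.g. Pitx1-GFP;ΔPen samples
log2fc = np.log2(wt / mut)
```

Each row of `z` then has mean 0 and unit variance, which is what the heatmap colors encode.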
The viewpoint for the Pitx1 promoter was set at coordinates chr13:55,930,001–55,940,000 (10 kb bin) and contact analysis was performed over the entire genomic region considered for Capture-HiC (chr13: 54,000,001–57,300,000). A contact pair is counted when one interacting fragment is in the viewpoint and its pair mate is outside of it. The interaction profile was smoothed by averaging over 5 kb intervals and exported as a bedgraph file. Single-cell data analyses Processing of sequenced reads Demultiplexing, alignment, barcode filtering, and UMI counting were performed with the 10× Genomics Cell Ranger software (version 3.0.2) following the manufacturer's recommendations, default settings and the mm10 reference genome (version 3.0.0, provided by 10× Genomics, downloaded in 2019). Cell Ranger output files for each dataset were processed using the velocyto run10x shortcut from the velocyto.py tool 24 (version 0.17.17) to generate a loom file for each sample, using as reference genome the one provided by 10× Genomics and the UCSC genome browser repeat masker .gtf file to mask expressed repetitive elements. Each loom matrix, containing spliced/unspliced/ambiguous reads, was individually imported into R (version 3.6.2) with the ReadVelocity function from the SeuratWrappers package (version 0.2.0). In parallel, feature-filtered output matrices obtained from Cell Ranger were individually loaded into R through the Read10X function of the Seurat package (version 3.2.0 54). Then, we combined the spliced, unspliced, ambiguous, and RNA feature data in a single matrix for each dataset. Subsequently each matrix was transformed into a Seurat object using the Seurat package. Thus, for each sample we obtained a single Seurat object comprising four assays: three of them (spliced, unspliced and ambiguous) were used for downstream RNA velocity estimation, and the RNA feature assay was used for downstream gene expression analysis between the samples, as described below.
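The virtual-4C counting rule stated above (one mate inside the viewpoint, the other outside, binned at 5 kb) can be sketched directly. The coordinates follow the text; the read pairs below are synthetic, and real pipelines would read them from the filtered BAM:

```python
import numpy as np

VP_START, VP_END = 55_930_001, 55_940_000    # Pitx1 promoter viewpoint (10 kb bin)
REGION_START, REGION_END = 54_000_001, 57_300_000
BIN = 5_000                                   # 5 kb averaging intervals

def in_viewpoint(pos):
    return VP_START <= pos <= VP_END

def virtual_4c(pairs):
    """pairs: iterable of (pos1, pos2) fragment positions on chr13.
    Returns per-5kb-bin counts of mates whose partner lies in the viewpoint."""
    nbins = (REGION_END - REGION_START + 1) // BIN + 1
    profile = np.zeros(nbins)
    for p1, p2 in pairs:
        # count the mate that lies outside the viewpoint
        if in_viewpoint(p1) and not in_viewpoint(p2):
            profile[(p2 - REGION_START) // BIN] += 1
        elif in_viewpoint(p2) and not in_viewpoint(p1):
            profile[(p1 - REGION_START) // BIN] += 1
    return profile

pairs = [(55_935_000, 55_100_000),   # counted: exactly one mate in viewpoint
         (55_935_000, 55_936_000),   # ignored: both mates in viewpoint
         (54_500_000, 54_600_000)]   # ignored: neither mate in viewpoint
prof = virtual_4c(pairs)
```

Writing `profile` out with bin start/end coordinates yields the bedgraph described in the text.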
Quality control and filtering Quality control and pre-processing of the Seurat object of each of our eight samples were performed according to the following criteria. Cells expressing fewer than 200 genes were excluded. Additionally, we calculated the fraction of reads mapping to the mitochondrial genome and filtered out cells with a mitochondrial content higher than 15%, since high levels of mitochondrial mRNA have been associated with dead cells. We also excluded cells with a mitochondrial content lower than 1%, since we observed that, in our datasets, these correspond to blood cells probably introduced during the dissection protocol. Individual dataset normalization, scaling, and dimensional reduction After filtering, we normalized each of the eight datasets individually following the default Seurat parameters for the LogNormalize method, applied only to the RNA features assay. We next scaled the data by applying a linear transformation and calculated the most variable features individually for downstream analysis, using standard Seurat parameters. Scaled data were then used for principal component analysis (PCA), using the 50 PCs established by default, and for non-linear dimensional reduction by Uniform Manifold Approximation and Projection (UMAP 55), using dims = 1:50 as input. Cell doublet identification Pre-processed and normalized datasets were individually screened for putative doublet cells. Doublets in each dataset were excluded using the DoubletFinder R package (version 2.0.2), as described in ref. 56. The doublet rate (nExp parameter) was estimated from the number of cells captured and is as follows: Pitx1 +/+ Hindlimb replicate 1, nExp = 106; Pitx1 +/+ Hindlimb replicate 2, nExp = 123; Pitx1 +/+ Forelimb replicate 1, nExp = 97; Pitx1 +/+ Forelimb replicate 2, nExp = 116; Pitx1 −/− Hindlimb replicate 1, nExp = 104; Pitx1 −/− Hindlimb replicate 2, nExp = 122; Pitx1 Pen−/Pen− Hindlimb replicate 1, nExp = 118; Pitx1 Pen−/Pen− Hindlimb replicate 2, nExp = 116.
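The three QC thresholds above (at least 200 detected genes, mitochondrial fraction strictly between 1% and 15%) amount to a single boolean mask over cells; a toy sketch:

```python
import numpy as np

def qc_mask(n_genes, pct_mito):
    """Keep cells expressing at least 200 genes with 1% < mito content < 15%."""
    n_genes = np.asarray(n_genes)
    pct_mito = np.asarray(pct_mito)
    return (n_genes >= 200) & (pct_mito > 1.0) & (pct_mito < 15.0)

# four toy cells: healthy / too few genes / dying (high mito) / blood (low mito)
keep = qc_mask([1500, 150, 2000, 1800], [5.0, 5.0, 20.0, 0.5])
```

In a Seurat workflow these same cutoffs would be applied with `subset()` on the per-cell `nFeature_RNA` and percent-mitochondrial metadata columns.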
The pK parameter was calculated following the strategy defined by ref. 56 and is as follows: Pitx1 +/+ Hindlimb replicate 1, pK = 0.12; Pitx1 +/+ Hindlimb replicate 2, pK = 0.005; Pitx1 +/+ Forelimb replicate 1, pK = 0.09; Pitx1 +/+ Forelimb replicate 2, pK = 0.04; Pitx1 −/− Hindlimb replicate 1, pK = 0.04; Pitx1 −/− Hindlimb replicate 2, pK = 0.01; Pitx1 Pen−/Pen− Hindlimb replicate 1, pK = 0.005; Pitx1 Pen−/Pen− Hindlimb replicate 2, pK = 0.005. After filtering, we kept for downstream analysis the following number of cells per dataset: Pitx1 +/+ Hindlimb replicate 1, 4143 cells; Pitx1 +/+ Hindlimb replicate 2, 4816 cells; Pitx1 +/+ Forelimb replicate 1, 3802 cells; Pitx1 +/+ Forelimb replicate 2, 4521 cells; Pitx1 −/− Hindlimb replicate 1, 4049 cells; Pitx1 −/− Hindlimb replicate 2, 4745 cells; Pitx1 Pen−/Pen− Hindlimb replicate 1, 4600 cells; Pitx1 Pen−/Pen− Hindlimb replicate 2, 4518 cells. Merging of all datasets and normalization Once each dataset was individually filtered and doublets were removed, all datasets were merged into a single Seurat object, without performing integration, to execute an ensemble downstream analysis of the eight datasets. No batch effect was observed later on in this merged dataset. A new column was added to the Seurat object metadata to label replicates of the same tissue and animal model with the same name for downstream analysis. Thus, the cells of Pitx1 +/+ Hindlimb replicate 1 and replicate 2 were labeled as Pitx1 +/+ Hindlimb, and the same logic was applied to the rest of the samples. Subsequently, we normalized the new merged Seurat object by applying the SCTransform normalization protocol 57, with default parameters, over the spliced assay. Cell-cycle scoring and regression Since from the individual analysis of our datasets we observed that part of the variance was explained by cell-cycle genes, we examined cell-cycle variation in the merged dataset.
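DoubletFinder's nExp parameter is typically derived from a multiplet rate that grows roughly linearly with the number of cells recovered; a commonly used approximation for the 10× Chromium is about 0.8% per 1,000 cells. The sketch below uses that assumed rate for illustration only; the exact rates behind the nExp values listed above are the authors' own:

```python
def expected_doublets(n_cells, rate_per_1000=0.008):
    """Estimate the expected doublet count for a 10x run.

    Assumes the multiplet rate scales linearly with recovery
    (~0.8% per 1,000 cells recovered) -- an approximation, not the
    exact rate used in the paper, whose nExp values differ somewhat.
    """
    multiplet_rate = rate_per_1000 * (n_cells / 1000)
    return round(multiplet_rate * n_cells)

n_exp = expected_doublets(4143)   # same order of magnitude as the paper's nExp = 106
```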
To do so, we assigned each cell a score based on its expression of a pre-determined list of cell-cycle gene markers, following the strategy defined by ref. 58 and applying the CellCycleScoring function implemented in Seurat. After evaluating these results, we decided to regress out the cell-cycle heterogeneity. We therefore applied to our merged object the SCTransform normalization method, using the spliced assay as source and adding to the default settings the calculated cell-cycle scores (S.Score and G2M.Score) as variables to regress out. The cell-cycle classification was later used to estimate cell-cycle proportions in each cluster. Clustering After cell-cycle regression, cells were clustered using the standard steps of the SCTransform Seurat workflow. Briefly, PCA (npcs = 50), UMAP (dims = 1:50) and the nearest neighbors of each cell were calculated. Clusters were determined using the Seurat FindClusters function with default parameters and a resolution of 0.2; in this way 10 clusters were defined. Cluster identities were assigned by calculating the expression difference of each gene between each cluster and the rest of the clusters using the FindConservedMarkers function. We applied this function to each cluster (ident.1) using default parameters, only.pos = TRUE and setting the limb identity of the datasets as the grouping variable; in this way we obtained a list of markers for each cluster independent of the limb sample. Clusters with similar markers were combined, so we finally worked with 5 clusters (Fig. 1B): the mesenchyme (containing 5 of the 10 clusters), the epithelium (formed by 2 of the 10), and the immune cell, muscle and endothelium clusters (composed of only 1 cluster each).
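Seurat's CellCycleScoring follows the Tirosh et al. strategy: each cell's phase score is its mean expression over the phase's marker genes, relative to a matched control gene set. A simplified sketch of that idea (omitting Seurat's expression-binned control selection; gene values are toy numbers, though Mcm5 and Pcna are genuine S-phase markers):

```python
import numpy as np

def phase_score(expr, markers, control):
    """Mean marker expression minus mean control expression, per cell.

    expr: dict gene -> per-cell expression vector. Simplified from
    Tirosh et al.: controls would normally be drawn from
    expression-matched bins rather than a fixed list.
    """
    m = np.mean([expr[g] for g in markers], axis=0)
    c = np.mean([expr[g] for g in control], axis=0)
    return m - c

expr = {
    "Mcm5":  np.array([2.0, 0.1]),   # S-phase marker, high in cell 0
    "Pcna":  np.array([1.8, 0.2]),
    "GeneA": np.array([0.5, 0.5]),   # hypothetical control genes
    "GeneB": np.array([0.7, 0.5]),
}
s_score = phase_score(expr, ["Mcm5", "Pcna"], ["GeneA", "GeneB"])
```

A positive score flags a cell as likely in that phase; regressing such scores out, as in the text, removes this axis of variance before clustering.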
We confirmed that the expected identity markers were present in the new clustering by running the FindMarkers function with the following parameters: logfc.threshold = 0.7; pseudocount.use = 0; only.pos = TRUE; min.diff.pct = 0.15; all other parameters default (Supplementary Dataset S1). Subsetting and re-clustering Since the interest of this work was focused on the populations that express Pitx1 in a wildtype hindlimb (Fig. 1C), we subsetted the mesenchyme cluster. To gain better insight into the different cell types that compose it, we re-clustered the mesenchyme cluster. To do so, the UMAP embedding was calculated with the following parameters: dims = c(1:10), n.neighbors = 15L, min.dist = 0.01, metric = "cosine", spread = 0.5; all other parameters were default. The cluster resolution after finding neighbors was set at 0.4 to reveal subpopulations. We observed 9 mesenchyme subpopulations (Fig. 3A), which we named according to their identity genes. Identity markers were found using FindMarkers on the RNA assay, setting logfc.threshold = 0.3, pseudocount = 0, min.diff.pct = 0.1, only.pos = TRUE and all other parameters as default (Supplementary Dataset S1). Differential expression analysis To perform the Pitx1 +/+ Hindlimb vs Pitx1 Pen−/Pen− Hindlimb differential expression analysis in the mesenchyme cluster, and in each one of the nine mesenchyme clusters, we used the FindMarkers function on the RNA assay. For the whole-mesenchyme analysis, Pitx1 Pen−/Pen− Hindlimb was set as ident.1 and Pitx1 +/+ Hindlimb as ident.2. To determine the differentially expressed genes between the datasets in each mesenchyme cluster, we created a new column in the metadata slot containing both cluster and dataset information.
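The FindMarkers thresholds above can be mimicked by filtering genes on their log fold-change and on the difference in detection percentage between a cluster and the remaining cells. A toy numpy sketch of that filtering logic (not Seurat's implementation, and using natural log with pseudocount = 0 as in the text):

```python
import numpy as np

def find_markers(cluster, rest, logfc_threshold=0.3, min_diff_pct=0.1):
    """cluster, rest: genes x cells expression arrays.

    Keeps genes up in the cluster by average log fold-change and detected
    in a sufficiently larger fraction of cluster cells (only.pos = TRUE)."""
    logfc = np.log(cluster.mean(axis=1) / rest.mean(axis=1))   # pseudocount = 0
    pct_diff = (cluster > 0).mean(axis=1) - (rest > 0).mean(axis=1)
    return (logfc > logfc_threshold) & (pct_diff > min_diff_pct)

cluster = np.array([[3.0, 2.5, 3.5],    # marker: high and widely detected
                    [1.0, 1.1, 0.9]])   # not a marker: same level as rest
rest = np.array([[0.5, 0.0, 0.6],
                 [1.0, 1.0, 1.0]])
is_marker = find_markers(cluster, rest)
```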
Then, this column was set as the new identity and differential expression analysis was run using as ident.1: ICT_Pitx1 Pen−/Pen−, Ms_Pitx1 Pen−/Pen−, TP_Pitx1 Pen−/Pen−, LDC_Pitx1 Pen−/Pen−, DPP_Pitx1 Pen−/Pen−, DP_Pitx1 Pen−/Pen−, PC_Pitx1 Pen−/Pen−, EDC_Pitx1 Pen−/Pen−, or PPP_Pitx1 Pen−/Pen−, and as ident.2: ICT_HL_Pitx1 +/+, Ms_HL_Pitx1 +/+, TP_HL_Pitx1 +/+, LDC_HL_Pitx1 +/+, DPP_HL_Pitx1 +/+, DP_HL_Pitx1 +/+, PC_HL_Pitx1 +/+, EDC_HL_Pitx1 +/+ or PPP_HL_Pitx1 +/+. All other parameters were default except logfc.threshold = 0.2 and pseudocount.use = 0 (Supplementary Dataset S5). RNA-velocity analysis As input data for the RNA-velocity analysis, we used the unspliced (pre-mature) and spliced (mature) abundances calculated for each replicate of our datasets as explained above (see Methods, Processing of sequenced reads). To perform the RNA velocity analysis on the mesenchyme clusters of each dataset, we subsetted the cells belonging to the two replicates. Thus, we subsetted Pitx1 +/+ Hindlimb, Pitx1 +/+ Forelimb, Pitx1 Pen−/Pen− Hindlimb, and Pitx1 −/− Hindlimb individually. We also performed RNA-velocity analysis of all combined datasets. To analyse proximal and distal clusters, we subsetted them separately, following the criteria for proximal and distal cluster classification explained below. Seurat objects from which we performed RNA-velocity analysis were saved as h5Seurat files using the SeuratDisk package (version 0.0.0.9013) and exported to be used as input to scVelo (version 0.2.2) 59 in Python (version 3.7.3). Then the standard protocol described in scVelo was followed. Standard parameters were used except npcs = 10 and n.neighbors = 15, to match those used for the UMAP embedding in Seurat.
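At its core, steady-state RNA velocity fits, per gene, a degradation-to-splicing ratio γ between unspliced and spliced abundance; cells with more unspliced mRNA than expected (u > γs) are inferred to be upregulating that gene. A minimal sketch of that idea follows; scVelo's actual models (extreme-quantile fits, stochastic and dynamical modes) are considerably more elaborate:

```python
import numpy as np

def velocity(spliced, unspliced):
    """Steady-state RNA velocity per cell for a single gene.

    gamma is fit by least squares through the origin (u ≈ gamma * s);
    velocity = u - gamma * s. Simplified from the velocyto/scVelo
    steady-state model (no extreme-quantile fit, no full kinetics).
    """
    s = np.asarray(spliced, float)
    u = np.asarray(unspliced, float)
    gamma = (u @ s) / (s @ s)
    return u - gamma * s

s = np.array([1.0, 2.0, 4.0])
u = np.array([0.5, 1.0, 2.0])      # exactly u = 0.5 * s -> steady state
v = velocity(s, u)
```

Cells exactly on the fitted line get zero velocity; positive values mark induction, negative values repression.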
Differential proportion analysis Statistical differential proportion analysis, to study the differences in cluster cell proportions between the different limb-type conditions, was performed in R using the source code published by ref. 60, after generating the proportion tables in R. The null distribution was calculated using n = 100,000 and p = 0.1 as in the original reference. Pairwise comparisons were performed between the different conditions tested. Proximal and distal cell classification A proximal, distal or NR attribute was assigned to each cluster based on its Shox2 and Hoxd13 expression. Accordingly, the ICT, TP, PPP, and PC clusters were classified as proximal clusters and DP, DPP, EDC, and LDC as distal ones, while the Ms cluster, which expresses both markers, was not assigned to either. This classification was added to the Seurat object metadata and used in downstream analyses. Pitx1 density plot and cell classification by Pitx1 expression Pitx1 normalized expression values (using the Seurat default LogNormalize method, using log1p), from the RNA assay of the merged Seurat object containing all datasets, were extracted into a data frame. This data frame was used to create a density plot using the ggplot2 package (version 3.3.2). From the overlay of the Pitx1 density distributions in the Pitx1 +/+ Hindlimb and the Pitx1 Pen−/Pen− Hindlimb samples, we defined the intersection point of 0.3 to classify cells into non/low-expressing and expressing cells. The second intersection point of 1.45, which subclassifies these expressing cells into intermediate- and high-expressing cells, was established based on the intersection of the Pitx1 +/+ Hindlimb proximal and distal cells (Fig. 2F). Therefore, we classified as non/low-expressing those cells with Pitx1 expression values <0.3, as intermediate-expressing those with Pitx1 expression values between 0.3 and 1.45, and as high-expressing those >1.45.
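The two density-intersection thresholds above (0.3 and 1.45) define a three-way classification of cells by normalized Pitx1 expression; sketched:

```python
def classify_pitx1(value, low=0.3, high=1.45):
    """Classify a cell by its normalized Pitx1 expression using the
    two density-intersection thresholds from the text."""
    if value < low:
        return "non/low"
    if value < high:
        return "intermediate"
    return "high"

labels = [classify_pitx1(v) for v in (0.0, 0.8, 2.1)]
```

Storing such labels as a metadata column, as the text describes, makes them available for all downstream grouping and plotting.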
This classification and Pitx1 expression values were added as new columns to the Seurat object metadata and used in downstream analysis. Reporting summary Further information on research design is available in the Nature Research Reporting Summary linked to this article. Data availability Sequencing data are available at the GEO repository under the accession number “ GSE168633 ”. All other relevant data supporting the key findings of this study are available within the article and its Supplementary Information files or from the corresponding author upon reasonable request. Source data are provided with this paper.
Embryonic development follows delicate stages: For everything to go well, many genes must coordinate their activity according to a very meticulous scheme and tempo. This precision mechanism sometimes fails, leading to more or less disabling malformations. By studying the Pitx1 gene, one of the genes involved in the construction of the lower limbs, a team from the University of Geneva (UNIGE), in Switzerland, has discovered how a small disturbance in the activation process of this gene is at the origin of clubfoot, a common foot malformation. Indeed, even a fully functional gene cannot act properly without one of its genetic switches. These short DNA sequences provide the signal for the transcription of DNA into RNA, and are essential for this mechanism. And when just one of these switches is missing, the proportion of cells where the gene is active decreases, preventing the lower limbs from being built properly. These results, published in the journal Nature Communications, highlight the hitherto largely underestimated role of genetic switches in developmental disorders. During embryonic development, hundreds of genes must be precisely activated or repressed for organs to build properly. This control of activity is directed by short DNA sequences that, by binding certain proteins in the cell nucleus, act as true ON/OFF switches. "When the switch is turned on, it initiates the transcription of a gene into RNA, which in turn is translated into a protein that can then perform a specific task," explains Guillaume Andrey, professor in the Department of Genetic and Developmental Medicine at the UNIGE Faculty of Medicine, who led this research. "Without this, genes would be continuously switched on or off, and therefore unable to act selectively, in the right place and at the right time." In general, each gene has several switches to ensure that the mechanism is robust. "However, could the loss of one of these switches have consequences? 
This is what we wanted to test here by taking as a model the Pitx1 gene, whose role in the construction of the lower limbs is well known," says Raquel Rouco, a post-doctoral researcher in Guillaume Andrey's laboratory and co-first author of this study. A decrease in cellular activation that leads to clubfoot To do this, the scientists modified mouse stem cells using the genetic engineering tool CRISPR-Cas9, which makes it possible to add or remove specific elements of the genome. "Here, we removed one of Pitx1's switches, called Pen, and added a fluorescence marker that allows us to visualize the gene activation," explains Olimpia Bompadre, a doctoral student in the research team and co-first author. "These modified cells are then aggregated with mouse embryonic cells for us to study their early stages of development." Usually, about 90 percent of cells in future legs activate the Pitx1 gene, while 10 percent of cells do not. "However, when we removed the Pen switch, we found that the proportion of cells that did not activate Pitx1 rose from 10 to 20 percent, which was enough to modify the construction of the musculoskeletal system and to induce a clubfoot," explains Guillaume Andrey. The proportion of inactive cells increased particularly in the immature cells of the lower limbs and in the irregular connective tissue, a tissue that is essential for building the musculoskeletal system. The same mechanism in many genes Beyond the Pitx1 gene and clubfoot, the UNIGE scientists have discovered a general principle whose mechanism could be found in a large number of genes. Flawed genetic switches could thus be at the origin of numerous malformations or developmental diseases. Moreover, a gene does not control the development of a single organ in the body, but is usually involved in the construction of a wide range of organs.
"A non-lethal malformation, such as clubfoot for example, could be an indicator of disorders elsewhere in the body that, while not immediately visible, could be much more dangerous. If we can accurately interpret the action of each mutation, we could not only read the information in the genome to find the root cause of a malformation, but also predict effects in other organs, which would silently develop, in order to intervene as early as possible," the authors conclude.
10.1038/s41467-021-27492-1
Nano
Scientists discover that a single layer of tiny diamonds increases electron emission 13,000-fold
Karthik Thimmavajjula Narasimha et al. Ultralow effective work function surfaces using diamondoid monolayers, Nature Nanotechnology (2015). DOI: 10.1038/nnano.2015.277 Journal information: Nature Nanotechnology
http://dx.doi.org/10.1038/nnano.2015.277
https://phys.org/news/2015-12-scientists-layer-tiny-diamonds-electron.html
Abstract Electron emission is critical for a host of modern fabrication and analysis applications including mass spectrometry, electron imaging and nanopatterning. Here, we report that monolayers of diamondoids effectively confer dramatically enhanced field emission properties to metal surfaces. We attribute the improved emission to a significant reduction of the work function rather than a geometric enhancement. This effect depends on the particular diamondoid isomer, with [121]tetramantane-2-thiol reducing gold's work function from ∼ 5.1 eV to 1.60 ± 0.3 eV, corresponding to an increase in current by a factor of over 13,000. This reduction in work function is the largest reported for any organic species and also the largest for any air-stable compound 1 , 2 , 3 . This effect was not observed for sp 3 -hybridized alkanes, nor for smaller diamondoid molecules. The magnitude of the enhancement, molecule specificity and elimination of gold metal rearrangement precludes geometric factors as the dominant contribution. Instead, we attribute this effect to the stable radical cation of diamondoids. Our computed enhancement due to a positively charged radical cation was in agreement with the measured work functions to within ±0.3 eV, suggesting a new paradigm for low-work-function coatings based on the design of nanoparticles with stable radical cations. Main Field emission from sharp-tipped cathodes is very dependent on the nanoscale geometry of the tip, so slight changes in the structure of the emitter can lead to dramatic differences in the current. Lowering the material work function is preferable to using sharp tips. However, this requires the application of surface coatings such as Cs or Ba alkali metals, but these are highly reactive and volatile, which largely precludes their application. 
Nanoscale materials or coatings that could achieve similar emission properties without environmental sensitivity or reactivity would thus be a significant step towards engineering electron emission surfaces. Diamond films have previously been used for field- 4,5,6 and thermionic-emission 7,8 devices, but the emission can be restricted by the poor conductivity of bulk diamond, requiring a complex mixture of sp 3 diamond and sp 2 graphite to achieve adequate performance. Furthermore, the role of these nanoscale material inhomogeneities in the emission process is still not fully understood 4,6,9. Here, we report that monolayers of diamondoids—nanoscale diamond molecules—effectively confer significantly enhanced field-emission properties to metal surfaces. More generally, this approach provides an avenue for the design of new nanoscale materials by modulating surface work functions far beyond the typical ∼1 eV dipolar effects. Diamondoids are nanometre-scale cage sp 3 hydrocarbons with a diamond bonding structure, fully terminated with hydrogen 10,11. Their properties are an interesting blend of those of diamond nanoparticles and small molecules 12: they possess the high structural rigidity, chemical stability and low dielectric constant 13 of diamond, yet can be purified, organized 14,15 and chemically functionalized 16,17,18 in a manner similar to small molecules. Diamondoids exhibit a number of unexpected properties, including monochromatic secondary electron emission 19,20, extraordinarily long carbon–carbon bonds 21,22 and a lack of quantum confinement in the unoccupied states 23. They are interesting candidates for field emission as they represent the ultimate limit in diamond grain size reduction, which has previously been correlated with improved electron emission 6.
Thiol-modified versions of the diamondoids 18 were organized into self-assembled monolayers on Au or Ag, with thicknesses ranging from 0.5 nm (for adamantane-thiol) to 0.9 nm (for [121]tetramantane-6-thiol). By using diamondoid monolayers on metallic surfaces we avoid the conductivity problems associated with bulk diamond, precluding the need for mixed sp 2 / sp 3 materials. The lack of sp 2 carbon or bulk defect states in the diamondoids also presents an ideal test case to assess the role of defects compared to bulk diamond films. Here, the field-emission current for diamondoid-modified Au surfaces was investigated for a series of different sizes and orientations of alkane and diamondoid-thiol monolayers ( Fig. 1 ). We chose the diamondoid thiols adamantane-1-thiol (ADT), diamantane-4-thiol (4DT), triamantane-9-thiol (9TrT), [121]tetramantane-6-thiol (6TT), [121]tetramantane-2-thiol (2TT), as well as the methyl- and carboxyl-terminated linear alkanethiols dodecane-1-thiol (DDT) and 16-mercapto hexadecanoic-1-acid (16MHDA) as control samples. Synthesis of the thiol-modified diamondoids has been described previously 18 . Two different [121]tetramantane thiol orientations were investigated, one with the thiol on an apical carbon (6TT) for an ‘upright’ configuration, and one on a medial carbon (2TT) for a horizontal orientation on the Au surface. Diamondoid self-assembled monolayers (SAMs) were formed by immersing the Au substrate in 1 mM diamondoid in 9:1 toluene/ethyl alcohol (vol/vol) solution for ∼ 24–48 h, rinsing with toluene to remove excess molecules and blowing dry. These materials were air-stable after synthesis and could be stored for a period longer than a week without degradation. The field-emission current as a function of applied voltage was tested in a parallel plate configuration in an ultrahigh-vacuum system ( Fig. 2 ) by assembling the diamondoid thiols onto Au-coated Ge nanowires. 
Nanowires were used to obtain reasonable voltage levels for electron emission and to reduce the electrical arcing observed in planar samples, typically reducing the necessary voltage by a factor of ∼ 150. Figure 1: Field emission apparatus. a , Schematic of field emission set-up. b , Scanning electron microscopy image of the vertically oriented Ge nanowires on the Ge (111) surface. The wires are ∼ 25 µm in length. Scale bars: 10 µm (main image) and 500 nm (inset). c , Structures of the molecules tested, including diamondoid thiols with one to five cages and linear-chain alkane thiols with methyl and carboxyl terminations. Hydrogen atoms are not shown for clarity. Full size image Figure 2: Diamondoid field emission. a , Schematic of the diamondoid functionalization enhancing electron emission from Au-coated Ge nanowire emitters. b , Current–voltage traces from the nanowire emitters before functionalizing with 6TT (blue), after functionalizing with 6TT (green) and after desorbing 6TT (red). c , Fowler–Nordheim plots of the data from b . Full size image Figure 2b presents an example of a current–voltage plot and Fig. 2c a Fowler–Nordheim (FN) plot of a Au-coated Ge nanowire sample with and without 6TT SAM functionalization. The pristine Au surface had a turn-on voltage of 500 V (blue trace), with the bare Au work function taken as 5.1 eV (ref. 24 ). A monolayer of 6TT was then assembled onto this same sample, resulting in a significant decrease in turn-on voltage to 250 V (green trace), corresponding to an approximately 13,000–15,000-fold current increase at the equivalent voltage and an effective work function of 1.72 ± 0.3 eV ( Fig. 2c ).
Field emission from metallic surfaces is described by the FN equation 25 , which relates the field-emission current I from metallic field emitters to the applied voltage V by I = ( aAβ 2 V 2 /( d 2 ϕ )) exp(− bdϕ 3/2 /( βV )), where d is the distance between the cathode and anode, β is the geometric field enhancement factor at the emitter surface, A is the area of emission, ϕ is the work function of the emitter, and a and b are constants. The work function is found by plotting ln( I / V 2 ) versus 1/ V , giving a slope of m = −(6.83 × 10 7 dϕ 3/2 )/ β . Both the prefactor and the exponent depend on geometric factors, but because the addition of the diamondoid molecules onto the Au nanowires did not change the geometry, these could be eliminated by calculating the work functions from the ratio of the FN slopes, rather than from the absolute values (see Supplementary Information for more details). These results are remarkably consistent, with repeated current–voltage traces ( N > 15) giving the same values. Although more advanced treatments of fitting the FN equation are possible 25 , in our experiments the data are quite linear, so the simple FN model is a reasonable description. The effective work functions for each of the diamondoid molecules are summarized in Fig. 3 (red triangles). The dodecanethiol alkane and the ada-, dia- and triamantane diamondoids had ∼ 4 eV work functions, consistent with reported work function values for linear-chain alkane thiol molecules 1 , 2 . However, both four-cage tetramantane molecules, 2TT and 6TT, produced substantial reductions to 1.60 ± 0.3 eV and 1.72 ± 0.3 eV, respectively. Figure 3: Summary of the work functions obtained using the various experimental and computational techniques used in the present work. The 6TT- and 2TT-coated nanowire emitters exhibit extraordinarily low work functions in field-emission measurements. These work functions are in excellent agreement with our radical cation-based work function lowering theory.
The field-emission and UPS work functions of all other molecules and the UPS work functions of 6TT and 2TT agree well with neutral thiol dipole-based work function lowering (blue squares). The black dashed line denotes the work function of polycrystalline gold. Full size image There are two mechanisms through which field emission can be significantly enhanced: changes in the geometry or changes in the work function. Geometric enhancement could arise from the molecules themselves acting as a ‘lightning rod’ on the surface, or from alteration of the underlying Au material. The first case is eliminated by analysis of the molecular coatings. Atomic force microscopy and near-edge X-ray absorption show that these diamondoids form single monolayers on the Au surface, with a typical tilt angle of 30° (ref. 14 ). The aspect ratio of these molecules ranges from 1:1 to 2:1. According to the geometric enhancement factor β = h/r , where h is height and r is radius, these molecules would give an enhancement of one to two times. Given that we observed more than four orders of magnitude current enhancement (13,000×), even an exceptional geometric arrangement of the molecules could not lead to the observed behaviour. We also performed finite element analysis (COMSOL) of the nanowire tips with and without the diamondoids and observed less than 1% change in the local electric fields due to addition of the molecular layer. Note that the large anode–cathode distance (50 µm) compared to the molecule height also alleviates concerns about more complex geometric enhancements. Finally, we tested 2TT tetramantane where the thiol group was located on the medial (side) position rather than on the tip ( Fig. 1 ), leading to a horizontal orientation and thus a lower geometric enhancement factor. In fact, the current increased more than for the upright 6TT, with an effective work function of 1.60 ± 0.3 eV, thus eliminating molecular geometric enhancement as the origin of this effect.
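The slope-ratio extraction of the effective work function can be illustrated with a short numerical sketch. This is not the authors' analysis code: the synthetic current–voltage data, the slope magnitude of 4000 V, and the helper names (`fn_slope`, `work_function_from_ratio`) are assumptions for illustration; only the ϕ 3/2 scaling of the FN slope and the 5.1 eV bare-Au reference value are taken from the text.

```python
import numpy as np

def fn_slope(voltage, current):
    """Least-squares slope of the Fowler-Nordheim plot, ln(I/V^2) versus 1/V."""
    V = np.asarray(voltage, dtype=float)
    I = np.asarray(current, dtype=float)
    return np.polyfit(1.0 / V, np.log(I / V**2), 1)[0]

def work_function_from_ratio(slope_sam, slope_bare, phi_bare=5.1):
    """The FN slope scales as phi^(3/2); with d and beta unchanged by the
    monolayer, the SAM work function follows from the slope ratio alone."""
    return phi_bare * (slope_sam / slope_bare) ** (2.0 / 3.0)

# Synthetic FN data: a bare-Au emitter (slope magnitude 4000 V is an
# arbitrary illustrative choice) and a 6TT-like surface whose slope is
# reduced by (1.72/5.1)^(3/2), mimicking the reported work function drop.
V = np.linspace(300.0, 500.0, 50)
I_bare = V**2 * np.exp(-4000.0 / V)
I_sam = V**2 * np.exp(-4000.0 * (1.72 / 5.1) ** 1.5 / V)

phi = work_function_from_ratio(fn_slope(V, I_sam), fn_slope(V, I_bare))
print(round(phi, 2))  # recovers 1.72 eV by construction
```

Because both the geometric factors and the FN constants cancel in the ratio, only the reference work function of the bare surface enters the extraction, which is why the monolayer-induced change can be quantified without knowing β or d.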
Second, the enhancement could arise from the molecules causing reorganization of the Au surface itself. To test this hypothesis, the voltage was increased during field-emission testing until molecules desorbed. At this point the field-emission current ( Fig. 2 , red trace) and FN slope returned to values very similar to those for the initial Au surface, indicating that little, if any, modification of the metal itself had occurred. Based on this analysis, the enhanced emission is most probably due to changes in the work function of the diamondoid-modified Au surface. Adsorbed molecules are known to alter the work function due to the intrinsic dipole moment, the chemisorption dipole and the ‘cushion effect’, which arises from repulsion between the surface and molecular electron orbitals 1 , 2 . However, these reductions are typically on the order of 1 eV, the largest previously reported organic work function reduction being 2.2 eV, obtained with the reducing agent 1,1-dimethyl-1H,1H-4,4-bipyridinylidene on Au due to partial electron exchange 3 . The 6TT and 2TT tetramantane work function reductions (3.38 and 3.50 eV) are the largest observed for any organic molecule, and also the largest for any air-stable compounds. These are equivalent to the lowest observed work function of Cs-coated Au (1.6 eV) 24 . The magnitude of the work function shift and emission current for 2TT and 6TT is thus a surprise. To gain further insight we performed ultraviolet photoelectron spectroscopy (UPS) on the SAM-coated Au surfaces to measure the average work function, rather than just at the field electron emission site. These work functions are plotted in Fig. 3 (green triangles). All diamondoid thiol and dodecane thiol-coated Au surfaces show ∼ 1 eV lowering in the work function, while the 16MHDA surface shows an increase of ∼ 0.1 eV due to the surface carboxyl groups. These values agree within error (typically ±0.3 eV) with the field-emission-derived work functions for ADT through 9TrT and DDT and 16MHDA.
Interestingly, the 6TT and 2TT monolayer UPS results do not show the large work function change observed in field emission. Because field emission is sensitive to the local work function at the nanometre-scale emission site and the UPS results are an area-average, this suggests that the low tetramantane work functions are probably a local event near the emission tip. Previous studies by Alloway et al . 1 , 2 showed that work function modulation by alkane thiol molecules on Au surfaces can be computed from the cushion effect and molecular and Au–S dipole moments of the molecule in isolation using density functional theory (DFT). Using this methodology we computed the work functions of a series of neutral diamondoid thiols oriented at a 30° angle on Au using the B3LYP/6-31G(d) level of theory ( Fig. 3 , blue squares), which has previously been used successfully for diamondoids 11 , 21 , 26 . These computations agree remarkably well with both UPS and field-emission results, again with the exception of the [121]tetramantane field emission. The excellent agreement also provides further confidence that geometric enhancement (which is not accounted for in DFT) plays little role in the measured field emission. The large magnitude of the work function lowering thus cannot be effectively explained by the intrinsic dipole moment, the bond dipole or the cushion effect, suggesting that an excited-state mechanism might be at play. This is also reinforced by a simple coulombic potential model, which requires a full positive and negative charge separated by ∼ 0.5 nm to achieve the 3.5 V change in potential observed, so small dipoles from partial charge separation are likely to be insufficient. One of the unique features of diamondoid cages is their relative stability upon removal of an electron to form a charged radical cation. 
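The simple coulombic potential model mentioned above is easy to check numerically. The sketch below is an order-of-magnitude illustration, not the paper's calculation: it evaluates the bare Coulomb potential of a point charge, showing that even one full elementary charge at 0.5 nm yields only a few volts, so fractional dipolar charges of a few tenths of e indeed fall well short of the observed ∼ 3.5 V shift.

```python
KE = 1.44  # Coulomb constant times elementary charge, in V*nm

def point_charge_potential(r_nm):
    """Coulomb potential V = k*e/r of one full elementary charge, in volts."""
    return KE / r_nm

# One full charge at 0.5 nm already gives a potential of order volts,
# comparable to the ~3.5 V shift quoted in the text; a partial (dipolar)
# charge of a few tenths of e would fall far short.
print(round(point_charge_potential(0.5), 2))  # ≈ 2.88 V
```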
Previous experimental 27 , 28 , 29 , 30 and DFT studies 26 of radical cation gas-phase stability show that an electron can be removed to form persistent diamondoid radical cations without significant structural rearrangement, with larger diamondoids forming more stable cations. This is a unique feature of diamondoids, compared to alkanes, as a consequence of their cage structure, which stabilizes the excess positive charge by delocalization. If a radical cation forms during the course of field-emission measurement, this would lead to significant local electrostatic gating. These radical cations would probably only be present at the very tips of the nanowires where they are most likely to become ionized. This would explain the difference between the UPS results, which measure the average work function of the mostly neutral molecules, and the field-emission results, which would be sensitive to a small number of radical cations on the nanowire tip. A similar effect for smaller diamondoids or alkane thiol SAMs would not be observed due to the higher energy and lower stability of their radical cations 26 . To see if this effect would explain the observed work function lowering, the scenario of a single 6TT radical cation in a film of neutral 6TT molecules was investigated by DFT simulations. DFT cannot force an excited-state electron transfer between a metal substrate and a molecule, so the structure and electron density of the neutral and singly ionized 6TT radical cation were computed in the gas phase. The charge density difference due to the radical cation was then superposed onto a 6TT molecule on the Au surface, using the known geometry on Au (ref. 13 ). The electrostatic fields around the 6TT and Au surface were calculated from the radical cation charge distribution and image charges in the Au ( Fig. 4a ). Figure 4: Diamondoid radical cation mechanism. 
a , Schematic of the cation-based model showing a cationic hotspot represented by the difference in charge density (e – /Å 3 ) between the 6TT radical cation and the neutral molecule, computed at the B3LYP/6-31G(d) level of theory, with corresponding image charge inside the Au using a 30° molecular orientation. b , Two-dimensional electric potential profile on a cross-section of 6TT. The molecule is superimposed on the image. The x and z axes on the image are in nanometres and the colour bar is in volts. The electric field is intense near the molecule and drops to zero within ∼ 10 nm of the surface. c , Potential versus z coordinate plots at locations ‘a’ to ‘e’ between x = 0.75 nm and x = 1.5 nm in b . ‘f’ is located away from the molecule at x = 4 nm where the potential due to the cationic charge difference essentially drops to zero. d , Electron potential barrier versus distance along the normal to the surface with and without the 6TT cation under an applied field of 1.5 V nm –1 , showing that the presence of the 6TT cation modulates the potential barrier. Full size image The results show that an individual diamondoid-thiol radical cation substantially influences the electric potential in the vicinity of the surface, much like Cs on metal surfaces 31 . Figure 4b presents a two-dimensional slice of the potential through the middle of the diamondoid, and Fig. 4c shows several individual potential profiles at different locations within the molecule. The potential can vary by several volts within a nanometre due to the radical cation, but only in the vicinity of the molecule itself. There is a large positive charge near the sulphur atom and near the middle of the diamondoid, although this calculation did not account for any externally applied potential that could subtly change this distribution. The field rapidly decays away from the molecule, such that the effect is negligible beyond ∼ 10 nm; however, it is significant near the metal. 
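The charge-plus-image-charge electrostatics can be reproduced qualitatively with a much simpler stand-in than the DFT charge-density difference used in the paper: a single point charge above a grounded metal plane. The charge height of 0.7 nm, the unit charge, and the probe positions below are illustrative assumptions; the point is the rapid, dipole-like decay of the potential away from the molecule.

```python
import numpy as np

KE = 1.44  # Coulomb constant times elementary charge, ~1.44 V*nm

def cation_plus_image_potential(x_nm, z_nm, z0_nm=0.7, q=1.0):
    """Potential (in V) at (x, z) from a point charge +q at height z0 above a
    grounded metal plane at z = 0, plus its image charge -q at z = -z0.
    A single point charge stands in for the DFT charge-density difference."""
    r_real = np.hypot(x_nm, z_nm - z0_nm)
    r_image = np.hypot(x_nm, z_nm + z0_nm)
    return q * KE * (1.0 / r_real - 1.0 / r_image)

# Close to the molecule the potential is of order a volt, but the
# charge/image pair acts like a dipole and decays rapidly with distance:
print(round(cation_plus_image_potential(1.0, 1.0), 2))   # ≈ 0.65 V near the molecule
print(round(cation_plus_image_potential(10.0, 1.0), 3))  # ≈ 0.002 V roughly 10 nm away
```

Even this crude model reproduces the qualitative behaviour described above: a volts-scale perturbation within a nanometre of the charge that is essentially gone ∼ 10 nm away.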
The positive charge due to the molecular radical cation may thus gate electron emission from the surface. To test this concept, the expected field emission from the potential energy surface was calculated by superposing the cation electrostatic potential profile on an external applied potential V ext normal to the Au surface ( Fig. 4d ). The external potential was estimated from the experimental field emission from bare Au-coated nanowires to be 1.5 V nm –1 , consistent with 500 V applied over a 50 µm gap with a ∼ 150 times geometric enhancement from the nanowires. The resulting field-emission current as a function of V ext was then calculated from potential barrier transparency using transfer matrix methodology 32 , with the corresponding current–voltage and FN plots shown in Fig. 5a and b , respectively. Figure 5: Calculated field emission with radical cations. a , b , Simulated current–voltage ( a ) and Fowler–Nordheim ( b ) plots of emitters with and without the 6TT radical cation using tunnelling probabilities from transfer matrix calculations based on the potential landscapes shown in Fig. 4d . The lines in both plots are fits using the Fowler–Nordheim equation. The calculated work function of 1.30 eV for 6TT is in reasonable agreement with the measured value of 1.72 eV. The voltage on the x axis of a is applied at a distance of 3.15 nm from the Au surface. Full size image The work functions from the FN fit to the radical cation-gated emission were 1.30 eV for 6TT and 1.14 eV for 2TT, in good agreement with the field-emission results for the 6TT and 2TT monolayers (1.72 ± 0.3 eV and 1.60 ± 0.3 eV, respectively), suggesting that the radical cation mechanism is reasonable. The slightly lower calculated values may reflect differences in the actual 6TT geometry on the surface or influences of the metal on the 6TT electronic structure, which could be substantial. Indeed, C 60 monolayers on Au were observed to have 0.6 eV Fermi level shifts 33 .
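The transfer-matrix transparency calculation of ref. 32 is beyond a short sketch, but the exponential factor of the FN equation can be recovered independently from a WKB integral over a triangular surface barrier. This is a standard-textbook check, not the methodology of the paper; it numerically reproduces the FN constant b quoted earlier (6.83 × 10 7 in V/cm units, i.e. 6.83 V nm –1 for ϕ in eV).

```python
import numpy as np

M_E = 9.109383e-31    # electron mass, kg
HBAR = 1.054572e-34   # reduced Planck constant, J*s
E_CH = 1.602177e-19   # elementary charge, C (also J per eV)

def wkb_exponent(phi_eV, F_V_per_nm):
    """-ln(T) for tunnelling through the triangular barrier phi - e*F*x,
    computed by numerically integrating the WKB action 2*integral(kappa dx)."""
    L = phi_eV / F_V_per_nm                  # barrier width, nm
    x = np.linspace(0.0, L, 100001)          # nm
    U = (phi_eV - F_V_per_nm * x) * E_CH     # barrier height above E_F, J
    kappa = np.sqrt(2.0 * M_E * U) / HBAR    # decay constant, 1/m
    dx = (x[1] - x[0]) * 1e-9                # step in metres
    return 2.0 * np.sum(0.5 * (kappa[:-1] + kappa[1:])) * dx

# The coefficient of phi^(3/2)/F reproduces the FN constant b:
# ~6.83 V/nm per eV^(3/2), i.e. 6.83e7 in the V/cm units used in the text.
print(round(wkb_exponent(1.0, 1.0), 2))  # ≈ 6.83
```

The ϕ 3/2 /F scaling of this exponent is also the origin of the slope-ratio method used earlier to extract the effective work functions, and it makes clear why lowering the local barrier by a few electronvolts produces the orders-of-magnitude current increases reported here.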
This mechanism would imply that the tetramantane radical cations persist for long enough on Au to modify the electron emission, while for smaller diamondoid molecules this would not be the case. What causes the initial ionization event is unknown, although the large electric field across the diamondoids (1.5 V nm –1 ) together with electron scattering from field-emitted electrons is a possibility. These results highlight the unique aspect of the nanoscale diamondoids compared to similarly sp 3 -bonded linear-chain alkanes. The cage-like nature of the diamondoids stabilizes the structure, suggesting that other nanoparticle systems could produce similar effects. The radical cation may thus be a general mechanism for work function lowering with environmentally stable coatings based on excited-state gating, which should be most effective for coatings that form long-lived cations and remain transparent to emitted electrons. It is worth noting that the potential field near the single radical cation returns to zero within ∼ 10 nm of the surface ( Fig. 4c ) and would not influence the bulk potential difference between emitter and anode. Thus, the diamondoid radical cation does not change the work function of the entire surface, but instead significantly lowers the local potential barrier for electron emission. For many electron emission applications this ‘effective work function’ is the key requirement, as the bulk of the film does not participate in the emission process. In summary, we report that the excellent field-emission characteristics of diamond films can be transferred to Au surfaces while circumventing the bulk diamond conductivity issues by functionalizing Au surfaces with SAMs of molecular-scale diamondoids. The four-cage tetramantane-thiol monolayers reduce the work function of Au to 1.6–1.7 eV, which we attribute to the formation of excited-state radical cations. These surfaces were also moisture- and air-stable.
Although the Au–thiol bond is relatively weak, desorbing around 100 °C, recent direct covalent attachment of diamondoids to inorganic surfaces increased the thermal stability to over 400 °C, suggesting that these devices could survive cleaning processes 34 . This work suggests a new paradigm for engineering the surface work function using nanomaterials that form persistent radical cations rather than relying upon reactive metals like Cs. This emission mechanism could be further tuned by the design and functionalization of the molecule, or could be extended to charged nanoparticles.

Methods

Field emission

Sample preparation

Ge nanowires were grown on Ge (111) wafers (University Wafers, p-type resistivity 0.035–0.039 Ω cm) using Au colloids (Sigma Aldrich, nominal particle diameter of 40 nm) to seed the growth. The Ge substrate was cut into ∼ 0.5 inch × 0.25 inch pieces, and the pieces were rinsed with isopropanol followed by swabbing with a cleanroom swab soaked in isopropanol. The samples were sonicated in acetone for ∼ 30 min, then rinsed once more with isopropanol and blown dry with nitrogen. To remove the native oxide, Ge substrates were rinsed with deionized (DI) water for 1 min followed by etching in 2% HF for 5 min. The samples were then rinsed with DI water and blown dry with nitrogen. To improve the adhesion of Au nanoparticles onto the substrate, poly- L -lysine (Sigma Aldrich) solution was immediately spread on the substrates and was rinsed with DI water after 5 min. The Au colloid solution was then spread on the substrate to obtain a dense coverage of Au nanoparticles. The substrates were subsequently loaded into a chemical vapour deposition furnace. The nanowires were synthesized at ∼ 320 °C for 90 min with a continuous flow of GeH 4 at 50 s.c.c.m. and H 2 at 200 s.c.c.m. The total pressure of the reaction chamber was controlled at 30 torr. This procedure gave predominantly vertically oriented nanowires of ∼ 25 µm in length.
The Ge nanowires thus obtained were then coated with a 5 nm Ti adhesion layer and 15 nm Au using sputtering. Solutions (1 mg ml –1 ) of adamantane-1-thiol (ADT), diamantane-4-thiol (4DT), triamantane-9-thiol (9TrT), [121]tetramantane-6-thiol (6TT) and [121]tetramantane-2-thiol (2TT) were prepared by dissolving the respective thiols in 90% ethanol and 10% toluene. Diamantane, triamantane and tetramantane were isolated from petroleum as described in ref. 10 and derivatized with thiol functional groups as described in ref. 18 . Dodecanethiol (DDT) and 16-mercaptohexadecanoic acid (16MHDA) (1 mg ml –1 ) solutions were prepared by dissolving the respective thiols in tetrahydrofuran. SAMs of these molecules were obtained by immersing the Au-coated nanowire substrates into the thiol solutions for 24–48 h. On removal from the solutions, the samples were rinsed with appropriate solvents (toluene and ethanol for ADT, 4DT, 9TrT, 6TT and 2TT; ethanol for DDT and 16MHDA) to remove physisorbed molecules and were then blown dry with nitrogen. Each of these samples was mounted onto the sample stage and loaded into the custom-built ultrahigh-vacuum field-emission chamber immediately after it had been prepared with the SAM to carry out field-emission measurements.

Measurements

Electron field-emission characterization was carried out in an ultrahigh-vacuum chamber with a base pressure under 5 × 10 –8 torr. Typically used base pressures were in the range 2 × 10 –9 to 5 × 10 –8 torr. A schematic diagram of the field-emission set-up is shown in Fig. 1 . A polished Cu block was used as the anode and Mylar film (50 μm nominal thickness) with an open area of ∼ 0.5 cm 2 was used as a spacer to keep the Cu anode at a fixed distance from the substrate. The applied sweeping voltage and emission current were controlled and recorded by a Keithley 2410 SourceMeter using LabVIEW 7.1.
UPS

Sample preparation

Si wafer (p-type, resistivity 15 Ω cm) was cut into 1 inch × 1 inch pieces, and samples for UPS measurements were prepared by first cleaning them with acetone and methanol followed by swabbing the surface with a cleanroom swab soaked in isopropanol. After drying with nitrogen, the samples were immersed in piranha solution (H 2 SO 4 :H 2 O 2 , 70:30 by volume) for 15 min and subsequently rinsed with copious amounts of Millipore (18 MΩ cm) water and blown dry with nitrogen. The samples were then loaded into an electron-beam metal deposition chamber (base pressure of ∼ 1 × 10 –6 torr). A 5 nm Ti adhesion layer was deposited at 0.5 Å s –1 followed by a 100 nm Au layer deposited at 2.0 Å s –1 . On removal from the chamber the substrates were quickly immersed into the respective thiol solutions to deposit the SAMs. (See ‘Sample preparation’ in ‘Field emission’ section for details on SAM preparation.) Freshly prepared samples were then immediately loaded into the photoemission analysis chamber to carry out UPS measurements.

Measurements

UPS experiments were performed on beamline 8–1 of the Stanford Synchrotron Radiation Lightsource at a pressure of ∼ 10 −9 torr. The detector was a PHI model 10–360 hemispherical capacitor electron analyser. Band offsets [Fermi edge ( E F ) – HOMO] were measured by the centre of the initial rise in intensity below E F for the monolayer coated samples in accordance with standard practice; see Supplementary Information for details. Work function measurements were performed at an incident photon energy of 120 eV and a pass energy of 0.585 eV for the low-kinetic-energy cutoff. A bias of −5 V was applied to the sample relative to the analyser to detect the low-kinetic-energy electrons. The sample work function was given by the incident photon energy minus the width of the photoemission curve. All relative values are accurate to within ±0.06 eV based on the approximate resolution of the measurement.
The absolute values are accurate to within the analyser work function calibration. For all UPS experiments, spot-to-spot and sample-to-sample variations were within the resolution of the measurement. Measurements did not change as a function of exposure time (1–30 min), indicating negligible beam damage at these incident photon energies.
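The UPS work-function extraction described under Measurements (work function = incident photon energy minus the width of the photoemission curve) can be sketched as follows. The synthetic flat-top spectrum and the fractional-threshold edge finder are illustrative assumptions; real analyses fit the secondary-electron cutoff and the Fermi edge rather than thresholding.

```python
import numpy as np

def work_function_from_ups(kinetic_energy, counts, photon_energy_eV, frac=0.05):
    """Work function = photon energy minus the width of the photoemission
    curve (span between the low-kinetic-energy cutoff and the Fermi-edge
    cutoff). A simple fractional-threshold crossing stands in for the edge
    fits used in practice."""
    ke = np.asarray(kinetic_energy, dtype=float)
    c = np.asarray(counts, dtype=float)
    above = np.nonzero(c >= frac * c.max())[0]
    width = ke[above[-1]] - ke[above[0]]
    return photon_energy_eV - width

# Synthetic, idealized spectrum for hv = 120 eV: uniform emission between
# the secondary-electron cutoff and the Fermi edge, with width hv - phi.
ke = np.linspace(5.0, 130.0, 2501)
counts = np.where((ke > 9.99) & (ke < 124.91), 1.0, 0.0)  # width ≈ 114.9 eV
print(round(work_function_from_ups(ke, counts, 120.0), 2))  # ≈ 5.1 eV (bare-Au value)
```

The -5 V sample bias mentioned in the text shifts all kinetic energies rigidly, so the width, and hence the extracted work function, is unaffected.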
They sound like futuristic weapons, but electron guns are actually workhorse tools for research and industry: They emit streams of electrons for electron microscopes, semiconductor patterning equipment and particle accelerators, to name a few important uses. Now scientists at Stanford University and the Department of Energy's SLAC National Accelerator Laboratory have figured out how to increase these electron flows 13,000-fold by applying a single layer of diamondoids – tiny, perfect diamond cages – to an electron gun's sharp gold tip. The results, published today in Nature Nanotechnology, suggest a whole new approach for increasing the power of these devices. They also provide an avenue for designing other types of electron emitters with atom-by-atom precision, said Nick Melosh, an associate professor at SLAC and Stanford who led the study. Diamondoids are interlocking cages made of carbon and hydrogen atoms. They're the smallest possible bits of diamond, each weighing less than a billionth of a billionth of a carat. That small size, along with their rigid, sturdy structure and high chemical purity, give them useful properties that larger diamonds lack. SLAC and Stanford have become one of the world's leading centers for diamondoid research. Studies are carried out through SIMES, the Stanford Institute for Materials and Energy Sciences, and a lab at SLAC is devoted to extracting diamondoids from petroleum. In 2007, a team led by many of the same SIMES researchers showed that a single layer of diamondoids on a metal surface could emit and focus electrons into a tiny beam with a very narrow range of energies. The research team used tiny nanopillars of germanium wire as stand-ins for the tips of electron guns in experiments aimed at improving electron emission. This image was made with a scanning electron microscope – one of a number of devices that use emitted electrons. 
Credit: Karthik Narasimha/Stanford The new study looked at whether a diamondoid coating could also improve emissions from electron guns. One way to increase the power of an electron gun is to make the tip really sharp, which makes it easier to get the electrons out, Melosh said. But these sharp tips are unstable; even tiny irregularities can affect their performance. Researchers have tried to get around this by coating the tips with chemicals that boost electron emission, but this can be problematic because some of the most effective ones burst into flames when exposed to air. For this study, the scientists used tiny nanopillars of germanium wire as stand-ins for electron gun tips. They coated the wires with gold and then with diamondoids of various sizes. When the scientists applied a voltage to the nanowires to stimulate the release of electrons from the tips, they found they got the best results from tips coated with diamondoids that consist of four "cages." These released a whopping 13,000 times more electrons than bare gold tips. Further tests and computer simulations suggest that the increase was not due to changes in the shape of the tip or in the underlying gold surface. Instead, it looks like some of the diamondoid molecules in the tip lost a single electron – it's not clear exactly how. This created a positive charge that attracted electrons from the underlying surface and made it easier for them to flow out of the tip, Melosh said. "Most other molecules would not be stable if you removed an electron; they'd fall apart," he said. "But the cage-like nature of the diamondoid makes it unusually stable, and that's why this process works.
Now that we understand what's going on, we may be able to use that knowledge to engineer other materials that are really good at emitting electrons." Diamondoid structures tested in the experiment; the two on the bottom, which consist of four “cages” with carbon atoms at each corner, produced the biggest gains in electron emission. The chemical tags at the bottom of each molecule were added to help the diamondoids stick to the gold surface of the nanopillars. Credit: Karthik Narasimha/Stanford SIMES researchers Nick Melosh, left, and Jeremy Dahl in a Stanford laboratory with equipment used to perform diamondoid experiments. Credit: SLAC National Accelerator Laboratory
10.1038/nnano.2015.277
Chemistry
Mass-produced microvalves are the key to scalable production of disposable, plug-and-play microfluidic devices
Seyed Ali Mousavi Shaegh et al. Plug-and-play microvalve and micropump for rapid integration with microfluidic chips, Microfluidics and Nanofluidics (2015). DOI: 10.1007/s10404-015-1582-4
http://dx.doi.org/10.1007/s10404-015-1582-4
https://phys.org/news/2016-05-mass-produced-microvalves-key-scalable-production.html
Abstract

This paper reports design, fabrication, and characterization of an air-actuated microvalve and a micropump made of thermoplastic materials. The bonding process was carried out by a thermal fusion process with no particular surface treatment. The developed microvalve was used as a reversible switch for controlling both liquid flow and electrical field. The bonding strength of the fabricated microvalves could withstand liquid and air pressures of up to 600 kPa with no burst failure. The micropump, made of three connected microvalves actuated by compressed air, could generate a liquid flow rate of up to 85 µl/min. The proposed microvalve and micropump can be used as pre-fabricated off-the-shelf microfluidic functional elements for easy and rapid integration with thermoplastic microfluidic circuitries in a plug-and-play arrangement.

1 Introduction

Because of the essential role of flow control and manipulation in microfluidic applications, many investigations have been carried out to develop various designs of microvalves (Oh and Ahn 2006 ; Zhu et al. 2012 ; Jiang and Erickson 2013 ; Kang et al. 2013 ; Shiraki et al. 2015 ) and micropumps (Nguyen et al. 2002 ; Yobas et al. 2008 ; Qin et al. 2009 ; Zhang et al. 2015 ). Examples of deployed microvalves can be found in cell culture assays (Gómez-Sjöberg et al. 2007 ; Wu et al. 2008 ; Frimat et al. 2011 ), microfluidic drug screening (Ma et al. 2009 ; Nguyen et al. 2013 ), single-cell analysis (Irimia 2010 ), and droplet microfluidics (Zeng et al. 2009 ). Silicon-based microvalves and micropumps were developed at the early stages of microfluidic progress using surface and bulk micromachining technologies (Oh and Ahn 2006 ).
Over the last 15 years, however, different polymers have been adopted for the development of microvalves and micropumps, taking advantage of cheaper materials and easier fabrication methods. Generally, a microvalve is made of a flexible diaphragm sandwiched between a control chamber and a liquid chamber (control chamber/diaphragm/liquid chamber). Upon deformation of the diaphragm by an external means, the flow inside the fluidic chamber can be manipulated. In terms of fabrication process, microvalves are generally categorized into (1) built-in microvalves and (2) pre-fabricated microvalves. For built-in microvalves, the diaphragm is generally made of polydimethylsiloxane (PDMS). The control chambers and the liquid chambers can be made of PDMS, polymethylmethacrylate (PMMA) and cyclic olefin copolymer (COC) during the course of chip fabrication in the following arrangements: (PDMS/PDMS/PDMS) (Unger et al. 2000 ; Gómez-Sjöberg et al. 2007 ; Gu et al. 2010 ), (PMMA/PDMS/PMMA) (Zhang et al. 2009 ), and (COC/PDMS/COC) (Gu et al. 2010 ). Because of the small footprint of such microvalves, e.g., 100 µm × 100 µm (Unger et al. 2000 ), the built-in microvalve concept enables microfluidic large-scale integration (Melin and Quake 2007 ) with pneumatic actuation, suitable for high-throughput cell culture and single-cell analysis (Wu et al. 2004 ; Gómez-Sjöberg et al. 2007 ; Lii et al. 2008 ). It has been reported that PDMS with no surface treatment can absorb some small hydrophobic molecules, such as estrogen, during microfluidic drug screening (Regehr et al. 2009 ; Berthier et al. 2012 ). In addition, some uncured oligomer compounds from the polymeric network of PDMS can leach into the microchannel media, affecting cell membranes during cell culture (Regehr et al. 2009 ; Berthier et al. 2012 ). Also, some organic solvents swell PDMS or dissolve PDMS compounds (Lee et al. 2003 ). Such features of PDMS can hinder some applications, particularly microfluidic organ-on-chip devices for drug screening.
In order to mitigate these problems of using PDMS for microvalve fabrication, other elastomers, including Teflon (Teflon/Teflon/Teflon) (Grover et al. 2008) and Viton® (PMMA/Viton®/PMMA or COC/Viton®/COC) (Ogilvie et al. 2011), have been explored for built-in microvalves. Pre-fabricated microvalves (Elizabeth Hulme et al. 2009) can be made in advance and then integrated with a pre-fabricated microfluidic chip. In contrast to built-in microvalves, these microvalves have larger footprints, at the millimeter scale. Because of their ease of incorporation, such microvalves are therefore suitable for plug-and-play applications where low-density integration of microfluidic components is required. Some explored applications are gradient generators (Elizabeth Hulme et al. 2009), immunoassays (Weibel et al. 2005), and on-chip lifelong observation of C. elegans (Hulme et al. 2010). Depending on the microvalve design, a pre-fabricated microvalve can be actuated manually by a screw (Weibel et al. 2005; Hulme et al. 2010), electrically by a solenoid actuator (Weibel et al. 2005), or pneumatically (Elizabeth Hulme et al. 2009). Pre-fabricated valves are mainly made of PDMS in large quantities and are embedded into microfluidic chips during the casting of PDMS on the master (Hulme et al. 2010). Pre-fabricated valves made from PDMS are suitable for PDMS-based microfluidic devices; there is therefore a lack of microvalves for user-friendly integration with microfluidic chips made of thermoplastic materials. In recent years, thermoplastic materials have gained significant popularity for microfluidic applications (Tsao and DeVoe 2009), being suitable for high-volume, low-cost production (Chin et al. 2012). They also have lower oxygen permeability than PDMS (Ochs et al. 2014). Materials with low oxygen permeability are required for making devices that create oxygen-controlled conditions on a microfluidic chip for tumor-microenvironment and hypoxia studies (Byrne et al. 2014).
In this paper, we report a systematic approach for the design, fabrication, and characterization of a plug-and-play pre-fabricated microvalve and micropump for easy integration with microfluidic chips made of thermoplastic materials. As shown in Fig. 1, the normally open microvalve was made of a flexible thermoplastic polyurethane (TPU) diaphragm sandwiched between a liquid chamber and an air chamber, both made of PMMA. Upon increasing the air pressure inside the control chamber, the diaphragm deformed downward into the liquid chamber (displacement chamber) to stop the liquid passing through the microvalve. The different layers of the microvalve were bonded together through a thermal fusion process with no particular surface treatment. Fig. 1 a Concept design of the plug-and-play microvalve for integration with thermoplastic microfluidic chips, b schematic of the microvalve-chip assembly, c a cross-sectional view of the microvalve with an embedded connector, d an exploded view of the microvalve: (1) bottom component to accommodate the liquid chamber, (2) TPU flexible diaphragm, (3) intermediate component to accommodate the air chamber and the embedded connector, (4) the embedded connector, and (5) top component. 2 Valve design The single microvalve module was designed around five components, Fig. 1b-d. Components (1), (3), and (5) were fabricated from PMMA using a micromilling process. Both the liquid and air chambers were designed to have a diameter of 4 mm. Upon applying pressure, the diaphragm sandwiched between the bottom and the intermediate components deformed downward into the liquid chamber (displacement chamber) to block the inlet port, which was located at the center of the liquid chamber. The spacing between the inlet port and the outlet port was set to 1.0 mm.
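As a rough cross-check of the deflections this geometry requires, a minimal sketch using linear (small-deflection) clamped circular plate theory can be written down. The chamber radius, film thickness, and the 35 kPa actuation pressure are taken from the text; the TPU elastic modulus and Poisson ratio below are assumed, typical-order values for an 85A polyurethane, not values from the paper.

```python
# Order-of-magnitude check: linear (small-deflection) clamped circular
# plate theory for the 4 mm TPU diaphragm. Material constants are
# illustrative assumptions (NOT from the paper): E ~ 25 MPa, nu ~ 0.45
# are plausible for an 85A thermoplastic polyurethane.

def plate_center_deflection(p, a, t, E, nu):
    """Center deflection w0 = p*a^4 / (64*D) of a clamped circular plate
    under uniform pressure p, where D = E*t^3 / (12*(1-nu^2)) is the
    flexural rigidity."""
    D = E * t**3 / (12.0 * (1.0 - nu**2))
    return p * a**4 / (64.0 * D)

p = 35e3            # Pa, actuation pressure used later in the FEM example
a = 2e-3            # m, chamber radius (4 mm diameter)
t = 150e-6          # m, nominal TPU film thickness
E, nu = 25e6, 0.45  # assumed TPU properties

w0 = plate_center_deflection(p, a, t, E, nu)
print(f"linear-theory center deflection: {w0 * 1e6:.0f} um")
```

With these assumed constants the linear estimate comes out near 1 mm, several times the film thickness, which signals that pure bending theory is invalid at this scale: membrane stretching and material nonlinearity dominate. This is consistent with the large-displacement hyperelastic (Mooney-Rivlin) FEM adopted later in the paper, which predicts a much smaller ~300 µm deflection at 35 kPa.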
Such a design enables the microvalve to be integrated with a microfluidic chip easily using different bonding techniques, including thermal bonding, ultrasonic welding, or adhesive bonding. The microvalve can be implemented on a microfluidic chip in a plug-and-play manner by linking its inlet and outlet ports to the corresponding ports of a fluidic channel embedded in the microfluidic chip, Fig. 1. For an easy chip-to-world connection, a silicone rubber connector was fabricated and embedded within the microvalve structure. The connector was used to connect the microvalve to a compressed-air regulator for valve actuation. 3 Material selection A micromilling process was used to make the circular control and liquid chambers from poly(methyl methacrylate), PMMA. PMMA has been widely used for rapid prototyping of microfluidic chips by micromilling and laser ablation techniques (Waldbaur et al. 2011). It can also be injection molded (Becker and Gärtner 2008) for the mass production of commercial chips. An off-the-shelf thermoplastic polyurethane (TPU) film with a thickness of 150 ± 15 µm, Bothane 85A (Texin®) from BAYER®, was selected for the flexible diaphragm. Polyurethane elastomers have gained significant attention for microfluidic chip fabrication (Piccin et al. 2007; Wu et al. 2012; Gu et al. 2013). They have high mechanical strength, resiliency, and good resistance to abrasion (Gu et al. 2013). The glass transition temperature (T g) of the TPU film was also measured: dynamic mechanical thermal analysis upon heating from −120 to 140 °C revealed that TPU has a low T g of −50 °C, while the T g of PMMA is generally reported around 110 °C. In addition, the TPU film was optically characterized. High light absorbance at infrared wavelengths of 6-11 µm was observed in transmittance measurements. This optical property makes the TPU film a suitable material for ablation and cutting using a CO 2 laser beam, which has an inherent wavelength of 10.6 µm (Hong et al.
2010). 4 Numerical simulation and experimental validation of diaphragm deflection The finite element method (FEM), using ANSYS 11.0 software, was used to simulate the deflection of the circular diaphragm under different air pressures. All geometries, forces, and boundary conditions were axisymmetric; therefore a two-dimensional (2D), 8-node-element axisymmetric model was established in ANSYS to mesh the computational domain. To establish the FEM model, we performed a uniaxial tension test on the TPU film to determine its stress-strain characteristics. The TPU film showed nonlinear hyperelastic behavior, and a Mooney-Rivlin (nine-parameter) constitutive model was used to fit the data. The diameter of the liquid and control chambers was set at 4 mm. The following boundary conditions and assumptions were applied to the computational domain:
- axisymmetric setup with respect to the center of the valve geometry;
- displacement in the horizontal direction constrained along the axis of symmetry;
- all degrees of freedom constrained on the bottom surface of the microvalve seat;
- load applied as a uniformly distributed pressure on the top surface of the diaphragm;
- contact elements used to simulate the contact conditions at the diaphragm-valve seat interface.
For the contact elements, a friction coefficient of 0.1 was assumed at the diaphragm-valve seat interface. The large-displacement static option was used to solve the model. To evaluate the predictions of diaphragm deformation obtained from the FEM model, a test chip was fabricated in which the diaphragm diameter was the same as in the FEM model (4 mm), Fig. 2a, b. The test chip was made of the same TPU film sandwiched between two PMMA slabs through a thermal bonding process. As shown in Fig.
2 c, the test chip enabled the measurement of diaphragm deflection under different air pressures for comparison with the results obtained from the FEM analysis. Three diaphragms were tested. In the experiment, the vertical deflection was measured optically at the central point of the diaphragm using a ZETA-20 3D Imaging & Metrology System. In general, the FEM predictions follow trends similar to the experimental results. Both FEM and experimental results showed that the diaphragm deflected more as the air pressure increased. The experimental results showed higher deflections than the FEM predictions, characterized by offsets in the vertical direction. Upon examination of the diaphragms, it was noticed that there was initial warpage of the diaphragm introduced by the process of bonding the TPU diaphragm to the PMMA layers. This initial warpage may lead to a higher deflection compared with a flat diaphragm. Figure 2d shows a displacement result obtained from the FEM model when the diaphragm was subjected to a pressure of 35 kPa. The periphery of the diaphragm was constrained from displacing downward, but the central region showed the largest displacement of ~300 µm, causing it to just come into contact with the edge of the inlet hole. The inlet was located at the center of the microvalve seat inside the liquid chamber. Fig. 2 a Fabricated test chip for measuring diaphragm deflection under different air pressures, b detailed schematic of the test chip: (1) window for microscopy, (2) PMMA layer, (3) TPU diaphragm, (4) PMMA layer, (5) air entry for diaphragm actuation, and (6) PMMA layer for the embedded connector, c experimental characterization results for the three diaphragms of the test chip shown in a in comparison with the FEM analysis of membrane deflection versus applied air pressure, d 2D axisymmetric FEM analysis showing deflection of the microvalve diaphragm under a pressure of 35 kPa applied uniformly to the top of the diaphragm.
The bottom of the membrane was seen to be just touching the edge of the inlet hole. Legend unit: meters. Figure 3 shows the FEM prediction of the contact pressure between the diaphragm and the valve seat at pressures of 40 and 100 kPa. At 40 kPa, a contact region formed around the periphery of the inlet hole as the diaphragm pressed against it. The width of the contact ring was about 50 µm, with a peak in the contact pressure distribution at the edge of the hole. When the pressure was increased to 100 kPa, the width of the contact ring increased to about 450 µm; the peak due to the sharp edge of the hole remained. Most of the contact region maintained a contact pressure of over 100 kPa, which was essential for leakage-free sealing. Fig. 3 Contact pressure between the flexible diaphragm and the valve seat from FEM simulation, a at 40 kPa pressure, b at 100 kPa pressure. Legend units: Pascal. 5 Valve fabrication Since both PMMA and TPU are thermoplastics, the fabrication of the whole valve module took advantage of a direct thermal bonding technique with no intermediate adhesive. PMMA components were fabricated using a micromilling process, while TPU films were cut into circular shapes using either a CO 2 laser beam or a blade cutter. The thermal bonding process comprised two major steps: a thermal pre-treatment step and a low-pressure bonding step in which the components were bonded together using a metallic jig. During the thermal pre-treatment, the surfaces of the PMMA components and TPU films were cleaned with isopropyl alcohol (IPA) and then blown with filtered air. They were subsequently kept in a vacuum oven at 80 °C for 24 h to facilitate the removal of any volatile residuals from the components. After 24 h, the TPU film was removed from the vacuum oven and its surface was cleaned thoroughly again using a cleanroom wiper, IPA, and DI water, followed by an air blow to remove any residuals and debris.
Then it was returned to the vacuum oven and kept there for at least 6 h. This thermal pre-treatment can enhance the strength and quality of the bond formed in the low-pressure bonding step. The thermally treated PMMA components and the TPU film were removed from the oven and aligned on top of each other. The assembled components were sandwiched in a metallic jig for the thermal bonding process. Subsequently, the whole assembly was put in an oven (Memmert, model UFE600), heated, and then kept at a temperature of 115 °C for 60 min. The assembly was then cooled down to 60 °C within 60 min. The metallic jig had adjustable screws to control the bonding pressure; the applied torque to adjust the screws was 1 Nm. After the bonding process, the fabricated valves were inspected for any visible distortion, cracks, or delamination of the multiple layers. 6 Valve characterization As shown in Fig. 4, fabricated valves were tested on a test chip to visualize the diaphragm deformation upon applying pressurized air to the control chamber. A pressure test setup was also designed and fabricated to examine microvalve operation and the mechanical strength of the thermal bond between TPU and PMMA under different operating pressures. It was observed that the bonding strength of PMMA/TPU/PMMA was sufficient to withstand liquid and air pressures of up to 600 kPa with no burst failure. As shown in Fig. 5, characterization experiments were carried out for control air pressures of 100, 200, and 300 kPa. Leakage-free operation was realized at liquid pressures lower than the air pressure. It was observed that when the liquid pressure approached the pressure of the actuating air, liquid started to leak through the diaphragm-valve seat interface. Fig.
4 a Individual fabricated microvalve, b integrated microvalve on the test chip using double-sided adhesive Kapton tape, c liquid chamber of the valve shown in b before diaphragm actuation, d liquid chamber of the valve shown in b after actuation under 100 kPa air pressure. The white area shows the diaphragm-valve seat interface. Fig. 5 Microvalve leakage rate versus liquid pressure under different actuation air pressures. The function of the microvalve was also demonstrated in an electrical isolation test, as a reversible electrical switch. As shown in Fig. 6, a microvalve was integrated onto an electrophoresis chip. To start the characterization test, the channel was filled with TAE buffer solution (40 mM Tris, 20 mM acetic acid, 1 mM EDTA, pH 8.3). Then the whole channel between the inlet and the outlet was filled with a 0.5 % agarose gel. After the gel solidified, a 1-kb DNA ladder (250 to 10,000 base pairs) mixed with SYBR Green I (100×) was deposited on top of the inlet. A potential of 110 V was applied to the electrodes, and the DNA band travelled 6.5 mm toward the positive electrode in 18 min. The microvalve was then kept closed for 6 min by applying an air pressure of 250 kPa. During this period, the leading edge of the DNA band showed a minor movement of less than 0.1 mm, which can be attributed to diffusion. After that, the pressurized air was cut off to open the valve, and a movement of 2.4 mm was observed within the next 6 min. This observation indicated that the air-actuated microvalve was able to isolate two adjacent fluidic chambers as a reversible electrical switch. Fig. 6 DNA electrophoresis chip with an integrated valve (at the left side of the outlet). For all figures, the applied voltage was 110 V.
a Start time (t = 0), microvalve open, b after 18 min, valve open, DNA band travelled 6.5 mm, c after 24 min, microvalve kept closed for 6 min, d after 30 min, valve open for 6 min, DNA travelled 2.4 mm. 7 Plug-and-play peristaltic micropump Making use of the microvalve design, a peristaltic pumping scheme was achieved by the consecutive integration and operation of three interconnected liquid chambers on a substrate. The working principle of the micropump was based on the deflection of three TPU diaphragms, actuated by three air entries on top of the diaphragms, to generate a peristaltic-like effect for liquid pumping. The fabrication process was similar to the microvalve fabrication described earlier. To investigate the frequency response of the micropump, the effect of two actuation frequencies, 3.3 and 5 Hz, on the pumping flow rate was examined. An in-house-developed air pump with adjustable actuation frequency was used to actuate the diaphragms. The pumping flow rate decreased from 89 ± 6 to 65 ± 5 µl/min as the actuation frequency dropped from 5 to 3.3 Hz, using 20 kPa air pressure for actuation. This decline in the flow rate can be associated with the longer residence time of the flow in the pumping chambers at the lower actuation frequency. The impact of the downstream pressure at the discharge port on the overall pumping rate was also investigated. The test setup had one liquid column at the suction port and one liquid column at the discharge port; by changing the difference between the heights of the two columns, the pressure at the discharge port was controlled. To measure the flow rate, the liquid pumped to the discharge column was collected with a 1-ml syringe from the highest point of the discharge column over a given time. As the downstream pressure at the discharge port was increased from 12 to 42 mm of liquid column height, the pumping flow rate decreased from 82 to 55 µl/min, Fig. 7d. Fig.
7 a Schematic cross-sectional view of the micropump comprising three interconnected liquid chambers integrated on a single substrate. b Bottom view of a fabricated micropump integrated on a test chip using Kapton tape. c Pumping flow rate versus pressure difference between the suction and discharge ports of the micropump; the diaphragm actuation frequency was 5 Hz with an actuation air pressure of 20 kPa. 8 Conclusions This paper reported the fabrication and characterization of an air-actuated, normally open microvalve and a micropump made of thermoplastics. The microvalves could withstand liquid pressures of up to 600 kPa with no burst failure, and leakage-free operation was realized at liquid pressures lower than the air pressure. Characterization results showed that the microvalve can be used for controlling both liquid flows and electrical fields. No particular surface treatment was used in the bonding process. The plug-and-play microvalves and micropumps can be easily sterilized and autoclaved for cell-based microfluidic devices and microfluidic organ-on-chip platforms. Multiple valves can be integrated into one microfluidic device to provide complex flow-manipulation functions. The fabricated microvalves with embedded chip-to-world connectors are simple to operate. Such design features make them off-the-shelf functional elements with easy integration onto thermoplastic microfluidic chips. The materials and the proposed fabrication process are appropriate for the mass production of microfluidic components and circuitries using thermoforming processes, particularly injection molding. In addition, other thermoplastic materials, including COC and PC with respective T g of 80 and 148 °C (Gärtner 2008), can be explored for bonding to TPU film to make the proposed functional elements.
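One sanity check implied by the pumping rates reported for the peristaltic micropump: dividing flow rate by actuation frequency gives the liquid volume displaced per peristaltic cycle. The flow rates and frequencies below are taken from the micropump section; the per-cycle arithmetic itself is ours, not a calculation the paper performs.

```python
# Per-cycle displaced volume implied by the reported pumping rates.
# Flow rates (ul/min) and actuation frequencies (Hz) are from the text;
# the per-cycle calculation is a simple consistency check.

def volume_per_cycle(flow_ul_per_min, freq_hz):
    """Liquid volume displaced per actuation cycle, in microliters."""
    return flow_ul_per_min / (freq_hz * 60.0)

v_5hz = volume_per_cycle(89.0, 5.0)    # at 5 Hz, 20 kPa actuation
v_33hz = volume_per_cycle(65.0, 3.3)   # at 3.3 Hz, 20 kPa actuation
print(f"{v_5hz:.2f} ul/cycle at 5 Hz, {v_33hz:.2f} ul/cycle at 3.3 Hz")
```

Both conditions displace roughly 0.3 µl per cycle. The slightly larger per-cycle volume at 3.3 Hz is consistent with the longer residence time noted in the text, which would allow fuller chamber filling per stroke even as total throughput drops with frequency.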
The elusive 'lab on a chip', capable of shrinking and integrating operations normally performed in a chemical or medical laboratory onto a single chip smaller than a credit card, may soon be realized thanks to disposable, plug-and-play microfluidic devices developed by A*STAR researchers. Microfluidic systems use networks of channels much narrower than a human hair to control the movement of minuscule amounts of fluids. Recent advances in microfluidics technology have proven invaluable for immediate point-of-care diagnosis of diseases and have greatly improved enzymatic and DNA analysis. High-throughput microfluidic systems are also being employed in stem cell studies and in the discovery of new drugs. A stumbling block for the successful miniaturization and commercialization of fully integrated microfluidic systems, however, has been the development of reliable microfluidic components, such as microvalves and micropumps. Zhenfeng Wang and colleagues from the Singapore Institute of Manufacturing Technology (SIMTech), A*STAR, have removed that obstacle by developing an efficient and scalable method to fabricate disposable plug-and-play microfluidic devices. "Integrating valves and pumps into thermoplastic devices is usually challenging and costly because the fabrication process is very complicated," says Wang. "Mass-producing the microvalve module separately from the main device, however, makes the fabrication of the main device relatively simple and robust." Microvalves consist of a flexible diaphragm sandwiched between a control chamber and a liquid chamber; by applying pressure from an external mechanical pump, or through electrostatic or pneumatic forces, the diaphragm can be deformed, allowing for the manipulation of fluid flow within the chamber.
The researchers fabricated a micropump consisting of three microvalves, but had to find a suitable material for the microvalve membranes, which must be flexible but also easily bonded to other parts of the microvalve without the use of adhesives. "We found that thermoplastic polyurethane film works well as the membrane and can be bonded tightly with the main body, which is made from polymethyl methacrylate," explains Wang. The design of the microvalve also presented a challenge to the researchers, which they overcame by using finite element method simulation—a numerical tool used to solve complex structural problems—to generate the design guidelines, which were then verified by experimentation. Using the SIMTech Microfluidics Foundry, the researchers are currently developing a number of 'on-chip' modules and are expanding their capabilities in the design, prototyping and manufacture of polymer-based microfluidic devices. "Further miniaturizing the size of microvalve modules could increase the scale of integration and broaden the range of potential applications," says Wang.
10.1007/s10404-015-1582-4
Computer
Researchers create a new etching method to improve smartphone circuit performance
Thi-Thuy-Nga Nguyen et al, Dry etching of ternary metal carbide TiAlC via surface modification using floating wire-assisted vapor plasma, Scientific Reports (2022). DOI: 10.1038/s41598-022-24949-1 Journal information: Scientific Reports
https://dx.doi.org/10.1038/s41598-022-24949-1
https://techxplore.com/news/2023-02-etching-method-smartphone-circuit.html
Abstract Dry etching of the ternary metal carbide TiAlC has been developed for the first time by transferring from wet etching to dry etching using a floating wire (FW)-assisted Ar/ammonium hydroxide vapor plasma. FW-assisted non-halogen vapor plasma generated at medium pressure can produce high-density reactive radicals (NH, H, and OH) for TiAlC surface modifications such as hydrogenation and methylamination. A mechanism for dry etching of TiAlC is proposed, based on the formation of volatile products from the modified layer. Introduction In a fin-type or nanosheet field effect transistor (FET) of a logic semiconductor device, it has been proposed to use metal gate materials, for example, metal carbides (TiC, TiAlC) and metal nitrides (TiN, TaN, AlN, TiAlN) 1, 2, 3, 4, 5. Ternary metal compounds such as TiAlC belong to the class of high-melting-point, high-hardness, and high-wear-resistance materials 1, 6, 7. Conventionally, the TiAlC and TiC films in semiconductor devices are etched by wet etching using H 2 O 2 mixtures 5, 8, 9, 10. However, the poor metal removability of wet etching requires a prolonged etching time to fully remove the target metals, and in the worst case the metal gate can be damaged 5. To fabricate the next generation of FETs, the semiconductor industry strongly demands an etching method that enables controlled, selective, and isotropic removal of metal carbides (TiAlC, TiC, and AlC) at the atomic-layer level 5, 9, 10. No dry etching (plasma etching) of the ternary material TiAlC with atomic-level control has been developed yet. Atmospheric-pressure plasma (APP) and medium-pressure plasma techniques, whose chemical kinetics differ greatly from those of low-pressure plasma, make it possible to miniaturize equipment size and reduce fabrication cost and energy consumption 11, 12, 13, 14, 15, 16, 17.
Medium-pressure plasma (0.2–50 kPa) can produce a higher plasma density than vacuum plasma and a larger plasma volume than APP 18, 19, 20. To improve the plasma density at the remote region where the substrate is placed, we have inserted a long floating metal wire inside the discharge tube to enhance the electric field not only near the copper coil but also at the remote region. A rich radical source (10^14 cm^-3) near the sample surface, far from the coil region, can therefore be obtained 15. This rich radical source can produce a large amount of etchant or co-reactant species to enhance the reaction rate at the sample surface. Such a radical-rich environment plays an important role in controlling the isotropic etching of 3D multilayer semiconductor devices. Here we have developed, for the first time, a dry etching method for metal carbides such as the ternary material TiAlC by using a floating wire (FW)-assisted vapor plasma of Ar gas mixed with vapor sources of NH 4 OH-based mixtures at medium pressure. The mechanisms of wet and dry etching can differ: in wet etching the formed compounds are dissolved in solution, whereas in plasma etching the formed compounds should be volatile in the gas phase. Nevertheless, wet etching offers many useful ideas for developing new etching chemistries for traditional materials or dry etching methods for new materials. In this study, we aim to develop a new etching method (wet-dry etching, or wet-like plasma etching) that combines the advantages of wet etching (high isotropy and selectivity) and dry etching (high controllability) for new or hard-to-etch materials. Surface reactions play a key role in developing atomic layer etching (ALE) processes, which normally proceed in multiple steps: surface modification to reduce the surface energy of the sample in the first step, followed by removal of the modified layer in the next step.
Here, surface modification of the TiAlC film, such as hydrogenation and methylamination, was obtained by controlling the active radicals NH, H, and OH. The treated TiAlC surface can then be removed via the formation of modified layers. Lastly, a mechanism for plasma etching of metal carbides (TiAlC, TiC, AlC) is proposed. The new etching method will be explored for etching metals and metal compounds such as nitrides, carbides, and oxides to determine whether the selectivity and isotropy commonly seen in wet etching also occur in dry etching. Materials and methods Sample preparation TiAlC films were prepared on Si wafers by vacuum evaporation with Ti and Al sources and C 2 H 2 gas. TiAlC/Si samples were prepared with sizes of 15 mm × 15 mm (for wet etching) and 15 mm × 20 mm (for dry etching). The TiAlC films were analyzed using an ellipsometer (M-2000, J.A. Woollam Co.) with a Xe arc light source (FLS-300). A model was used for spectral fitting of the ellipsometric data of the TiAlC sample, comprising a top layer (native oxide, deposited layer, or modified layer), a TiAlC layer, an interface layer, and the Si substrate, as shown in Fig. 1a. The pristine TiAlC film on the Si substrate (TiAlC/Si) has a thickness of around 35 nm. Figure 1 (a) A model for spectral fitting of ellipsometric data. A stratified layer model is constructed for the TiAlC sample, including the top layer (native oxide, deposited layer, or modified layer), TiAlC layer, interface layer, and Si wafer. (b) Dispersions of the refractive index and extinction coefficient of the TiAlC film as functions of wavelength, obtained by the Gen-Osc model with the fitted parameters listed in Table 1. Table 1 Best-fit parameters of the TiAlC layer obtained by the Gen-Osc model. The dielectric function of the TiAlC film is expressed by the Gen-Osc model involving one Drude and three Lorentz oscillators 21. The Gen-Osc model with the best-fit parameters is shown in Table 1.
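The Drude-plus-three-Lorentz functional form behind such a Gen-Osc model can be sketched as follows. The oscillator parameters in this sketch are placeholders only, since the best-fit values of Table 1 are not reproduced in the text; with the actual fitted parameters the model yields n = 2.58 and k = 0.83 at 633 nm.

```python
import numpy as np

# Functional form of a Gen-Osc style model for a metallic film: one
# Drude term plus three Lorentz oscillators, with photon energy E in eV.
# All parameters below are PLACEHOLDERS, not the Table 1 best-fit values.

def dielectric(E, eps_inf, drude, lorentz):
    """eps(E) = eps_inf - Ep^2/(E^2 + i*G*E)
               + sum_j Aj / (E0j^2 - E^2 - i*gj*E)."""
    Ep, G = drude
    eps = eps_inf - Ep**2 / (E**2 + 1j * G * E)
    for A, E0, g in lorentz:
        eps += A / (E0**2 - E**2 - 1j * g * E)
    return eps

def nk(E, *args):
    """Refractive index n and extinction coefficient k from eps = (n+ik)^2."""
    N = np.sqrt(dielectric(E, *args))   # principal branch: n >= 0, k >= 0 here
    return N.real, N.imag

E_633nm = 1239.84 / 633.0               # photon energy (eV) at 633 nm
params = (2.0, (3.0, 0.5),              # eps_inf, (Ep, Gamma) - placeholders
          [(5.0, 4.0, 1.0), (3.0, 6.0, 1.5), (1.0, 8.0, 2.0)])
n, k = nk(E_633nm, *params)
print(f"n = {n:.2f}, k = {k:.2f} at 633 nm (placeholder parameters)")
```

In an actual fit, the parameters would be adjusted by least squares against the measured ellipsometric spectra; the sketch only illustrates how n and k are recovered from the oscillator sum.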
Dispersions of the optical constants, including the refractive index (n) and extinction coefficient (k), of the TiAlC film as functions of wavelength from 245 to 1000 nm were obtained from the Gen-Osc model, as shown in Fig. 1b. The n and k values at 633 nm are 2.58 and 0.83, respectively. The cross-sectional microstructure and film thickness of the samples were characterized with a cold field-emission scanning electron microscope (FE-SEM, SU-8230, Hitachi). Wet etching of TiAlC To develop new etching chemistries for TiAlC, both halogen-containing and non-halogen mixtures were used to test potential etching chemistries. The 35 nm-TiAlC/Si samples (15 mm × 15 mm) were immersed in liquid mixtures of hydrogen peroxide solution (H 2 O 2, 30 wt%), hydrochloric acid solution (HCl, 36 wt%), ammonium hydroxide solution (NH 4 OH, 29 wt%), and deionized water H 2 O. Table 2 lists the four experimental conditions (L1, L2, L3, and L4) of liquid mixtures, mixed at room temperature, used for wet etching of the TiAlC film. Dry etching of TiAlC by FW-assisted vapor plasma A dry etching method for TiAlC was developed using an FW-assisted vapor plasma. A long floating metal wire was placed inside the discharge tube to enhance the plasma density, which provides a rich radical source at the remote region 14, 15. The FW-assisted plasma can generate high densities of radicals, electronically excited particles, and photons in the visible and UV range, in which the radicals are able to travel long distances. Charged particles from atmospheric-pressure plasmas have very short lifetimes and can hardly reach a substrate surface placed at a large distance 12, 22.
To address this, the FW plasma is designed to assist the short-lived particles in reaching a substrate surface placed far from the plasma source. The FW-assisted plasma was connected to a process chamber, a vacuum dry pump, and a heating unit, as shown in Fig. 2a. It consists of a 500-mm-high quartz discharge tube (inner diameter 6 mm) with a three-turn Cu coil connected to a very-high-frequency (VHF) power supply at 100 MHz. A polytetrafluoroethylene (PTFE) seal was used to connect the discharge tube to the chamber. The FW, made of metal wire, is coated with a protective material to avoid chemical reactions with the plasma species. The distance between the center of the copper coil and the sample surface is 140 mm, and the distance between the sample surface and the discharge tube is 2 mm. The working pressure can be controlled from atmospheric pressure down to medium pressure using a rough valve (RV) and a fine valve (FV) between a dry pump (Kashiyama, NeoDry 15E) and the process chamber, and was recorded by a pressure gauge (Baratron MKS, 628B). In this study, the working pressure was controlled at 0.64 kPa. Figure 2 (a) Schematic structure of the FW-plasma system; vapors can be introduced from the upstream region. (b) Top view of the measurement setup for OES of the FW plasma exposing the sample. The vapor was flowed from upstream with Ar gas to generate a remote FW-Ar/upstream vapor plasma along the discharge tube. Liquid mixtures including deionized water H 2 O and two ammonium hydroxide solutions (NH 4 OH, 28 wt% and NH 4 OH, 17 wt%) were prepared. To generate the FW-Ar/vapor plasma, an Ar gas flow of 1.5 standard liters per minute (slm) was used. The saturated vapor pressures of 100%, 28%, and 17% ammonium hydroxide at 25 °C are 1007 kPa, 83 kPa, and 30 kPa, respectively 23, all higher than the working pressure of 0.64 kPa used in this study.
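The comparison in the last sentence can be made explicit with a one-line feasibility check: for stable vapor delivery, each canister liquid's saturated vapor pressure must exceed the working pressure. The pressures are the values quoted above; the margin calculation itself is ours.

```python
# Feasibility check from the quoted values: saturated vapor pressure of
# each canister liquid at 25 C must exceed the 0.64 kPa working pressure
# for the vapor source to sustain delivery into the discharge tube.

P_WORK_KPA = 0.64

sat_vapor_kpa = {            # saturated vapor pressures at 25 C (kPa)
    "NH4OH 100%": 1007.0,
    "NH4OH 28%": 83.0,
    "NH4OH 17%": 30.0,
}

margins = {name: p / P_WORK_KPA for name, p in sat_vapor_kpa.items()}
for name, m in margins.items():
    print(f"{name}: {m:.0f}x above working pressure")
```

Even the most dilute solution sits well over an order of magnitude above the working pressure, so all three mixtures can feed the plasma at 0.64 kPa.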
Table 3 lists the three experimental conditions (P1, P2, and P3) used to generate FW-Ar/vapor plasmas with various vapor mixtures injected from the upstream region for TiAlC treatment. The FW-Ar/vapor plasmas were generated at 100 W, and the temperature of the liquid canister (T can) was controlled at 70 °C. The substrate temperature (T sub) resulting from the plasma discharge was measured by a thermocouple (around 150 °C; no additional heater was used). Plasma diagnostics The optical emission spectra (OES) of the FW-Ar/vapor plasmas, including the emissions of Ar, OH, NH, Hβ, Hα, and O, were detected using a spectrometer (Ocean Photonics, HR4000CG-UV-NIR) over the wavelength range 200 to 900 nm. The measurement point was set on the TiAlC film surface (Fig. 2b); the distance between the sample center and the head of the optical fiber is 150 mm. Material characterization To analyze the surface modification of the TiAlC surface, X-ray photoelectron spectra (XPS) were obtained using a spectrometer (ESCALAB 250; Vacuum Generator, UK) equipped with an Al Kα (photon energy = 1486.6 eV) source in an analysis chamber evacuated to a base pressure of 5 × 10^-7 Pa using an ion pump. Peak deconvolution and elemental concentrations were analyzed with the Advantage program. The depth profile of atomic concentration in the initial (pristine) TiAlC film deposited on the Si substrate was evaluated with Ar sputtering at 3 keV and 1 µA over a sputter area of 2 mm × 2 mm for 10 min. Results The initial (pristine) TiAlC film deposited on the Si substrate was evaluated by the depth profile of atomic concentration, as shown in Fig. S1. After removing the native oxide (Al-O, Ti-O, C=O), which contained around 40% oxygen atomic concentration, the Ti:Al:C:O ratio is around 28:22:40:10. Oxygen also exists inside the TiAlC film at around 10%.
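Atomic ratios like the Ti:Al:C:O ≈ 28:22:40:10 quoted above come from standard XPS quantification: each core-level peak area is divided by its relative sensitivity factor (RSF) and normalized over all elements. The sketch below shows that calculation; both the peak areas and the RSFs are illustrative placeholders chosen to reproduce the quoted ratio, not values from the paper.

```python
# Standard XPS atomic-concentration estimate: at% of element i is
# (I_i / S_i) / sum_j (I_j / S_j), with I the peak area and S the
# relative sensitivity factor. Areas and RSFs below are illustrative
# placeholders, NOT measured values from the paper.

def atomic_percent(areas, rsf):
    """Atomic percentages from peak areas and relative sensitivity factors."""
    corrected = {el: areas[el] / rsf[el] for el in areas}
    total = sum(corrected.values())
    return {el: 100.0 * v / total for el, v in corrected.items()}

# Hypothetical intensities tuned so the corrected values land on the
# bulk ratio quoted in the text (Ti:Al:C:O ~ 28:22:40:10).
areas = {"Ti 2p": 56.0, "Al 2p": 11.0, "C 1s": 40.0, "O 1s": 6.6}
rsf = {"Ti 2p": 2.0, "Al 2p": 0.5, "C 1s": 1.0, "O 1s": 0.66}

at = atomic_percent(areas, rsf)
print({el: round(v, 1) for el, v in at.items()})
```

In practice the RSFs come from the instrument vendor's library (e.g., within the peak-fitting software), and the peak areas from deconvolution of the measured spectra, as described in the text.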
The oxygen concentration increases to 20% at the interface between the TiAlC film and the Si substrate, which was assigned to Si–O–C bonding. Wet etching of TiAlC To develop an etching chemistry for new materials, wet etching of TiAlC was conducted with different liquid mixtures, including chlorine-based and non-chlorine solutions. Figure 3a shows cross-sectional SEM images of TiAlC/Si samples before wet etching (35-nm TiAlC, L0) and after wet etching in chlorine-based and non-chlorine solutions. For the chlorine-based solutions, no etching occurred with the mixture of HCl, H2O2, and H2O at a ratio of 1:1:6 (condition L1), while a low etch rate of 0.8 nm/min was obtained with a mixture of HCl and H2O2 (10:1, condition L2). Figure 3 (a) SEM images of (L0) the pristine TiAlC/Si sample and TiAlC/Si samples after wet etching in various solutions: (L1) HCl/H2O2/H2O (1:1:6) for 10 min, (L2) HCl/H2O2 (10:1) for 10 min, (L3) 30% H2O2 for 5 min, and (L4) NH4OH/H2O2/H2O (2.2:3:52) for 10 min. (b) Thickness of the TiAlC film as a function of wet etching time in the NH4OH/H2O2/H2O mixture. Film thickness was evaluated by both ellipsometry and scanning electron microscopy. Full size image For the non-chlorine solutions, no etching occurred with the H2O2 solution (condition L3), whereas a high etch rate of 2.3 nm/min was obtained with the NH4OH/H2O2/H2O mixture (2.2:3:52, condition L4). Wet etching of TiAlC at different etch times was therefore performed with the NH4OH/H2O2/H2O mixture. The surface modification of the TiAlC film after wet chemical etching in the NH4OH/H2O2/H2O mixture at room temperature was analyzed by XPS, as shown in Fig. S2. The intensities of Ti 2p and Al 2p were significantly reduced after 5 min and 10 min of etching, whereas the C–C peak at 284.8 eV (C 1s) increased significantly. N–H and C–N peaks (around 400 eV) appear in the N 1s spectra after etching.
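The etch rates above follow from before/after thickness measurements. A minimal sketch of that bookkeeping (the before/after thicknesses here are hypothetical, chosen only to reproduce the rates reported in the text):

```python
def etch_rate_nm_per_min(initial_nm, final_nm, minutes):
    """Average etch rate from a pair of film-thickness measurements."""
    return (initial_nm - final_nm) / minutes

# Hypothetical 10 min etches of a 35 nm TiAlC film, matching the reported rates:
# condition L2 (HCl/H2O2, 10:1): 8 nm removed -> 0.8 nm/min
print(etch_rate_nm_per_min(35.0, 27.0, 10))
# condition L4 (NH4OH/H2O2/H2O, 2.2:3:52): 23 nm removed -> 2.3 nm/min
print(etch_rate_nm_per_min(35.0, 12.0, 10))
```

At the L4 rate, the full 35 nm film would be consumed in roughly 15 min if the rate stayed constant, which it does not (see the carbon-barrier discussion below the figure).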
This indicates that compounds containing C–C and C–N bonds formed during wet etching of the TiAlC surface are unable to dissolve in the NH4OH/H2O2/H2O mixture. As a result, Ti and Al dissolve in the solution, while the TiAlC surface becomes covered with C–C and C–N species. Based on the XPS results and the studies of Kakihana et al. and Sirijaraensre et al. on the dissolution of Ti compounds 24 , 25 , the reactions of the NH4OH/H2O2/H2O mixture with TiAlC to form hydroxylamine or other compounds can be assumed as follows. For the Ti–Al bond: $$\text{Ti–Al} + \text{H}_2\text{O}_2 \rightarrow \text{Ti(Al)–OH}\cdots\text{H}_2\text{O}_2 \rightarrow \text{Ti(Al)–OOH}\cdots\text{H}_2\text{O},$$ (1) $$\text{Ti(Al)–OOH}\cdots\text{H}_2\text{O} + \text{NH}_3 \rightarrow \text{Ti(Al)–OOH}\cdots\text{NH}_3 + \text{H}_2\text{O},$$ (2) $$\text{Ti(Al)–OOH}\cdots\text{NH}_3 \rightarrow \text{Ti(Al)–OH}\cdots\text{ONH}_3 \rightarrow \text{Ti(Al)–OH}\cdots\text{NH}_2\text{OH}.$$ (3) For the Ti–C or Al–C bond: $$\text{Ti(Al)–C} + \text{H}_2\text{O}_2 \rightarrow \text{Ti(Al)–C–OH}\cdots\text{H}_2\text{O}_2 \rightarrow \text{Ti(Al)–C(=O)OH}\cdots\text{H}_2\text{O},$$ (4) $$\text{Ti(Al)–C(=O)OH}\cdots\text{H}_2\text{O} + \text{NH}_3 \rightarrow \text{Ti(Al)–C(=O)OH}\cdots\text{NH}_2\text{OH},$$ (5) $$n[\text{Ti(Al)–C}] + n\text{NH}_3 + n\text{H}_2\text{O}_2 \rightarrow n\text{Ti(Al)–OH}\cdots\text{NH}_2\text{OH} + \text{C}_n\text{H}_{2n+1}\text{–NH}_x.$$ (6) These hydrogen-bonded structures, such as Ti(Al)–OH⋯NH2OH and Ti(Al)–COOH⋯NH2OH, are soluble in water. However, the structures containing C–C and C–N bonds (such as CnH2n+1–NHx) are insoluble in H2O and form a barrier that stops wet etching. Table S1 shows the thickness of the TiAlC film and of the surface (top) layer etched by the liquid mixture of NH4OH, H2O2, and H2O (condition L4). The film thickness decreases with increasing etch time, but the etch rate drops from 4.9 to 2.1 nm/min as the etch time is increased from 5 to 15 min, owing to a C layer that forms on the TiAlC surface and becomes a barrier to the reaction between the liquid and the TiAlC surface. Figure 3b presents the thickness of the TiAlC film as a function of etch time in the NH4OH/H2O2/H2O mixture. The etching stops at the Si–O–C interface between the TiAlC film and the Si substrate. Therefore, without removal of the C layer, it is difficult to control the etch rate of TiAlC in the NH4OH/H2O2/H2O mixture. Wet chemical etching of the TiAlC film thus suggests chemistries for the development of dry etching of TiAlC: in addition to chlorine-based etching, non-chlorine etching with elements such as H, N, and O, or their combinations such as NHx, OH, and NOx, can supply candidate reactive species for plasma etching of TiAlC. Dry etching of TiAlC by a remote FW-assisted vapor plasma Considering the volatile products needed for etching metal compounds, especially ternary (or higher) compounds such as TiAlC, the usual candidate plasma etchants are halogen-based.
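The decelerating wet-etch kinetics reported above can be checked with back-of-envelope arithmetic (this calculation is ours, not the paper's): the quoted values are time-averaged rates, so the instantaneous rate after the C barrier forms is much lower still.

```python
# Reported *average* etch rates in NH4OH/H2O2/H2O: 4.9 nm/min at 5 min,
# 2.1 nm/min at 15 min. Convert to total removed thickness at each time.
removed_5min = 4.9 * 5     # nm removed during the first 5 min
removed_15min = 2.1 * 15   # nm removed after 15 min in total

# Instantaneous rate over the 5-15 min interval, once the C barrier exists.
late_rate = (removed_15min - removed_5min) / (15 - 5)
print(f"first 5 min: {removed_5min:.1f} nm removed")
print(f"5-15 min interval: {late_rate:.2f} nm/min")
```

The implied rate over the 5–15 min interval is only about 0.7 nm/min, a sevenfold drop from the initial 4.9 nm/min, consistent with a carbon layer progressively blocking the liquid–TiAlC reaction.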
Fluorine-based plasmas form AlF3, a non-volatile product (boiling point (b.p.) above 1290 °C) 26 , 27 . Chlorine-, bromine-, or iodine-based plasmas can form volatile products such as TiCl4 (b.p. ~ 136 °C) and AlCl3 (b.p. ~ 183 °C) 27 , 28 . However, for highly selective removal between Ti compounds, halogen-based plasmas are of limited use because they form the same volatile product, e.g. TiCl4. Wet etching with the non-halogen mixture of ammonium hydroxide solution, peroxide solution, and deionized water showed promising results, with a higher etch rate than the halogen-based mixtures. This wet chemical etching of the TiAlC film suggests chemistries for the development of non-halogen dry etching of TiAlC, with etchants based on elements such as H, N, and O, or their combinations, producing reactive species such as NHx, H, OH, or NOx in the plasma. In this study, vapors were prepared based on the liquid mixtures used in wet etching and employed to generate reactive species for plasma etching. The remote FW-Ar/vapor plasma can generate various radicals from different liquid mixtures, which can be exploited for selective chemical reactions with the TiAlC surface. Vapor was flowed from the upstream region with Ar gas to generate the remote FW-Ar/vapor plasma along the discharge tube. The vapors were prepared without H2O2. The mixtures of Ar gas with H2O vapor, with 17% NH4OH vapor, and with 28% NH4OH vapor generate the Ar/H2O plasma (condition P1), the Ar/NH4OH-17 plasma (condition P2), and the Ar/NH4OH-28 plasma (condition P3), respectively (Table 3). Figure 4 presents the OES of the Ar/H2O plasma and the Ar/NH4OH plasmas at 100 W and 0.64 kPa. Different plasma colors for the different vapor mixtures can be observed in the inset photographs. The H β (486.1 nm) emission is detected in all spectra, whereas no O emission line (777.4 nm) is seen from the Ar/NH4OH-28 plasma.
Strong emission lines of OH, NH, and H α , compared with the Ar emission, are detected. The intensity ratio of the OH emission to the NH emission can be controlled through the choice of liquid mixture. In the Ar/NH4OH plasmas, mainly NH3 from the NH4OH solution is injected into the chamber, owing to the lower boiling point of NH3 (− 33 °C) compared with H2O (100 °C) at 1 atm 29 . NH3 can be dissociated by energetic electron collisions to form radicals such as NH2 (571 nm), NH (336 nm), N2 (357 nm), H β (486.1 nm), and H α (656.3 nm) 30 , 31 , 32 , 33 . Figure 4 OES of plasmas generated with (a) Ar/H2O plasma, (b) Ar/NH4OH-17 plasma, and (c) Ar/NH4OH-28 plasma at 100 W and 0.64 kPa. Full size image At a power of 100 W, the inductive component is dominant (H-mode), and hence a high-density Ar/NH4OH plasma can be generated, producing mainly NH radicals (NH*) and H radicals (H*) as follows: $$\text{NH}_3 + \text{e}^- \rightarrow \text{NH}_2{}^* + \text{H}^*$$ (7) $$\text{NH}_2{}^* + \text{e}^- \rightarrow \text{NH}^* + \text{H}^*.$$ (8) The selective generation of reactive species such as OH, O, NH, and H radicals can thus be controlled by the choice of liquid mixture. The Ar/H2O plasma mainly produces OH, H, and O radicals, whereas the Ar/NH4OH plasmas mainly produce NH and H radicals. The Ar/NH4OH-17 plasma generates all of the species produced by both the Ar/H2O plasma and the Ar/NH4OH-28 plasma, i.e. NH, H, OH, and O radicals. Depending on the application, the generation of selected radicals can be controlled to modify the surface of metal compounds by oxidation, hydrogenation, nitridation, or methylamination. The modified layer can then be removed by heating or ion bombardment, or exchanged with other ligands for selective removal over other materials. The film thickness of TiAlC was changed by all of the FW-Ar/vapor plasmas, including the Ar/H2O plasma (condition P1) and the Ar/NH4OH plasmas (conditions P2 and P3).
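The OH/NH intensity ratio discussed above can be read off a measured spectrum by taking the peak height in a narrow window around each line. A hedged illustration follows: the spectrum is fabricated, and only the line positions are meaningful (NH at 336 nm is quoted in the text; OH near 309 nm is the standard A–X band assignment, an assumption here).

```python
import math

def peak_intensity(wavelengths, intensities, centre_nm, window_nm=2.0):
    """Peak height within +/- window_nm of a given line centre."""
    return max(i for w, i in zip(wavelengths, intensities)
               if abs(w - centre_nm) <= window_nm)

# Fabricated spectrum on a 200-900 nm grid: two Gaussian lines,
# OH (309 nm) three times stronger than NH (336 nm).
wl = [200 + 0.1 * k for k in range(7001)]
spectrum = [100 * math.exp(-((w - 309.0) / 0.5) ** 2)
            + 40 * math.exp(-((w - 336.0) / 0.5) ** 2) for w in wl]

ratio = peak_intensity(wl, spectrum, 309.0) / peak_intensity(wl, spectrum, 336.0)
print(f"OH/NH intensity ratio: {ratio:.2f}")
```

In practice a background subtraction would precede the peak-height readout; the window-maximum approach above is the simplest usable estimate.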
The etching of TiAlC by the FW-Ar/NH4OH plasmas was evaluated by ellipsometry. With the Ar/H2O plasma, after 10 min the film thickness increases by 1.61 nm owing to oxidation of the metal surface. With the Ar/NH4OH plasmas, the film thickness decreases by about 1.70 nm after 10 min of exposure for both the Ar/NH4OH-17 plasma (1.76 nm) and the Ar/NH4OH-28 plasma (1.67 nm), proving that etching of the TiAlC surface occurs on exposure to the FW-Ar/NH4OH plasma. There is no significant difference in etch rate between Ar/NH4OH-17 and Ar/NH4OH-28 because, in addition to the formation of volatile products, nitridation also occurs at the higher ammonium hydroxide concentration (Fig. 6) at longer treatment times. The experiment in Fig. 6 used only a 10 min treatment, during which nitridation was not yet severe, so the etch rates in the two cases are almost the same. These results prove that radicals (NH, H) from the Ar/NH4OH plasma can react with the TiAlC surface to form volatile products. Surface modification of TiAlC by the remote FW-assisted vapor plasma The surface modification of the TiAlC film before (pristine) and after exposure to the Ar/H2O plasma at 100 W and 0.64 kPa was analyzed by XPS, as shown in Fig. 5. With the Ar/H2O plasma (Fig. 5b), surface oxidation of TiAlC clearly occurs, with removal of the Ti–C and Al–C bonds. The effect of water vapor in water-containing atmospheric-pressure plasmas has been studied previously 34 , 35 , 36 , 37 , 38 . The Ar/H2O plasma jet has been reported for polymer etching and surface modification at atmospheric pressure, in which the OH radical plays the dominant role and is a more effective etchant than the H and O radicals 39 , 40 . In this study, the O atomic concentration on the TiAlC surface increases from 41% (pristine) to 51% (Ar/H2O plasma).
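The ellipsometry results above reduce to simple sign bookkeeping (our summary, not code from the paper): a positive thickness change over the 10 min exposure means growth (oxidation), a negative one means etching.

```python
# Thickness changes after 10 min of plasma exposure, from the text (nm).
thickness_change_nm = {
    "Ar/H2O (P1)": +1.61,      # oxide growth
    "Ar/NH4OH-17 (P2)": -1.76, # etching
    "Ar/NH4OH-28 (P3)": -1.67, # etching
}
exposure_min = 10

for plasma, d in thickness_change_nm.items():
    kind = "etched" if d < 0 else "grew"
    print(f"{plasma}: {kind} {abs(d):.2f} nm "
          f"({abs(d) / exposure_min:.3f} nm/min)")
```

The two NH4OH plasmas give nearly identical removal rates (~0.17–0.18 nm/min), two orders of magnitude below the wet-etch rate but with far finer control, which is the point of transferring the chemistry to the dry process.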
The FW-Ar/H2O plasma can produce a very high density of OH and O radicals; therefore, full oxidation (possibly including hydroxylation) of the TiAlC surface occurred: only Ti–O(H) and Al–O(H) bonds were detected, and the Ti–C and Al–C bonds were removed. Only the C of the TiAlC compound can be etched by the Ar/H2O plasma. Figure 5 XPS spectra obtained on the surface of the TiAlC film (a) before and (b) after exposure to Ar/H2O plasma at 100 W and 0.64 kPa. Full size image The surface modification of the TiAlC film after exposure to the Ar/NH4OH plasmas at 100 W and 0.64 kPa is shown in Fig. 6. Although the etching depths of the TiAlC samples after 10 min of exposure to the Ar/NH4OH-17 and Ar/NH4OH-28 plasmas are almost the same, the surface modification in the two cases is quite different. In the Ar/NH4OH-17 plasma, NH and H radicals are more dominant than OH and O radicals (Fig. 4). A modest amount of N (less than 2%) is detected on the TiAlC surface in the N 1s spectrum, with the formation of N–H and N–O bonds (Fig. 6a). The presence of small amounts of OH and O radicals plays an important role in hindering nitridation of the TiAlC surface. In contrast with the Ar/H2O plasma, the C 1s spectrum of the sample treated by the Ar/NH4OH plasma shows the same shape as that of the pristine sample, indicating that both Ti–C and Al–C bonds still exist on the TiAlC surface. After exposure to the Ar/NH4OH-17 plasma, etching occurred through the removal of volatile products, and the surface of TiAlC became almost the same as that of the pristine sample. Figure 6 XPS spectra obtained on the surface of the TiAlC film after exposure to (a) Ar/NH4OH-17 plasma and (b) Ar/NH4OH-28 plasma at 100 W and 0.64 kPa. Full size image In the Ar/NH4OH-28 plasma, nitridation and amination are detected, with a significant change in the shapes of the Ti 2p and N 1s peaks compared with the Ar/H2O and Ar/NH4OH-17 plasmas (Fig. 6b).
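The atomic concentrations quoted in these XPS analyses come from standard quantification: each peak area is divided by a relative sensitivity factor (RSF) and the result normalised over all elements. A hedged sketch follows; the peak areas and RSF values are fabricated for illustration, not taken from the paper or from the Avantage software.

```python
def atomic_concentration(areas, rsf):
    """Normalised atomic concentrations (at.%) from XPS peak areas and RSFs."""
    corrected = {el: a / rsf[el] for el, a in areas.items()}
    total = sum(corrected.values())
    return {el: 100.0 * v / total for el, v in corrected.items()}

# Fabricated peak areas and RSFs, for illustration only.
areas = {"Ti": 5200.0, "Al": 900.0, "C": 2600.0, "O": 6100.0, "N": 700.0}
rsf = {"Ti": 2.0, "Al": 0.54, "C": 1.0, "O": 2.93, "N": 1.8}

for el, pct in atomic_concentration(areas, rsf).items():
    print(f"{el}: {pct:.1f} at.%")
```

The same normalisation is why the O concentration exceeds 40% in all of the measurements reported here: air exposure before XPS adds an oxide contribution that is quantified alongside the film elements.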
The main species, NH and H radicals, were detected in the OES results (Fig. 4). More N penetrates into TiAlC with the Ar/NH4OH-28 plasma (6.24%) than with the Ar/NH4OH-17 plasma (1.83%). In the N 1s spectrum, N–H, C–N, Ti–N, and O–Ti–N bonds are detected at 400.18 eV, 397.99 eV, 396.92 eV, and 396.36 eV, respectively. Both the Ti–C and Al–C bonds of the samples treated by the Ar/NH4OH plasmas remain (Fig. 7a,b), while the Ti(Al)–O bonds are removed and replaced by Ti(Al)–N. The retention of C in metal–C bonds is important for developing the etching of metal carbides: this C in TiAlC can combine with N(H) and H from the Ar/NH4OH plasma to form Al–CH3, Ti(Al)–N–CH3, and Ti(Al)–O–CnH2n+1 bonds in volatile products, whereas it is impossible to form these bonds on a TiN surface with the Ar/NH4OH plasma. The TiAlC surface is very sensitive to oxidation, and the XPS measurements were conducted after exposing the samples to air, so the O atomic concentration exceeds 40% in all cases. Figure 7 Chemical bond (Ti 2p and Al 2p) percentages of the TiAlC surface before (pristine) and after exposure to Ar/H2O plasma and Ar/NH4OH plasmas at 100 W and 0.64 kPa. Full size image The FW-Ar/vapor plasma has thus been developed for the dry etching of the ternary material TiAlC. Surface modification is an indispensable step toward atomic layer etching. Discussion Surface modification and etching of TiAlC using FW-Ar/NH4OH plasma The NH (NH*) and H (H*) radicals generated by the FW-Ar/NH4OH plasma can modify metal carbides (MCs) to form volatile metal–nitrogen–hydrocarbons M(N(CH3)2)n or metal hydrocarbons M(CH3)n via hydrogenation and methylamination of the TiAlC surface. The formation of volatile products from the TiAlC surface can be explained in good agreement with Tamaki et al. on the formation of a titanium nitride layer 32 . NH radicals and H radicals play a key role in this reaction.
NH and H radicals from the plasma are first adsorbed on the TiAlC surface. The H radical can act as a catalyst, converting the NH radical into an active N atom (N act ) that is absorbed into the TiAlC surface to form a Ti(Al)–N–CH3 bond and release H2 gas. In addition, numerous active H radicals (H act ) can penetrate the sample surface, hydrogenating the TiAlC to form Al–CH3 groups. Hydrogenation and methylamination of the TiAlC surface can occur simultaneously in the presence of NH radicals (NH*) and H radicals (H*). $$\text{H}^* + \text{NH}^* + \text{Ti(Al)C} \rightarrow \text{Ti(Al)C} + \text{N}_{\text{act}} + \text{H}_{\text{act}} + \text{H}_2 \rightarrow \text{Ti(Al)–N–CH}_3 + \text{Al–CH}_3 + \text{Ti(Al)–N} + \text{H}_2.$$ (9) A small amount of OH from the plasma can hinder the nitridation by forming NOx gas. $$\text{H}^* + \text{NH}^* + \text{OH}^* + \text{Ti(Al)C} \rightarrow \text{Ti(Al)C} + \text{N}_{\text{act}} + \text{H}_{\text{act}} + \text{O} + \text{H}_2 \rightarrow \text{Ti(Al)–N–CH}_3 + \text{Al–CH}_3 + \text{N–O} + \text{H}_2.$$ (10) Overall, the formation of bonds such as Al–CH3, Ti(Al)–N–CH3, or Ti(Al)–O–CnH2n+1 (when the pristine sample is TiAlOC) shows the potential to produce volatile products such as Al(R or R′ or R′′)3 and Ti(R or R′ or R′′)4, in which R is –CH3, R′ is –N–CH3, and R′′ is –O–CnH2n+1. This modified layer could then be removed through the formation of these volatile products.
Proposed mechanism of dry etching of TiAlC A plasma etching process for metal carbides (MCs) such as TiAlC, TiC, and AlC using the FW-assisted plasma is demonstrated here (Fig. 8). Surface modification (hydrogenation and amination) by reactive radicals (NH and H) and the removal of volatile metalorganic products, such as Al(CH3)3, the dimer of Al(N(CH3)2)3, and Ti(N(CH3)2)4, together constitute the designed plasma etching of metal carbides. The reactive radicals can be produced by an ammonium hydroxide vapor plasma, an NH3 plasma, an H2 and NH3 mixture (H2/NH3) plasma, an N2 and NH3 mixture (N2/NH3) plasma, an N2 and H2 mixture (N2/H2) plasma, or an alcohol and ammonium hydroxide vapor mixture (CnH2n+1OH/NH4OH; n = 1–4) plasma. Surface modifications (hydrogenation and amination) of the TiAlC film were controlled by the active radicals produced from the FW-assisted non-halogen plasma. The chemical bonds formed between (1) metal and methyl groups (Al–CH3), (2) metal and dimethylamine groups (Ti(Al)–N(CH3)2), and (3) metal and alkoxy groups (Ti(Al)–OCnH2n+1) on the TiAlC surface play an important role in forming volatile products. Hence, the FW-assisted plasma, a rich radical source, is expected to be applicable to the atomic layer etching of metals and metal compounds in semiconductor device fabrication. Figure 8 Proposed mechanism for plasma etching of metal carbide (MC) using FW-Ar/NH4OH plasma. Full size image Conclusions A dry etching method for the ternary metal carbide TiAlC at the atomic level has been developed here by transferring the chemistry from wet etching to dry plasma etching using an FW-assisted non-halogen vapor plasma of ammonium hydroxide. Surface modifications of the TiAlC film were controlled by exposure to the active radicals, such as H, NH, and OH, produced by the FW-assisted plasma. A mechanism for the removal of metal carbides (TiAlC, TiC, AlC) using NH and H radicals is presented.
This FW-assisted plasma technique is expected to be available for highly selective and isotropic atomic layer etching of metal and metal compounds in semiconductor device fabrication. Data availability The datasets generated during and/or analyzed during the current study are available from the corresponding author on reasonable request.
In circuitry, etching is used to remove, by selective chemical reactions, the deformed layer created during the grinding and polishing of metal components. Now, a research group at Nagoya University in Japan has developed a new method called "wet-like plasma etching" that combines the selectivity of wet etching with the controllability of dry etching. This technique will make it possible to etch new and hard-to-etch materials, enabling higher performance and lower power consumption in the silicon semiconductor integrated circuits used in smartphones and data centers. The researchers' findings were published in the journal Scientific Reports. In the race to create the fastest and most energy-efficient circuits for computing devices, scientists are constantly looking for new transistor designs. Recently, there has been a shift from FinFET-type transistors, so called because the gate is raised above the silicon plane like a shark's fin, to gate-all-around transistors, in which the fin is replaced by a stack of horizontal sheets that look like a pagoda in a Buddhist temple. In this type, the sheets surround the channel to reduce leakage and increase the drive current. To fabricate these complex structures, metal carbides consisting of titanium (Ti) and aluminum (Al), such as TiC or TiAlC, are used as the metal gates where the voltage is applied. TiAlC is a ternary material with high hardness, high wear resistance, a high melting point, and excellent electrochemical performance. There are two ways to etch such materials: wet etching uses chemical solutions, while dry etching uses gases. Conventionally, TiAlC films used in semiconductor devices are etched by wet etching with hydrogen peroxide liquid mixtures. However, this process requires a long etching time to completely remove the target metals. It also runs the risk of chemically damaging the metal gate. Additionally, the liquids used can create surface tension at the atomic scale, destroying important features.
In order to develop an advanced etching process for the selective removal of TiAlC over other Ti compounds, non-halogen etching has been tested as a possible solution; currently, there is no non-halogen dry etching process for metal carbides made of these three elements. Now, a research group led by Professors Masaru Hori, Kenji Ishikawa, and Thi-Thuy-Nga Nguyen at the Center for Low-Temperature Plasma Science at Nagoya University, in collaboration with the companies Hitachi, Ltd. and Hitachi High-Tech Corp., has developed a new dry etching method for metal carbides. This method uses a floating wire-assisted vapor plasma of argon gas mixed with vapor sources of ammonium hydroxide-based mixtures at medium pressure. In the circuit, the plasma is generated by adding energy to the gas, and the additional floating wire enhances the generation of high-density plasma. Since this process generates active radicals of H, NH, and OH from the ammonium hydroxide (NH4OH) vapor, the treated surface of TiAlC can be removed after surface modification of the TiAlC film. "Atmospheric-pressure plasma and medium-pressure plasma techniques are used to reduce equipment size, fabrication cost, and energy consumption," explained Ishikawa. "It is difficult to etch off compounds involving multiple elements. Therefore, control of surface modification plays a key role. "Our group investigated the use of various radicals for surface modification and developed a method for generating such radicals using a floating wire plasma and a vapor supplement. This provides a rich radical source of NH, H, and OH, which react with the TiAlC surface to form volatile products and etch the TiAlC surface. "This floating wire-assisted plasma technique is expected to be available for highly selective etching of metals and metal compounds used in semiconductor device fabrication," Ishikawa continued.
"Metal carbides are promising gate electrode materials for advanced silicon semiconductors, and our joint research group was the first in the world to succeed in chemical dry etching of non-silicon semiconductor materials. "This achievement is important for the development of atomic layer-level etching technology, which has been difficult to achieve so far. Our results represent an important milestone, and a dramatic technological leap forward, in microfabrication technology."
10.1038/s41598-022-24949-1
Biology
Mother controls embryo's gene activity
Saartje Hontelez et al. Embryonic transcription is controlled by maternally defined chromatin state, Nature Communications (2015). DOI: 10.1038/NCOMMS10148 Journal information: Nature Communications
http://dx.doi.org/10.1038/NCOMMS10148
https://phys.org/news/2015-12-mother-embryo-gene.html
Abstract Histone-modifying enzymes are required for cell identity and lineage commitment, however little is known about the regulatory origins of the epigenome during embryonic development. Here we generate a comprehensive set of epigenome reference maps, which we use to determine the extent to which maternal factors shape chromatin state in Xenopus embryos. Using α-amanitin to inhibit zygotic transcription, we find that the majority of H3K4me3- and H3K27me3-enriched regions form a maternally defined epigenetic regulatory space with an underlying logic of hypomethylated islands. This maternal regulatory space extends to a substantial proportion of neurula stage-activated promoters. In contrast, p300 recruitment to distal regulatory regions requires embryonic transcription at most loci. The results show that H3K4me3 and H3K27me3 are part of a regulatory space that exerts an extended maternal control well into post-gastrulation development, and highlight the combinatorial action of maternal and zygotic factors through proximal and distal regulatory sequences. Introduction During early embryonic development cells differentiate, acquiring specific transcription and protein expression profiles. Histone modifications can control the activity of genes through regulatory elements in a cell-type-specific manner 1 , 2 , 3 , 4 . Recent advances have been made in the annotation of functional genomic elements of mammalian cells, Drosophila and Caenorhabditis through genome-wide profiling of chromatin marks 5 , 6 . Immediately after fertilization, the embryonic genome is transcriptionally silent, and zygotic genome activation (ZGA) occurs after a number of mitotic cycles 7 . In Drosophila and zebrafish ( Danio rerio ) ZGA starts after 8 and 9 mitotic cycles, respectively, in mammals transcription starts at the two-cell stage 8 , 9 , whereas in Xenopus this happens after the first 12 cleavages at the mid-blastula transition (MBT) 10 , 11 , 12 . 
Permissive H3K4me3 and repressive H3K27me3 histone modifications emerge during blastula and gastrula stages 13 , 14 , 15 , 16 . To date, little is known about the origin and specification of the epigenome in embryonic development of vertebrates, which is essential for understanding physiological cell lineage commitment and differentiation. To explore the developmental origins of epigenetic regulation we have generated epigenome reference maps during early development of Xenopus tropicalis embryos and assessed the need for embryonic transcription in their acquisition. We find a hierarchical appearance of histone modifications, with a priority for promoter marks which are deposited hours before transcription activation on regions with hypomethylated DNA. Surprisingly, the promoter H3K4me3 and the Polycomb H3K27me3 modifications are largely maternally defined (MaD), providing maternal epigenetic control of gene activation that extends well into neurula and tailbud stages. By contrast, p300 recruitment to distal regulatory elements is largely under the control of zygotic factors. Moreover, this maternal-proximal and zygotic-distal dichotomy of gene regulatory sequences also differentiates between early and late Wnt signalling target genes, suggesting that different levels of permissiveness are involved in temporal target gene selection. Results Progressive specification of chromatin state We have performed chromatin immunoprecipitation (ChIP) sequencing of eight histone modifications, RNA polymerase II (RNAPII) and the enhancer protein p300 at five stages of development: blastula (st. 9), gastrula (st. 10.5, 12.5), neurula (st. 16) and tailbud (st. 30). These experiments allow identification of enhancers (H3K4me1, p300) 17 , 18 , 19 , 20 , promoters (H3K4me3, H3K9ac) 14 , 21 , 22 , 23 , transcribed regions (H3K36me3, RNAPII) 22 and repressed and heterochromatic domains (H3K27me3, H3K9me2, H3K9me3 and H4K20me3) 1 , 14 , 24 , 25 . In addition we generated pre-MBT (st. 
8) maps for three histone modifications (H3K4me3, H3K9ac and H3K27me3) and single-base resolution DNA methylome maps using whole-genome bisulfite sequencing of blastula and gastrula (st. 9 and 10.5) embryos ( Fig. 1 ; Supplementary Fig. 1 ). Our data set consists of 2.7 billion aligned sequence reads representing the most comprehensive set of epigenome reference maps of vertebrate embryos to date. Using a Hidden Markov Model approach 26 we have identified 19 chromatin states based on co-occurring ChIP signals ( Fig. 2a ). This analysis identifies combinations of ChIP signals at specific genomic sequences without distinguishing between overlapping histone modifications that result from regional or cell-type specificity and co-occurrence in the same cells 14 . Seven main groups were recognized, namely (i) Polycomb (H3K27me3, deposited by Polycomb Repressive Complex 2 (PRC2)), (ii) poised enhancers, (iii) p300-bound enhancers, (iv) transcribed regions, (v) promoters, (vi) heterochromatin and (vii) unmodified regions ( Fig. 2a ; Supplementary Fig. 2 ). Alluvial plots of state coverage per stage show that all states increase in coverage during development, except for the unmodified state ( Fig. 2b ; Supplementary Fig. 2a ). Unmodified regions decrease in coverage during development, however, even at tailbud stage 67% of the total epigenome remains naive for the modifications and bound proteins in our data set ( Supplementary Fig. 2b ). Promoter coverage remains relatively constant during development from blastula to tailbud stages, in contrast to the Polycomb state which increases in coverage during gastrulation. P300-bound enhancers are highly dynamic during development ( Fig. 2b ). Global enrichment levels of modified regions show similar dynamics, and reveal a priority for promoter marking at or before the blastula stage, followed by enhancer activation and heterochromatic repression during late blastula and gastrulation stages ( Supplementary Fig. 3a,b ). 
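The chromatin-state analysis above rests on identifying the most prevalent combinations of co-occurring ChIP signals. As a toy illustration of that idea (this is not the authors' hidden Markov model pipeline, and all data below are fabricated), one can binarise each mark per genomic bin and tabulate the co-occurrence patterns:

```python
from collections import Counter

# Marks per bin: 1 = enriched, 0 = not (fabricated toy data).
marks = ("H3K4me3", "H3K9ac", "H3K27me3", "p300")
bins = [
    (1, 1, 0, 0),  # promoter-like: permissive marks together
    (1, 1, 0, 0),
    (0, 0, 1, 0),  # Polycomb-like: H3K27me3 alone
    (0, 0, 0, 1),  # enhancer-like: p300 bound
    (0, 0, 0, 0),  # unmodified
    (0, 0, 0, 0),
]

states = Counter(bins)
for combo, n in states.most_common():
    label = "+".join(m for m, f in zip(marks, combo) if f) or "unmodified"
    print(f"{label}: {n} bins")
```

An HMM-based segmentation adds transition probabilities between neighbouring bins on top of these emission patterns, so that state calls are spatially smoothed rather than made bin by bin.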
A detailed time course between fertilization and early gastrulation shows that both H3K4me3 and H3K9ac emerge hours before the start of embryonic transcription ( Supplementary Fig. 3c ). We and others have previously reported that H3K4me3 is acquired during blastula stages 14 . Indeed, H3K4me3 and H3K9ac levels increase strongly before the MBT, well before embryonic transcription starts. This however raises the question to what extent histone modifications are regulated by maternal or embryonic factors. Figure 1: Reference epigenome maps of Xenopus tropicalis development. ( a ) Genome-wide profiles were generated for stages 8 and 9 (blastula, before and after MBT), 10.5 and 12.5 (gastrula), 16 (neurula) and 30 (tailbud). Adapted from Tan, M.H. et al . Genome Res. 23 , 201–216 (2013), under a Creative Commons License (Attribution-NonCommercial 3.0 Unported License), as described at . ( b ) Gata2 locus with late gastrula (stage 10.5) methylC-seq, ChIP-seq enrichment of histone modifications, RNAPII and p300 (cf. Supplementary Figs 1 and 2 ). Full size image Figure 2: Chromatin state dynamics. ( a ) Emission states (same for all developmental stages) of the hidden Markov model, identifying the 19 most prevalent combinations of histone modifications and bound proteins. From top to bottom: Polycomb (red), Poised enhancers and promoters (blue), Active Enhancers (gold), Transcribed (dark magenta), Promoter (green), Heterochromatin (purple) and unmodified (grey). ( b ) Alluvial plots of chromatin state coverage during development. Each plot shows the transitions (to and from the highlighted group of chromatin states) across developmental stages (stages 9–30). The height represents the base pair coverage of the chromatin state relative to the modified genome. The ‘modified genome’ has a chromatin state other than unmodified in any of the stages 9–30. From top to bottom left: promoters (green), poised (blue), p300-bound enhancers (gold). 
From top to bottom right: transcribed (dark magenta), Polycomb (red) and heterochromatin (purple). Line plots: chromatin state coverage per stage as a percentage of the modified genome. Maternal and zygotic epigenetic regulation To determine the maternal and zygotic contributions to chromatin state, we used α-amanitin to block embryonic transcription (Fig. 3a). α-Amanitin blocks the translocation of RNA polymerase II (RNAPII) on DNA, thereby preventing transcript elongation 27. It is therefore expected that injection of α-amanitin into embryos will stall RNAPII, immobilizing it on DNA after its recruitment to pre-initiation complexes. Indeed, both RNAPII elongation and embryonic transcription were effectively blocked in α-amanitin-injected embryos (Fig. 3b,c; Supplementary Fig. 4a). New transcription is necessary for gastrulation 11, 28, 29, but α-amanitin-injected embryos survive to the equivalent of stage 11 control embryos. ChIP sequencing of replicates of α-amanitin-injected and control embryos (stage 11) revealed that the majority of H3K4me3 (86%) and H3K27me3 (90%) regions consistently carry these modifications independently of embryonic transcription (Fig. 3d; Supplementary Fig. 4b,c). This is especially surprising given the temporal hierarchy of H3K27me3 and H3K4me3, and the relatively late acquisition of H3K27me3 (Fig. 2b). By contrast, only 15% of the p300-bound regions recruit p300 independently of active transcription (Fig. 3d). This suggests that the promoter-permissive H3K4me3 mark and the Polycomb-repressive H3K27me3 mark are mostly controlled by maternal factors (maternally defined, MaD), whereas p300 binding to regulatory regions is largely zygotically defined (ZyD). Regions with MaD H3K4me3 and H3K27me3 acquire these modifications more robustly and also earlier during development compared with ZyD regions (Supplementary Fig. 4d).
By contrast, ZyD p300-bound regions show more robust p300 recruitment during gastrulation compared with MaD p300 regions. These data show a pervasive maternal influence on the developmental acquisition of key histone modifications. Figure 3: Developmental acquisition of chromatin states. (a) Inhibition of embryonic transcription with α-amanitin, adapted from Tan, M.H. et al. Genome Res. 23, 201–216 (2013), under a Creative Commons License (Attribution-NonCommercial 3.0 Unported License). (b) RNAPII on the TSS of genes in control and α-amanitin-injected embryos (stage 11). (c) Box plots showing RNA expression levels (RPKM) of maternal and embryonic transcribed genes in control and α-amanitin-injected embryos (stage 11). Box: 25th (bottom), 50th (internal band), 75th (top) percentiles. Whiskers: 1.5 × interquartile range of the lower and upper quartiles, respectively. (d) ChIP-sequencing on chromatin of α-amanitin-injected and control embryos reveals maternal and zygotic origins of H3K4me3, H3K27me3 or p300 binding. Data from two biological replicates, see Supplementary Fig. 4. DNA methylation logic of maternal control Trimethylation of H3K4 and H3K27 has been associated with CpG density and a lack of DNA methylation. The Set1 and related MLL complexes are responsible for H3K4me3 (ref. 10). Set1 is recruited to hypomethylated CpG domains via the Cxxc1 protein (Cfp1) 30, 31, 32. In the absence of H3K4me3, PRC2 binding to hypomethylated CpGs results in H3K27me3 and inhibition of gene activation 13, 33. Using our whole-genome bisulfite sequencing data we determined that MaD H3K4me3 promoters are predominantly hypomethylated (Fig. 4a; Supplementary Fig. 5a; Supplementary Data 1). Conversely, promoters decorated with ZyD H3K4me3 are almost exclusively highly methylated.
Demethylation of ZyD promoters was not detected, and methylation levels of MaD and ZyD regions were similar in stage 9 and stage 10.5 (Supplementary Fig. 5a,b). In addition, H3K4me3 often extends asymmetrically from promoters into gene bodies (+1–2 kb from the transcription start site (TSS); Supplementary Fig. 5c), likely representing the second and third nucleosomes that are trimethylated via RNAPII-recruited Set1 in actively transcribed genes 34. Concordantly, α-amanitin reduces H3K4me3 at downstream positions. Interestingly, we also find poised enhancers that gain H3K4me3 in α-amanitin-injected embryos and which exhibit intermediate to high levels of DNA methylation (Supplementary Fig. 5d,e). Figure 4: DNA methylation logic of maternally versus zygotically defined H3K4me3 and H3K27me3. (a) CpG density and methylation at stage 9 of promoters (H3K4me3: ±100 bp from TSS; H3K27me3: ±2.5 kb from TSS) that contain a zygotically defined (ZyD, lost in α-amanitin treated embryos, red) or maternally defined (MaD, maintained in α-amanitin treated embryos, grey) peak for H3K4me3 (left) or H3K27me3 (right) after inhibition of embryonic transcription. The size of the dot indicates the relative RPKM of the histone modification (background corrected). (b) Hoxd (MaD) and nodal1, -2 (ZyD) loci with stage 9 methylC-seq, H3K4me3 and H3K27me3 in control and α-amanitin-injected embryos. (c) Developmental profiles of H3K4me3 and H3K27me3 (median background corrected RPKM) at genes without detectable maternal mRNA do correlate with activation for methylated promoters (lower panels) but not for hypomethylated CpG island promoters (upper panels). The majority of promoters with ZyD H3K27me3 show intermediate to high levels of DNA methylation (Fig. 4a; Supplementary Fig. 5a; Supplementary Data 1). Some of the MaD H3K27me3 regions are methylated, but the highly enriched H3K27me3 domains (larger dots) are almost exclusively both maternally defined and hypomethylated.
This is illustrated by the hoxd cluster which harbours a large hypomethylated domain with MaD H3K4me3 and H3K27me3 ( Fig. 4b ). There are also examples of reciprocal changes of H3K4 and H3K27 methylation, for example at the hypermethylated promoters of nodal1 and nodal2 . ZyD p300-bound regions are generally hypermethylated, whereas MaD p300-bound regions show a variable degree of DNA methylation ( Supplementary Fig. 5e ). However, promoters that overlap with MaD p300 peaks are hypomethylated in 77% of the cases, whereas 96% of the promoters that are associated with ZyD p300 peaks are hypermethylated ( Supplementary Fig. 5f ), showing that p300-recruiting hypomethylated promoters tend to be under complete maternal control, for both H3K4 methylation and p300 recruitment. To further explore the relationships between DNA methylation, histone modifications and developmental activation of transcription we determined correlations with different measures of gene activity such as RNA-seq and ChIP-seq of RNAPII and H3K36me3 ( Supplementary Fig. 6 ). We find that H3K36me3 and RNAPII in gene bodies correlate well with each other but less with transcript levels (RNA-seq), presumably due to the effects of RNA stability. A much lower correlation was found between either measure of gene activity and the promoter marks H3K4me3 and H3K9ac, especially at early stages. In part this may be caused by time delays of transcriptional activation relative to acquisition of permissive histone modifications 14 , 15 . It raises the question to what extent a lack of DNA methylation at promoters, which is associated with MaD H3K4me3, uncouples promoter marking and transcriptional activation. Therefore, we grouped transcribed genes without detectable maternal messenger RNA 35 based on the stage of maximum expression and DNA methylation ( Fig. 4c ). 
We find that developmentally activated promoters with hypomethylated CpG islands are trimethylated at H3K4 or H3K27 early on, irrespective of the time of transcriptional activation. By contrast, methylated promoters show a much closer relation between H3K4me3 and gene expression. Although H3K4me3 is known to stabilize the transcription initiation factor Taf3 (a subunit of TFIID) and can also interact with the chromatin remodeller Chd1 (refs 36 , 37 , 38 ), hypomethylated promoters gain H3K4me3 autonomously with their hypomethylated CpG island status, independent of embryonic transcription. ZyD p300-bound domains shape enhancer clusters P300 can be recruited by transcription factors that bind to regulatory elements. We therefore modelled transcription factor motif contributions to p300 binding across multiple developmental stages (see Methods). The results predict specific transcription factors to recruit p300 in a stage-specific manner ( Fig. 5a ). Clustering of MaD and ZyD p300-bound regions with H3K4me3, H3K4me1 and RNAPII data revealed that ZyD p300 is recruited to distal regulatory sequences that lose both p300 and RNAPII binding in the presence of α-amanitin, whereas MaD p300 binding mostly includes promoter-proximal regions that are H3K4me3-decorated and recruit RNAPII in the presence of α-amanitin but without elongating ( Fig. 5b ). Indeed, MaD p300 regions are enriched for promoter-related motifs ( Supplementary Fig. 7 ). Although some ZyD p300-bound regions overlap with annotated transcription start sites ( Supplementary Fig. 5f ), most of these sequences are decorated with H3K4me1 in the absence of H3K4me3, suggesting they correspond to distal regulatory sequences ( Fig. 5b ). Both MaD- and ZyD p300-bound regulatory regions recruit embryonically regulated transcription factors such as Otx2, Gsc, Smad2/3, Foxh1, T (Xbra), Vegt and Eomes ( Supplementary Fig. 8 ) 39 , 40 , 41 , suggesting that multiple transcription factors contribute to p300 recruitment. 
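The motif-contribution modelling referred to here (detailed under Methods as the ISMARA approach) amounts to explaining p300 peak signal as a linear function of motif occurrence counts. A minimal unregularized least-squares sketch, with invented motif names, counts and RPKM values (ISMARA itself applies regularization and reports z-scored activities):

```python
import numpy as np

# Rows: p300 peaks; columns: motif occurrence counts per peak (hypothetical).
N = np.array([[2, 0],   # peak 1: 2x motifA, 0x motifB
              [0, 3],   # peak 2
              [1, 1],   # peak 3
              [2, 2]],  # peak 4
             dtype=float)
# Observed p300 enrichment (RPKM) per peak at one stage (hypothetical).
rpkm = np.array([4.0, 3.0, 3.0, 6.0])

# Infer motif activities A from rpkm ~ N @ A; plain least squares
# stands in for the regularized fit used by ISMARA.
activities, *_ = np.linalg.lstsq(N, rpkm, rcond=None)
print(dict(zip(["motifA", "motifB"], activities)))
```

Fitting the same design matrix against the p300 RPKM of each developmental stage yields the stage-resolved activity profiles of the kind plotted in Fig. 5a.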
Figure 5: Zygotically controlled p300 recruitment shapes enhancer cluster (EC) domains. (a) Modelled transcription factor motif activity contributions to p300 enrichment (see Methods). Activity reflects modelled contributions in p300 peak RPKM. (b) Heatmaps of MaD (upper panel) and ZyD (lower panel) p300 binding sites in α-amanitin treated and control embryos. (c) Developmental increase in genomic coverage of the gas1 EC by acquisition of p300 binding at enhancers. (d) EC dynamics of p300 enrichment (left panel), percentage of total EC region identified in each stage based on stage-dependent p300 binding (middle panel) and number of p300 peaks (per 12.5 kb) in ECs. (e) Percentage of zygotically defined (ZyD, lost in α-amanitin treated embryos) and maternally defined (MaD, maintained in α-amanitin treated embryos) p300 peaks that map to ECs. Asterisks indicate significantly more or fewer p300 peaks than expected by chance, calculated using a cumulative hypergeometric test: * P=6E−14; ** P=5E−29. (f) Percentage of ECs that have a MaD or ZyD seeding peak at stage 9. (g) Box plot showing the percentage of the EC region that is defined by MaD or ZyD p300 peaks. Box: 25th (bottom), 50th (internal band), 75th (top) percentiles. Whiskers: 1.5 × interquartile range of the lower and upper quartiles, respectively. Outliers are indicated with black dots. Large enhancer clusters (ECs) are thought to improve the stability of enhancer–promoter interactions, are associated with genes coding for developmental regulators, and have been implicated in cell differentiation 42, 43, 44. During development the cluster size of p300-bound enhancers grows dynamically by p300 seeding of individual enhancers (Fig. 5c,d, see Methods). Histone modifications and transcript levels of EC-associated genes are developmental stage specific, confirming the association of ECs with developmental genes (Supplementary Fig. 9; Supplementary Data 2).
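Enhancer-cluster calling of this kind is commonly implemented by stitching nearby peaks into larger domains. The sketch below assumes a fixed 12.5-kb stitching gap, matching the per-12.5-kb peak density reported in Fig. 5d; the actual EC calls follow ref. 43 and are made per stage before merging.

```python
def stitch_peaks(peaks, max_gap=12_500):
    """Merge p300 peaks (start, end) on one scaffold into clusters
    whenever the gap to the previous cluster is at most max_gap bp."""
    clusters = []
    for start, end in sorted(peaks):
        if clusters and start - clusters[-1][1] <= max_gap:
            # Close enough: extend the current cluster.
            clusters[-1][1] = max(clusters[-1][1], end)
        else:
            clusters.append([start, end])
    return [tuple(c) for c in clusters]

# Hypothetical peak coordinates (bp): the first three stitch into one cluster.
peaks = [(1_000, 2_000), (10_000, 11_000), (20_000, 21_000), (60_000, 61_000)]
print(stitch_peaks(peaks))
```

Running the same stitching on each stage's p300 peaks and taking the union of the resulting intervals gives the "total EC region" against which per-stage coverage percentages are computed.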
Analysis of the percentage of the total EC regions identified in each stage shows that most p300-bound ECs increase in genomic coverage during development by newly gained p300 binding at enhancers (EC clusters 1 and 2), whereas a group of early ECs (EC cluster 3) decreases in coverage as a result of the decreasing number of p300 peaks that contribute to the EC. We next examined how MaD and ZyD p300-bound regions contribute to p300-bound ECs. Approximately 50% of all ZyD p300-bound enhancers are located in ECs at stage 11. Among MaD p300-bound enhancers this fraction is much reduced (Fig. 5e). Similarly, a much larger fraction of ZyD p300-bound promoters is found in ECs compared with MaD p300-bound promoters. Up to 20% of the developmental ECs that are seeded at stage 9 have a MaD p300 seeding site (Fig. 5f). However, very few ECs can be called based on MaD p300 alone, showing that formation of p300-bound enhancer clusters requires embryonic transcription (Fig. 5g). Extended maternal epigenetic control We next examined the extent to which the MaD epigenome is maintained during development. Genes were grouped based on MaD or ZyD trimethylation of H3K4 and H3K27 in the promoter (Supplementary Data 3, see Methods). For p300 we counted the total number of MaD and ZyD peaks in the cis-regulatory landscapes of genes (Fig. 6a). Remarkably, MaD H3K4me3-regulated genes represent the majority of all H3K4me3-enriched genes in both early and late developmental stages. Even at neurula and tailbud stages only a small fraction of the H3K4me3-decorated genes are ZyD. Similarly, maternal control of H3K27me3 also extends late into development, albeit to a smaller degree. After gastrulation, the number of MaD H3K27me3 regulated genes slightly decreases, whereas the number of ZyD genes increases. However, even at the neurula stage more than 50% of the Polycomb (PRC2)-regulated genes are under MaD H3K27me3 control.
By contrast, p300 in cis-regulatory regions of genes is almost exclusively ZyD in all stages (Fig. 6a). Figure 6: Maternal epigenetic control extends beyond gastrulation. Maternally defined (MaD) peaks emerge at or before stage 11 independent of embryonic transcription. Zygotically defined (ZyD) peaks appear before stage 11 and are lost in α-amanitin treated embryos, or emerge at or after stage 12. Not determined (ND) peaks are not consistently detected in replicates 1 and 2 and generally have low enrichment values. (a) Total number of genes with a MaD or ZyD peak in their promoter (H3K4me3 and H3K27me3), or total number of MaD and ZyD peaks per GREAT region (p300). ND peaks are not shown. (b) MaD and ZyD regulation of gastrula and neurula expressed genes. The pie charts show the number of genes with a MaD or ZyD peak in their promoter (H3K4me3 and H3K27me3) or the number of MaD, ZyD and ND peaks per cis-regulatory region (p300). The H3K27me3 and p300 pie charts represent: gastrula expressed genes with a MaD (far left) or ZyD (middle left) H3K4me3 peak; neurula expressed genes with a MaD (middle right) or ZyD (far right) H3K4me3 peak. Many genes may maintain MaD H3K4me3 because they are constitutively expressed throughout development. We therefore analysed the regulation of genes that are exclusively embryonically transcribed. We find that 487 of 983 (49.5%) genes that are expressed between blastula and tailbud stages, but not in oocytes or before the MBT, feature a MaD H3K4me3 promoter (Supplementary Fig. 10a). Most of the MaD H3K4me3 genes that are modified by PRC2 exhibit MaD H3K27me3. When separating embryonic transcripts based on developmental activation, we find MaD H3K4me3 for 58% of the gastrula genes and up to 74% of the neurula expressed genes (Fig. 6b; Supplementary Fig. 10b). In most cases MaD H3K4me3-regulated genes also have MaD H3K27me3 control.
This indicates an important role for the MaD epigenome in the regulation of embryonic transcripts. To explore the distinctions between expression inside and outside the maternal regulatory space, we analysed Wnt signalling targets. Early Wnt/beta-catenin signalling serves to specify dorsal fates following fertilization, leading to organizer gene expression. This has been shown to depend on Prmt2-mediated promoter poising before the MBT 45 . Indeed, we find that seven of eight early Wnt/beta-catenin targets have a hypomethylated island promoter marked with MaD H3K4me3 ( Fig. 7a ; Supplementary Fig. 10c ). Wnt signalling also plays an important role after the MBT, when it ventralises and patterns mesoderm. The majority of these later targets turn out to have a methylated promoter with ZyD H3K4me3. Notably, these ZyD H3K4me3 late Wnt targets are associated with high binding of p300 in their locus; many of the p300 binding events happen at distal regulatory regions. In contrast, MaD H3K4me3 Wnt targets have less p300 binding but are marked with H3K27me3 ( Fig. 7a,b ). These results illustrate the dichotomy in proximal and distal regulation that is associated with transcriptional activation of maternal and zygotic Wnt target genes, which is paradigmatic of the distinctive maternal and zygotic epigenetic programs that are orchestrated by DNA methylation and exert a long-lasting influence in development ( Fig. 8 ). Figure 7: Maternal and zygotic regulatory space separates early and late Wnt target genes. ( a ) The number of genes with MaD or ZyD H3K4me3 (pie charts) and relative RPKM (dot plots, horizontal line: median) of p300 in cis -regulatory regions of genes and H3K27me3 on promoters (±2.5 kb from TSS) at different developmental stages that have maternally or zygotically defined H3K4me3 at the promoter. Early targets sia1 and sia2 are not included, these genes lose H3K4me3 after stage 9 and cannot be assigned to MaD or ZyD space based on our stage 11 α-amanitin data. 
H3K4me3 on these genes is acquired at stage 8, before embryonic transcription. (b) Browser views of the early Wnt target nog (noggin) and the late Wnt targets gbx2.1 and gbx2.2 with ChIP-seq enrichment of H3K4me3, p300 and RNAPII on control and α-amanitin-injected embryos and RNAPII on stages 9 and 10.5. Figure 8: Model of maternal and zygotic regulatory space. This shows the segregation of maternal regulatory space, which contains hypomethylated promoters that are mainly controlled by maternal factors, and zygotic regulatory space, which includes methylated promoters and enhancers that are under zygotic control. Most p300-bound enhancers are in zygotic space; however, they can regulate promoters in both maternal and zygotic space, crossing the regulatory space border. This may contribute to varying degrees of permissiveness to transcriptional activation. Maternal regulatory space extends well into neurula and tailbud stages and includes many embryonic genes which are activated at specific stages of development. Zygotic regulatory space requires zygotic transcription; it is established from the mid-blastula stage onwards but increases in relative contribution during development. Discussion The H3K4me3 modification poises promoters for transcription initiation by stabilizing Taf3/TFIID binding 36, 37. Promoter H3K4 methylation based on an underlying DNA methylation logic, driven by maternal factors at the blastula stage, sets the stage for a default programme of gene expression. Most constitutively expressed housekeeping genes are within this maternal regulatory space, as well as a subset of developmentally regulated genes. Remarkably, many late expressed genes have hypomethylated promoters and are already poised for activation by H3K4me3 during early blastula stages. H3K4me3 is not sufficient for gene transcription, and additional embryonic factors are required for activation in many cases.
Genes with MaD H3K4me3 generally have fewer p300-bound enhancers associated with them, suggesting they are regulated by promoter-proximal elements. This further underscores the permissive nature of this regulation, as opposed to zygotically regulated events at both promoters (H3K4me3) and enhancers (recruitment of p300). The H3K27me3 modification is gradually acquired between blastula and gastrula stages on spatially regulated genes, repressing lineage-specific genes in other lineages 13 , 14 . The acquisition of this modification in the absence of transcription indicates that it is uncoupled from the inductive events of the early embryo, suggesting a default maternal response to a lack of transcriptional activation. The results indicate that maternal factors set permissions and time-dependent constraints on a subset of genes with reduced CpG methylation at their promoter. These permissions and constraints are likely to channel embryonic cell fates into a limited number of directions by controlling hierarchical developmental progression by master regulators. Previously we observed that DNA methylation does not lead to transcriptional repression in early embryos, whereas it does in oocytes and late embryos 46 . The observations described here suggest a new role of DNA methylation in defining a maternal-embryonic programme of gene expression. In zebrafish, the maternal methylome is reprogrammed between fertilization and ZGA, to match the paternal methylome. This also occurs in maternal haploid fish, and appears to align with CG content 47 , 48 , suggesting an intrinsic maternal mechanism that sets the stage for the MaD epigenome. Gene expression outside maternal regulatory space could be mediated by p300-associated enhancers, most of which require new transcription for recruitment of p300. Promoter and enhancer activation in the ZyD regulatory space likely involves binding of specific factors. 
Indeed, we find that both MaD- and ZyD p300-bound regulatory regions recruit embryonically regulated transcription factors. Enhancers often contain binding sites for many different proteins, which can play different roles in opening up chromatin, recruitment of co-activators and establishing looping interactions with promoters. Future experiments will shed light on the maternal–zygotic hierarchy and the regulatory transitions underlying these events and the roles of maternal and zygotic pioneer factors. We find that ZyD p300-bound enhancers shape enhancer clusters. These form dense hubs of regulatory activity, and EC p300 binding is generally correlated with the expression of the associated genes. The work reported here suggests that recruitment of p300 to ‘seeding’ enhancers precedes establishing cluster-wide activity of the local enhancer landscape. Future work will also need to address to which extent seeding causes relaxation and opening of the local chromatin and activity of neighbouring enhancers. Key proteins of the molecular machinery involved in DNA methylation (Dnmt3a, Tet2), H3K4me3 (Mll1-4, Kdm5b/c), H3K27me3 (Ezh2, Eed, Kdm6a/b) and enhancer histone acetylation (p300) are not only highly conserved between species but also frequently mutated in cancer 49 , 50 , 51 . Moreover cancer-specific hypermethylated regions tend to correspond to Polycomb-regulated loci in embryonic stem cells and DNA methylation may restrict H3K27 methylation globally 52 , 53 . In addition, the sequence signatures of hypomethylated regions that acquire H3K4me3 or H3K27me3 are conserved between fish, frogs and humans 13 . These observations suggest that the molecular mechanisms that orchestrate the maternal and zygotic regulatory space are conserved. 
One key difference between mammals and non-mammalian vertebrates is the specification of extra-embryonic lineages between zygotic genome activation and the blastocyst stage in mammals 10 , so it is likely that the way this plays out for specific genes differs between species. In summary, our results provide an unprecedented view of the far reach of maternal factors in zygotic life through chromatin state. The dichotomy of maternal promoter-based and embryonic enhancer regulation demarcates an epigenetic maternal-to-zygotic transition that is maternal permissive to the expression of some embryonic genes and restrictive to others. This highlights the combinatorial interplay of maternal and zygotic factors through distinct mechanisms. Methods Animal procedures X. tropicalis embryos were obtained by in vitro fertilization, dejellied in 3% cysteine and collected at the indicated stage. Fertilized eggs were injected with 2.3 nl of 2.67 ng μl −1 α-amanitin and developed until the control embryos reached mid-gastrulation (stage 11). Animal use was conducted under the DEC permission (Dutch Animal Experimentation Committee) RU-DEC 2012–116 and 2014–122 to G.J.C.V. ChIP sequencing and RNA sequencing Chromatin for ChIP was prepared as previously described 54 , 55 , with minor modifications. Antibody was incubated with chromatin overnight, followed by incubation with Dynabeads Protein G for 1 h. The following antibodies were used: anti-H3K4me1 (Abcam ab8895, 1 μg per 15 embryo equivalents (Eeq)), anti-H3K4me3 (Abcam ab8580, 1 μg per 15 Eeq), anti-H3K9ac (Upstate/Millipore 06-942, 1 μg per 15 Eeq), anti-H3K36me3 (Abcam ab9050, 1 μg per 15 Eeq), anti-H3K27me3 (Upstate/Millipore 07-449, 1 μg per 15 Eeq), anti-H3K9me2 (Diagenode C15410060, 1 μg per 15 Eeq), anti-H3K9me3 (Abcam ab8898, 2 μg per 15 Eeq), anti-H4K20me3 (Abcam ab9053, 2 μg per 15 Eeq), anti-p300 (Santa Cruz sc-585, 1 μg per 15 Eeq) and anti-RNAPII (Diagenode C15200004, 1 μg per 15 Eeq). 
For all ChIP-seq samples of the epigenome reference maps and the RNAPII ChIP-seq samples of the α-amanitin experiments, three biological replicates of different chromatin isolations of 45 embryos were pooled. Two biological replicates for H3K4me3 (α-amanitin injected: 90 and 56 Eeq; control: 45 and 67 Eeq), H3K27me3 (α-amanitin injected: 90 and 180 Eeq; control: 45 and 202 Eeq) and p300 (α-amanitin injected: 112 and 56 Eeq; control: 112 and 67 Eeq) ChIP-seq samples of the α-amanitin experiments were generated. For the RNA-seq samples of the α-amanitin experiments, RNA from five embryos from one biological replicate was isolated and depleted of ribosomal RNA as previously described 35. Samples were subjected to a qPCR quality check pre- and post-preparation. Libraries were prepared with the Kapa Hyper Prep kit (Kapa Biosystems), and sequencing was done on the Illumina HiSeq2000 platform. Reads were mapped to the reference X. tropicalis genome JGI7.1, using STAR (RNA-seq) or BWA (ChIP-seq), allowing one mismatch. MethylC-seq Genomic DNA from Xenopus embryos stages 9 and 10.5 was obtained as described before 56. MethylC-seq library generation was performed as described previously 57. Library amplification was performed with KAPA HiFi HotStart Uracil+ DNA polymerase (Kapa Biosystems, Woburn, MA, USA), using six cycles of amplification. Single-read MethylC-seq libraries were processed and aligned as described previously 58. Quantitative PCR PCR reactions were performed on a CFX96 Touch Real-Time PCR Detection System (BioRad) using iQ Custom SYBR Green Supermix (BioRad). We performed RNA expression qPCR (RT–qPCR) and ChIP-qPCR for H3K4me3 and H3K9ac on promoters of odc1, eef1a1o, rnf146, tor1a, zic1, cdc14b, eomes, xrcc1, drosha, gdf3, t, tbx2, fastkd3, gs17 (see Supplementary Methods for primer sequences). ChIP-qPCR enrichment over background was calculated using the average of five negative loci. Detection of enriched regions We used MACS2 (ref.
59) with standard settings and a q-value of 0.05. Fragment size was determined using phantompeakqualtools 60. Broad settings (--BROAD) were used for H3K4me1, H3K36me3, H3K27me3, H3K9me2, H3K9me3, H4K20me3 and RNAPII. Broad and narrow peaks were merged for H3K4me3. For H3K9ac, narrow peaks were used. For p300, broad peaks were used in the ChromHMM analysis; narrow p300 peaks were used for the super-enhancer and MaD versus ZyD analyses. All peaks were called relative to an input control track. Peaks that showed at least 75% overlap with 1 kb regions that have more than 65 input reads, and peaks that have a ChIP-seq RPKM higher than the 95th percentile of random background regions, were excluded from further analysis. Only scaffolds 1–10 (the chromosome-sized scaffolds) were included in the analysis. Relative RPKM was calculated by dividing the ChIP-seq RPKM of the peaks by the ChIP-seq RPKM of the 95th percentile of random background regions. We used MAnorm 61 to determine differentially enriched regions in α-amanitin and control embryos. We used merged peak sets of replicate 1, replicate 2 and stage 10.5 to avoid bias caused by peak calling. Lost, gained and unchanged peaks per biological replicate were determined using the following parameters: lost peaks have M-values >1 and a −log10(P value) >5 (for H3K27me3) or 1.3 (for H3K4me3 and p300), and have a relative RPKM (background corrected) >1 in stage 11 control (no cut-off was used for st. 11 control of H3K27me3 rep. 1), stage 10.5 (H3K4me3 and p300) or stage 12 (H3K27me3); gained peaks have M-values < −1 and a −log10(P value) >5 (H3K27me3) or 1.3 (H3K4me3 and p300), and have a relative RPKM >1 in stage 11 α-amanitin, stage 10.5 (H3K4me3 and p300) or stage 12 (H3K27me3); unchanged peaks are neither gained nor lost and have a relative RPKM >1 in stage 11 control (no cut-off was used for st. 11 control of H3K27me3 rep. 1), stage 11 α-amanitin, stage 10.5 (H3K4me3 and p300) or stage 12 (H3K27me3).
Maintained peaks are peaks that are not lost and have a relative RPKM >1 in stage 11 control (no cut-off was used for st. 11 control of H3K27me3 rep. 1), stage 11 α-amanitin, stage 10.5 (H3K4me3 and p300) or stage 12 (H3K27me3). Common lost, gained, unbiased and maintained peaks are present in both replicates. All other peaks are considered not defined (ND). Replicate-specific peaks were only used for Supplementary Fig. 4b; for all other figures the common peaks were used. DNA methylation levels in Supplementary Fig. 4d were calculated using previously published Bio-CAP data 62. Bio-CAP RPKM levels of stage 11–12 were calculated for H3K4me3, H3K27me3 and p300 peaks, and corrected for input values. For Fig. 4c, genes were considered 'hypomethylated' if the Bio-CAP/Input ratio on the promoter (±1 kb from TSS) was >1. RNA expression analysis was performed as previously published 35. Embryonic transcripts were separated based on the clustering of maximum expression levels per stage in Fig. 3d of Paranjpe et al. 35 (cluster 1 = blastula, cluster 5 = gastrula, clusters 3 and 4 = neurula, clusters 2 and 6 = tailbud). Enhancer clusters were called as previously described 43. Enhancer clusters are called per stage and merged to determine the total enhancer cluster region. The percentage of the EC region is calculated relative to the total enhancer cluster region. MaD and ZyD classification MaD peaks emerge at or before stage 11 and are also acquired in α-amanitin treated embryos in both replicates. Zygotically defined (ZyD) peaks appear at or before stage 11 and are lost in α-amanitin treated embryos in both replicates, or emerge after stage 11. To classify MaD and ZyD H3K4me3 genes we ran MAnorm on promoters (±250 bp from TSS) only, using similar restrictions as described in 'Detection of enriched regions'.
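The replicate-wise peak classification described above can be condensed into a single decision rule. The sketch below uses hypothetical inputs: MAnorm supplies the M-value and P-value, and the stage-specific relative-RPKM cut-offs are simplified to the two stage-11 tracks.

```python
def classify_peak(m_value, neg_log10_p, rel_rpkm_control, rel_rpkm_amanitin,
                  m_cut=1.0, p_cut=1.3):
    """Label a peak as lost, gained, unchanged or not determined when
    comparing alpha-amanitin-injected to control embryos. Thresholds mirror
    those in Methods for H3K4me3/p300 (for H3K27me3, p_cut would be 5),
    with the multi-stage rel. RPKM conditions reduced to the stage 11 tracks."""
    if m_value > m_cut and neg_log10_p > p_cut and rel_rpkm_control > 1:
        return "lost"        # enriched in control only: zygotically defined
    if m_value < -m_cut and neg_log10_p > p_cut and rel_rpkm_amanitin > 1:
        return "gained"
    if rel_rpkm_control > 1 and rel_rpkm_amanitin > 1:
        return "unchanged"   # maintained without transcription: maternal
    return "not_determined"

print(classify_peak(2.1, 4.0, rel_rpkm_control=3.2, rel_rpkm_amanitin=0.4))
```

A peak is then called ZyD when it is "lost" in both replicates, and MaD when it is maintained in both; everything else falls into the ND bin.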
MaD H3K4me3 genes have a maintained promoter peak in both replicates; ZyD H3K4me3 genes have a lost promoter H3K4me3 peak in both α-amanitin replicates, or a peak that emerges after stage 11. MaD H3K27me3 genes have at least one MaD peak in the vicinity of their promoter (±2.5 kb from TSS). ZyD H3K27me3 genes have at least one ZyD peak in their promoter and lack a MaD peak. ND peaks or genes meet the criteria for neither MaD nor ZyD. For p300, the total number of ZyD and MaD peaks was counted in GREAT 63 regions of genes. ChromHMM analysis Chromatin states were discovered and characterized using ChromHMM v1.10 (ref. 26), an implementation of a hidden Markov model. As input we used the enriched regions from ten tracks (H3K27me3, H3K36me3, H3K4me1, H3K4me3, H3K9ac, H3K9me2, H3K9me3, H4K20me3, p300 and RNAPII) across five developmental stages. We trained and ran the model with a range of states, and determined the 19-emission-state model as the optimal number of states that could sufficiently capture the biological variation in co-occurrence of chromatin marks. We subsequently classified the states into seven main groups based on the presence and absence of specific chromatin marks. The segmentation files of the seven main groups per stage were binned in 200 base pair intervals. An m × n matrix was created, where m corresponds to the 200 base pair intervals and n to the developmental stages (9–30). Each element a(i,j) represents the chromatin state of interval i at stage j. For each chromatin group, occurrences were counted per stage n. The changes between stage n and n+1 were plotted using Sankey diagrams, a flow-diagram type closely related to alluvial diagrams. Motif analyses For the prediction of motif contributions to p300 recruitment (Fig. 5a) we implemented the ISMARA method developed by Balwierz et al.
This method (ref. 64) uses motif activity response analysis to determine the transcription factors that drive the observed changes in chromatin state across samples. As input we used the number of known motifs found per p300 binding site and the RPKM of the p300 peaks per developmental stage. The model infers the unknown motif activities from the equation in which the changes in signal levels are explained by the number of binding sites and the unknown motif activities. Motifs that showed a z-score activity >13 are shown in Fig. 5a. Enriched motifs (Supplementary Fig. 7) were detected with gimme diff, a tool from the GimmeMotifs package 65. The vertebrate motifs used in this script were obtained from CISBP 66 and clustered using gimme cluster from GimmeMotifs. The motifs are available online (ref. 67). Generation of plots and heatmaps All heatmaps were generated using fluff 13 or gplots. For all heatmap clustering, the Euclidean distance metric was used. Other plots were generated using ggplot2. Additional information Accession codes: The data generated for this work have been deposited in NCBI's Gene Expression Omnibus and are accessible through GEO Series accession number GSE67974. Visualization tracks are available at the authors' web site. How to cite this article: Hontelez, S. et al. Embryonic transcription is controlled by maternally defined chromatin state. Nat. Commun. 6:10148 doi: 10.1038/ncomms10148 (2015). Accession codes Accessions Gene Expression Omnibus GSE67974 Change history 07 July 2016 A correction has been published and is appended to both the HTML and PDF versions of this paper. The error has not been fixed in the paper.
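The motif-activity model described under ‘Motif analyses’ above can be sketched as a regularised least-squares solve: peak signal per stage is modelled as (motif counts per peak) × (unknown motif activities). The toy matrices below are stand-ins for the real motif counts and p300 RPKM values; ISMARA itself adds normalisation and an error model omitted here.

```python
# Toy motif-activity decomposition: peak signal S (peaks x stages) is
# modelled as N (motif counts per peak) times A (motif activities per
# stage), recovered with ridge-regularised least squares. All numbers are
# synthetic; ISMARA adds normalisation and error modelling not shown.
import numpy as np

rng = np.random.default_rng(0)
n_peaks, n_motifs, n_stages = 200, 5, 4
N = rng.poisson(1.0, size=(n_peaks, n_motifs)).astype(float)  # motif counts
A_true = rng.normal(size=(n_motifs, n_stages))                # hidden activities
S = N @ A_true + rng.normal(scale=0.1, size=(n_peaks, n_stages))

lam = 1e-3  # small ridge penalty keeps the normal equations well-conditioned
A_hat = np.linalg.solve(N.T @ N + lam * np.eye(n_motifs), N.T @ S)
print(A_hat.shape)  # one inferred activity per motif and stage: (5, 4)
```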
Frog embryos do not fully control which genes they can turn on or off at the beginning of their development – but their mother does, through specific proteins in the egg cell. Molecular developmental biologists at Radboud University publish these results in Nature Communications on December 18. Frog embryos receive not only half of their genetic information from their mother, but also instructions on how to use that DNA. That is what molecular developmental biologist Saartje Hontelez and her colleagues, led by Gert Jan Veenstra at the Radboud Institute for Molecular Life Sciences, have discovered. For a long time, scientists believed that gene regulation is not inheritable. Mother's tools How does the influence of the mother work exactly? Hontelez explains: 'The mother delivers all kinds of tools like proteins and RNA which control the gene regulation of the embryo. And because these tools are very specific, the embryo is limited in its possibilities. The mother sets strict boundaries regarding which genes can be turned on and which cannot. It is only after the twelfth cell division that the embryo can produce its own RNA and thus have some influence on the gene regulation. But this process is still largely controlled by the mother until much later in the embryonic development.' Surprise 'The amount of influence the mother has surprised us. We always thought that gene regulation is not inheritable and therefore expected that the embryo is in control of it. But when we shut down the embryo's RNA production, this had surprisingly little effect. That was not only the case for important genes during early embryonic development, but also much later, well into the stages of organogenesis. This shows very clearly that the mother is responsible for the early stages of embryonic development, and that her influence is still strongly present in later stages as well.'
Fast development For this publication, Hontelez and her colleagues investigated embryos of the western clawed frog (Xenopus tropicalis), because their embryonic development occurs very rapidly: there are only six hours between fertilization and the moment that the embryo's RNA production starts. For comparison: mammalian embryos start producing their own RNA after twenty-four hours. 'When you consider the number of eggs a frog lays, and how many of those eggs successfully develop into frogs, it is not surprising that embryos get a little help from their mother', Hontelez explains. 'It is a pre-programmed system, making sure that early embryonic development usually succeeds.' Can these results now be compared to the development of mice, or even humans? 'Yes, it probably works roughly the same. The biggest difference is that mammalian embryos start producing their own RNA after the first cell division. But the time until that moment takes much longer than in frogs. It is also worth noting that the genes involved in setting up the epigenome are all involved in human cancer.'
10.1038/NCOMMS10148
Biology
New digital tool could change the way we see cells
Laura Wiggins et al, The CellPhe toolkit for cell phenotyping using time-lapse imaging and pattern recognition, Nature Communications (2023). DOI: 10.1038/s41467-023-37447-3 Journal information: Nature Communications
https://dx.doi.org/10.1038/s41467-023-37447-3
https://phys.org/news/2023-04-digital-tool-cells.html
Abstract With phenotypic heterogeneity in whole cell populations widely recognised, the demand for quantitative and temporal analysis approaches to characterise single cell morphology and dynamics has increased. We present CellPhe, a pattern recognition toolkit for the unbiased characterisation of cellular phenotypes within time-lapse videos. CellPhe imports tracking information from multiple segmentation and tracking algorithms to provide automated cell phenotyping from different imaging modalities, including fluorescence. To maximise data quality for downstream analysis, our toolkit includes automated recognition and removal of erroneous cell boundaries induced by inaccurate tracking and segmentation. We provide an extensive list of features extracted from individual cell time series, with custom feature selection to identify variables that provide greatest discrimination for the analysis in question. Using ensemble classification for accurate prediction of cellular phenotype and clustering algorithms for the characterisation of heterogeneous subsets, we validate and prove adaptability using different cell types and experimental conditions. Introduction Heterogeneity in whole cell populations is a long-standing area of interest 1 , 2 , 3 and previous studies have identified cell-to-cell phenotypic and genotypic diversity even within clonally derived populations 4 . The emergence of methods such as single-cell RNA sequencing has enabled characterisation of subsets within a population from gene expression profiles 5 , yet these methods involve collection of data at discrete time points, missing the subtle temporal changes in gene expression on a continuous scale. Such methods exclude information on single-cell morphology and dynamics, yet cellular phenotype plays a crucial role in determining cell function 6 , 7 , disease progression 8 , and response to treatment 9 . 
There remains a demand for quantitative and temporal analysis approaches to describe the subtleties of single-cell heterogeneity and the complexities of cell behaviour. Modern advances in microscopy make it possible to produce information-rich images of cells and tissue at high throughput and of high quality. Temporal changes in cell behaviour can be observed through time-lapse imaging and features describing the cells’ behaviour over time can be extracted for analysis. However, the task of identifying individual cells and following them over time is an ongoing computer vision challenge 10, 11. Initial processing requires segmentation, the detection of cells as regions of interest (ROIs) distinguished from background, and tracking, with each cell given a unique identifier that is retained over subsequent frames. Recent work using the similarity between cell metrics on consecutive frames highlighted the importance of accurate tracking to follow cell lineage 12. Imaging artefacts vary between experiments and issues such as background noise, inhomogeneity of cell size and overlapping cells are still challenges for biomedical research 13. Reliable cell segmentation protocols are non-deterministic and experiment-specific 14, but user-friendly software systems that use machine learning algorithms are emerging to provide objective, high-throughput cell segmentation and tracking 15, 16. Recent developments to TrackMate 17 allow the results of various segmentation software to be integrated with flexible tracking algorithms and provide visualisation tools to assess both segmentation and cell tracks. Although the time series for certain cell properties, such as cell area and circularity, can be displayed, the extraction and analysis of descriptive time series is not within the scope of the TrackMate software.
Comparison of the tracked cells’ behaviour is challenging as cells are tracked for different numbers of frames, with frames missing where cells leave the field of view. This has meant that analysis of any extracted features has been limited to visualisation. CellPhe interpolates the time series and then calculates a fixed number of variables that characterise each feature’s time series: the ‘features of features’! Here we present CellPhe, a pattern recognition toolkit that uses the output of segmentation and tracking software to provide an extensive list of features that characterise changes in the cells’ appearance and behaviour over time. Customised feature selection allows the most discriminatory variables for a particular objective to be identified. These extracted variables quantify cell morphology, texture and dynamics and describe temporal changes and can be used to reliably characterise and classify individual cells as well as cell populations. To ensure precise quantification of cell morphology and motility, and to monitor major cellular events such as mitosis and apoptosis, it is vital that instances of erroneous segmentation and tracking are removed from data sets prior to downstream analysis 18. Manual removal of such errors is heavily labour-intensive, particularly when time-lapses take place over several days. To maximise data quality for downstream analysis, CellPhe includes the recognition and removal of erroneous cell boundaries induced by inaccurate segmentation and tracking. We demonstrate the use of ensemble classification for accurate prediction of cellular phenotype and clustering algorithms for identification of heterogeneous subsets. We exemplify CellPhe by characterising the behaviour of untreated and chemotherapy-treated breast cancer cells from ptychographic time-lapse videos.
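The interpolate-then-summarise step described above can be sketched as follows; the summary variables shown (mean, standard deviation, maximum, ascent, descent) are a small illustrative subset of CellPhe's full variable list, and the frame values are invented.

```python
# Interpolate a per-cell feature onto every frame, then summarise the time
# series with a fixed set of variables (illustrative subset of CellPhe's).
import numpy as np

def summarise(frames, values, n_frames):
    full = np.interp(np.arange(n_frames), frames, values)  # fill gaps linearly
    diffs = np.diff(full)
    return {
        "mean": full.mean(),
        "std": full.std(),
        "max": full.max(),
        "ascent": diffs[diffs > 0].sum(),    # total rise over the track
        "descent": -diffs[diffs < 0].sum(),  # total fall over the track
    }

# a cell tracked on frames 0, 1, 3 and 4, with frame 2 missing
stats = summarise([0, 1, 3, 4], [10.0, 12.0, 11.0, 15.0], 5)
print(stats["ascent"], stats["descent"])  # 6.0 1.0
```

Summarising every feature this way gives each cell a fixed-length vector regardless of how many frames it was tracked for, which is what makes downstream feature selection and classification possible.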
Quantitative phase images (QPI) 19 , 20 , 21 avoid any fluorescence-induced perturbation of the cells but segmentation accuracy can be affected by reduced differences in intensity between cells and background in comparison to fluorescent labelling. We show that our methods successfully recognise and remove a population of erroneously segmented cells, improving data set quality. Morphological and dynamical changes induced by chemotherapeutics, particularly at low drug concentration, are often more subtle than those that discriminate distinct cell types and we demonstrate the ability of CellPhe to automatically identify time series differences induced by chemotherapy treatment, with the chosen variables proving statistically significant even when not observable by eye. The complexities of heterogeneous drug response and the problem of drug resistance further motivate our chosen application. The ability to identify discriminatory features between treated and untreated cells can allow automated detection of “non-conforming” cells such as those that possess cellular drug resistance. Further investigation of such features could elucidate the underlying biological mechanisms responsible for chemotherapy resistance and cancer recurrence. We validate the adaptability of CellPhe with both a different cell type and a different drug treatment and show that variables are selected according to experimental conditions, tailored to properties of the cell type and drug mechanism of action. CellPhe is available on GitHub as an R package with a user-friendly interactive GUI that allows completely unbiased cell phenotyping using time-lapse data from fluorescence imaging as well as ptychography. A working example guides the user through the complete workflow and a video demonstrating the GUI is also provided. 
Results Overview of CellPhe CellPhe is a toolkit for the characterisation and classification of cellular phenotypes from time-lapse videos; a diagrammatic summary of CellPhe is provided in Fig. 1. Experimental design is determined by the user prior to image acquisition where seeded cell types and pharmacology are specific to the user’s own analysis. Example uses are discrimination of cell types (e.g., neurons vs. astrocytes), characterisation of disease (e.g., healthy vs. cancer), or assessment of drug response (e.g., untreated vs. treated). The user can then time-lapse image cells for the desired amount of time, using an imaging modality of their choice. Once images are acquired and segmentation and tracking of cells are complete, cell boundary coordinates are exported and used for calculation of an extensive list of morphology and texture features. These, together with dynamical features and extracted time series variables, are used to aid removal of erroneous segmentation by recognition of error-induced interruption to cell time series. Once all predicted segmentation errors have been removed from data sets, feature selection is performed and only features providing separation above an optimised threshold are retained. This identifies a list of most discriminatory features and allows the user to explore biological interpretation of these findings. The extracted data matrices are then used as input for ensemble classification, where the phenotype of new cells can be accurately predicted. Furthermore, clustering algorithms can be used to identify heterogeneous subsets of cells within the user’s data, both inter- and intra-class. Fig. 1: Summary of the CellPhe toolkit. Following time-lapse imaging, acquired images are processed and segmentation and tracking recipes implemented. Cell boundary coordinates are exported, features extracted for each tracked cell and the time series summarised by characteristic variables.
Predicted segmentation errors are excluded and optimised feature selection performed using a threshold on the class separation achieved. Finally, multiple machine learning algorithms are combined for classification of cell phenotype and clustering algorithms utilised for identification of heterogeneous cell subsets. This figure was created with BioRender.com. Full size image The remaining results exemplify the use of CellPhe with a biological application, characterisation and classification of chemotherapeutic drug response. We look at each of the CellPhe stages in detail (segmentation error removal, feature selection, ensemble classification and cluster analysis) and demonstrate that each step provides interpretable, biologically relevant results to answer experiment-specific questions and aid further research. CellPhe application: characterising chemotherapeutic drug response The 231Docetaxel data set, obtained from multiple experiments involving MDA-MB-231 cells, both untreated and treated with 30 μM docetaxel, is the main data set used to demonstrate our method. We show that the same analysis pipeline can be applied to other data sets by considering both a different cell line, MCF-7, in the MCF7Docetaxel data set, and a different drug, doxorubicin, with the 231Doxorubicin data set. In each case, we remove segmentation errors, as described in Section 2.5, before using feature selection (Section 2.6) to identify discriminatory variables tailored to the particular data set. We show that different variables are chosen depending on the inherent nature of the cell line and the effect of the drug in question. Using these features in classification algorithms, we characterise and compare the behaviour over time of untreated and treated cells. Segmentation error removal We improve the quality of our data sets prior to untreated vs. treated cell classification by automating detection of segmentation errors and optimising the exclusion criteria of predicted errors. 
Comparison of time series for cells with and without segmentation errors showed many of our features to be sensitive to such errors, motivating the need to remove these cells prior to treatment classification. Size metrics, such as volume, were particularly affected by segmentation errors as under- or over-segmentation could result in halving or doubling of cell volume respectively (Fig. 2 a, b). Such noticeable disruption to the time series of several features suggested that reliable detection of segmentation errors would be possible. Fig. 2: Characterisation of segmentation errors. a Volume time series for a correctly segmented cell and b a cell experiencing segmentation errors, demonstrating greater fluctuation in volume when a cell experiences segmentation errors. Examples of test set cells classified as c correct segmentation and d segmentation error. Note that the scale bar applies to all cell images in c, d. e Box and whisker plots of features that are significant for identifying segmentation errors in the 231Docetaxel training set (****: p < 0.0001). The median value is shown by the line within the box representing the interquartile range (IQR), with lines at the 25th and 75th percentile; whiskers extend to the maximum and minimum values. p values were calculated using a two-tailed, non-parametric Mann-Whitney U test at the 95% confidence interval. n = 1702 and 241 for correctly segmented cells and segmentation errors respectively. f A representative 231Docetaxel trained decision tree, demonstrating how size, shape, texture and density are used in combination to make classifications. Source data for e, f are provided in the Source Data file. Full size image After excluding 62 instances identified as tracked cell debris, a training data set for MDA-MB-231 cells (from the 231Docetaxel data set) was obtained, consisting of 1701 correctly segmented cells and 241 cells with segmentation errors.
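The two-tailed Mann-Whitney U comparison referred to in Fig. 2e can be sketched with a pure-Python U statistic; pairwise counting is adequate at these toy sample sizes, and the feature values below are invented for illustration (the published analysis used the standard test).

```python
# Pure-Python Mann-Whitney U statistic (pairwise counting, fine for small
# samples); the feature values are invented to mimic Fig. 2e-style data.
def mann_whitney_u(xs, ys):
    """U for xs vs ys: each win counts 1, each tie counts 0.5."""
    u = 0.0
    for x in xs:
        for y in ys:
            if x > y:
                u += 1.0
            elif x == y:
                u += 0.5
    return u

correct = [1.0, 1.2, 0.9, 1.1]  # volume s.d. of correctly segmented cells
errors = [2.5, 3.1, 2.8]        # larger fluctuation for segmentation errors
print(mann_whitney_u(correct, errors))  # 0.0: complete separation of groups
```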
The number of cells in the segmentation error class was doubled using SMOTE and the resulting data set of 2184 observations was used for the classification of segmentation errors as described in Section 2.5. The MDA-MB-231 cells (from 231Docetaxel and 231Doxorubicin, both untreated and treated) that were not used for training formed independent test sets (Table 1). Table 1 Segmentation error prediction on the test data Full size table A total of 223 of the 1478 cells in the 231Docetaxel test set were predicted to be segmentation errors. Of these, 217 were confirmed by eye to be true segmentation errors, most of which were due to under- or over-segmentation throughout their time series. Other segmentation issues observed included background pickup, cells swapping cell ID, and cells repeatedly entering and exiting the field of view, all of which result in problem time series (Fig. 2 c, d). Of the remaining six cells that were misclassified as segmentation errors, one was a large cell and the other five were cells tracked before, during, and after attempted mitosis. Further investigation showed that the removal of these cells did not exclude an important subset from the data. This classifier was also used to identify a further 78 segmentation errors from the 955 cells in the 231Doxorubicin data set; all 78 were confirmed by eye to be true segmentation errors (Table 1). It was necessary to train a new classifier for MCF-7 segmentation error detection due to differences between the cell lines. In this case, 308 correctly segmented cells and 192 segmentation errors were identified by eye. After applying SMOTE to double the number of segmentation error observations, a classifier was trained with the resulting 692 observations as described in Section 2.5. Of the 848 cells in the MCF7Docetaxel data set, 188 were classified as segmentation errors.
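The SMOTE oversampling step used above to double the segmentation-error class can be sketched as follows; this minimal variant interpolates towards the single nearest neighbour (full SMOTE samples among k nearest neighbours) and runs on synthetic feature vectors rather than the real training data.

```python
# Minimal SMOTE-style oversampling (k=1 neighbour) on synthetic data; full
# SMOTE interpolates towards one of k randomly chosen nearest neighbours.
import numpy as np

def smote_double(X, rng):
    """Return X plus one synthetic point per sample, doubling the class."""
    synthetic = []
    for i, x in enumerate(X):
        dists = np.linalg.norm(X - x, axis=1)
        dists[i] = np.inf                    # exclude the point itself
        neighbour = X[np.argmin(dists)]      # nearest same-class sample
        t = rng.random()
        synthetic.append(x + t * (neighbour - x))  # point on the segment
    return np.vstack([X, synthetic])

rng = np.random.default_rng(1)
minority = rng.normal(size=(241, 4))  # 241 segmentation-error cells, 4 features
augmented = smote_double(minority, rng)
print(augmented.shape)  # doubled class: (482, 4)
```

Because synthetic points lie on segments between existing minority samples, they stay inside the region the class already occupies rather than duplicating observations exactly.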
Of these, 185 were confirmed by eye to be true segmentation errors; the remaining three were large cells or cells tracked before, during and after attempted mitosis. As decision trees are used in the identification of segmentation errors, our feature selection is not required. However, we still calculated separation scores for the MDA-MB-231 training data to investigate the effect of such errors. As might be expected, volume was most affected, with segmentation errors resulting in larger standard deviation, ascent and maximum value. Other features with high separation scores included area as well as spatial distribution descriptors with the highest thresholds, features that detect the clustering of high intensity pixels, characteristic of cell overlap and over-segmentation (Fig. 2 e). Analysis of the trained decision trees showed that a combination of size, shape, texture and density variables frequently formed the most important features for detecting segmentation errors with MDA-MB-231 cells; see Fig. 2 f for an example. For the MCF7Docetaxel data set, velocity was found to be important in determining whether or not a cell experienced segmentation errors in addition to texture and shape variables. The cell centroid, used to determine position and hence velocity, is affected by boundary errors and so high velocity, uncharacteristic of MCF-7 cells, is a good indication of segmentation error for these cells. Feature selection For the 231Docetaxel data set, the calculation of separation scores identified variables that provided good discrimination between untreated MDA-MB-231 cells and those treated with 30 μM docetaxel. As separation scores do not provide information on how these variables work in combination, we performed Principal Component Analysis (PCA) to explore relationships between discriminatory variables. Differences in the appearance of MDA-MB-231 cells induced by docetaxel treatment were observed by eye from cell time-lapses.
Untreated cells displayed a spindle-shaped morphology (a circular cross-section with tapering at both ends), with contractions and protrusions facilitating migration. Cells that received treatment were generally dense and spherical, and increased in size following a failed attempt at cytokinesis (Fig. 3 a, b). Discriminatory features identified by calculation of separation scores were consistent with differences observed by eye; the 100 variables that achieved greatest separation are shown in Fig. 3 c. Texture, shape, and size variables provided greatest discrimination of untreated and treated cells. Untreated cells experienced increased elongation throughout the time-lapse and displayed irregular, spindle-shaped morphology in comparison to the generally spherical appearance of treated cells. Furthermore, separation scores highlighted differences in the texture of cells, with intensity quantile metrics characterising changes in granularity of cells induced by drug treatment. Fig. 3: Discrimination between treated and untreated cells for MDA-MB-231 with docetaxel. Images taken from cell time-lapses of a untreated MDA-MB-231 cells and b 30 μM docetaxel treated MDA-MB-231 cells. Scale bar = 200 μm. Increased cell count at 49h post-treatment demonstrates healthy proliferation of untreated cells. Static cell count at 49h for treated cells is a result of cell cycle arrest and failed cytokinesis, leading to enlarged cell phenotype. c Features with the top 100 highest separation scores, colour-coded according to feature type. Texture, shape, and size features provide greatest separation. d Principal Component Analysis (PCA) scores plot with points colour-coded according to true class label. Observable separation of classes along PC1 demonstrates that the greatest source of variance within the data arises due to class differences. Only features with the 100 highest separation scores were included in PCA.
e PCA biplot demonstrating how features with the 100 highest separation scores work in combination to discriminate between untreated and 30 μM docetaxel-treated MDA-MB-231 cells. Greater ascent and descent can be observed for untreated cells, indicating greater activity across a range of features for untreated cells. f Representative feature time series plots for untreated and 30 μM docetaxel-treated MDA-MB-231 cells. Untreated cells experience greater fluctuation within their time series in comparison to treated cells where activity is more stabilised. Source data for c – e are provided in the Source Data file. Full size image Principal Component Analysis (PCA) demonstrated that the main variance within the data arises due to class differences, with separation of classes observed across PC1 which explains 66% of the total variance (Fig. 3 d). The dispersion of points within the scores plot illustrates heterogeneity of cells both inter- and intra-class. The non-conformity of some cells, for example, treated cells behaving as untreated cells, is demonstrated by points clustering within the opposite class. Analysis of PCA loadings highlighted increased ascent, descent, and standard deviation for untreated cells, as can be observed from the PCA biplot in Fig. 3e. Although descent variables appear to have opposite loadings to all other variables, this is in fact only due to their negative values. As the majority of untreated cells had negative PC1 scores, we deduced that greater standard deviation, ascent and descent of features for untreated cells indicates that these cells experience increased fluctuation throughout their time series. As treated cells mainly had positive PC1 scores, they experience less fluctuation throughout their time series and instead display greater stability. Identified differences in feature time series are visualised in Fig. 3f.
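A PCA of the kind shown in Fig. 3d can be sketched with a plain SVD on standardised features; the two Gaussian clouds below are synthetic stand-ins for the selected top-100 variables of treated and untreated cells, not CellPhe's real feature matrix.

```python
# Sketch of class separation along PC1 on two synthetic 'classes'; the
# SVD-based PCA below is generic, not CellPhe's implementation.
import numpy as np

rng = np.random.default_rng(2)
treated = rng.normal(loc=0.0, scale=1.0, size=(50, 10))
untreated = rng.normal(loc=3.0, scale=1.0, size=(50, 10))  # shifted in every feature
X = np.vstack([treated, untreated])

Xs = (X - X.mean(axis=0)) / X.std(axis=0)   # standardise each feature
U, s, Vt = np.linalg.svd(Xs, full_matrices=False)
scores = Xs @ Vt.T                          # PC scores for every cell
explained = s**2 / (s**2).sum()             # fraction of variance per PC

# the two classes fall on opposite sides of zero along PC1
print(explained[0] > 0.5)
print(scores[:50, 0].mean() * scores[50:, 0].mean() < 0)
```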
We assessed the adaptability of our feature selection method by calculating separation scores for both a different cell line and a different treatment, using PCA to evaluate the main sources of variance. We compared MCF-7 cells treated with 1 μM docetaxel with untreated MCF-7 cells, and MDA-MB-231 cells that were treated with 1 μM doxorubicin with untreated MDA-MB-231 cells and found that changes in the morphology and motility of cells upon treatment were both drug and cell-line specific with different variables selected (Fig. 4). Fig. 4: Discrimination between treated and untreated cells for MCF-7 with docetaxel and MDA-MB-231 with doxorubicin. Images taken from cell time-lapses of a untreated and 1 μM docetaxel treated MCF-7 cells and b untreated and 1 μM doxorubicin treated MDA-MB-231 cells. Scale bar = 200 μm. Differences in cell count following treatment can be observed for both due to cell cycle arrest induced by docetaxel or doxorubicin respectively. Docetaxel treated MCF-7 cells display enlarged cell phenotype at the 49h time point due to failed cytokinesis. In comparison, differences in morphology are more subtle for doxorubicin treated MDA-MB-231 cells at the 49h time point. Features with the top 100 highest separation scores, colour-coded according to feature type for c MCF7Docetaxel, where cell density and texture provide greatest separation, and d 231Doxorubicin where shape and movement features provide greatest separation. Principal Component Analysis (PCA) scores plot with points colour-coded according to true class label for e MCF7Docetaxel and for f 231Doxorubicin. Only features with the 100 highest separation scores were included in PCA. Source data for c – e , f are provided in the Source Data file. Full size image As was observed within the 231Docetaxel time-lapses, cells increased in size due to failed cytokinesis.
However, MCF-7 cells maintained a polygonal, epithelial-like morphology following treatment similar to that of the untreated population. Conversely, remarkable differences in cellular dynamics were observed within the 231Doxorubicin data set, with motility of cells being severely hindered following treatment, particularly after the 24-hour time point. Only subtle differences in size and morphology of cells were observed by eye, with doxorubicin treated cells appearing slightly enlarged as a result of cell cycle arrest. Both untreated and treated sets contained examples of cells in G1 and G2, hence varied cell morphology can be observed within both (elongated and adherent cells in G1; round and dense morphology in G2). The 100 variables that achieved greatest separation for each of the MCF7Docetaxel and 231Doxorubicin data sets are shown in Fig. 4 c, d. Density variables were highly discriminatory for untreated and docetaxel treated MCF-7 cells, characterising decreased proliferation and cell-cell adhesion induced by drug treatment. Size, shape and texture variables were also identified as most discriminatory with variables such as length, width and area characterising the enlarged cell shape of treated cells. Spatial distribution variables were chosen for several intensity thresholds, demonstrating differences in the clustering of pixels following docetaxel treatment. As was observed by eye, movement features formed the majority of discriminatory variables for the 231Doxorubicin data set, with untreated cells having greater velocity, tracklength and displacement than treated cells. Differences in movement were also described through density ascent and descent, as cell density fluctuated more for untreated cells due to the increased likelihood of passing neighbouring cells when migrating. Subtle differences in cell shape and size observed by eye upon doxorubicin treatment were described by changes in rectangularity, width and radius variables.
Notably, both data sets received lower separation scores than the 231Docetaxel data set, with 231Doxorubicin having the lowest. This effectively provides a measure of class similarity, with high separation scores for 231Docetaxel indicative of significant changes to cells upon treatment and low separation scores for 231Doxorubicin suggesting these changes are more subtle. PCA scores plots obtained with the selected features are shown in Fig. 4 e, f. Differences between classes can be observed for the MCF7Docetaxel data set, with separation of classes along PC1 (40% of the total variance) and PC2 (13% of the total variance). The PCA scores plot for 231Doxorubicin shows the greatest source of variance to be due to class differences, with separation of classes along PC1 (49% of the total variance). All PCA scores plots demonstrated the potential to characterise untreated and treated cell behaviour, with feature-selected variables providing good distinction of classes which was improved by using variables in combination. Classification of treated and untreated cells We found that the distribution of separation scores differed for each data set, with the 231Docetaxel set having the greatest number of variables achieving high separation, followed by MCF7Docetaxel and 231Doxorubicin generally having much lower separation scores (Fig. 5 a, b). Optimal separation thresholds of 0.075, 0.025 and 0.025 were obtained for 231Docetaxel, MCF7Docetaxel and 231Doxorubicin respectively, resulting in 437, 539 and 442 variables (of a possible 1111) being selected for classifier training. Fig. 5: Analysis of misclassified cells. a The number of variables with separation scores above different thresholds. A greater number of variables achieve high separation for 231Docetaxel in comparison to 231Doxorubicin and MCF7Docetaxel. b Optimisation of separation threshold for each data set.
Thresholds of 0.075, 0.025, and 0.025 were selected for 231Docetaxel, MCF7Docetaxel and 231Doxorubicin respectively resulting in 437, 539 and 442 variables being used for classifier training. c Sub-populations within each class, colour-coded according to the ideal final classification of each sub-population. Non-conforming cells for each class form a subset of misclassified cells. d Examples of docetaxel treated MDA-MB-231 cells misclassified as untreated. Time-lapse images demonstrate how these cells exhibit an elongated morphology characteristic of migratory untreated cells, note that the scale bar applies to all cell images. Time series plots for cell length demonstrate the fluctuation in shape of these cells, typical of untreated cells. e The percentage of cells predicted as untreated for a range of drug concentrations ( \({\log }_{10}\) scale). For all three data sets, this percentage decreases as drug concentration increases due to a greater number of cells responding to treatment at higher concentrations. Lines were fitted using asymmetric, five parameter, non-linear regression. f Positive correlation between the total volume rate of growth and the percentage of cells predicted as untreated, with higher volume growth rates associated with a higher number of cells being predicted as untreated. Linear regression slopes were found to be significant ( p values shown). R 2 correlation coefficients are also provided, demonstrating positive correlation for each data set. p values were calculated using an F -test with 6 degrees of freedom. Source data for a , b , e , f are provided in the Source Data file. Full size image Having chosen an optimal separation threshold, we trained an ensemble classifier for each data set as described in Section 2.6. Classification accuracy scores for training and test sets obtained using our ensemble classifier are provided in Table 2 . 
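The ensemble vote underlying this classification can be sketched as a simple majority vote over several base learners; the three threshold "classifiers" and the feature cut-offs below are invented placeholders, not CellPhe's actual trained models.

```python
# Majority-vote ensemble sketch; the three threshold 'classifiers' and the
# feature cut-offs are invented placeholders, not CellPhe's base learners.
from collections import Counter

def majority_vote(classifiers, cell):
    votes = [clf(cell) for clf in classifiers]
    return Counter(votes).most_common(1)[0][0]

clfs = [
    lambda c: "treated" if c["volume"] > 2000 else "untreated",
    lambda c: "treated" if c["velocity"] < 0.5 else "untreated",
    lambda c: "treated" if c["sphericity"] > 0.8 else "untreated",
]

cell = {"volume": 2500, "velocity": 0.3, "sphericity": 0.6}
print(majority_vote(clfs, cell))  # two of three base learners vote "treated"
```

Combining several weak decision rules this way tends to be more robust than any single rule, which is the rationale for ensemble classification here.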
Through visual inspection, we found that misclassifications formed subsets of cells whose behaviour deviated from that of the main population; we call this subset non-conforming (Fig. 5 c). For untreated cells, we found that healthy, proliferating cells were correctly classified, whereas less motile cells, cell debris or large, non-motile mutant cells were instead classified as treated. For treated cells, we found that cells exhibiting the drug-induced phenotypic differences identified through feature selection were classified as treated. However, treated cells displaying behaviour similar to that of an untreated cell, such as increased migration or fluctuation and elongation in cell shape, were classified as untreated (Fig. 5 d). Table 2 Ensemble classification accuracy scores for each data set Full size table We found that the proportion of non-conforming treated cells, those classified as untreated, decreased as drug concentration increased for all three data sets (Fig. 5 e). To explore the connection between the proportion of non-conforming treated cells and the population drug response of each treated set, we considered the total volume growth rate at each drug concentration in relation to the percentage of cells predicted as untreated (Fig. 5 f). We found that the overall growth rate decreased with increased drug concentration as more cells responded at higher concentrations. This correlated positively with the percentage of cells predicted as untreated, with a greater percentage of cells predicted as untreated where the volume growth rate was high and proliferation was still occurring. Subset identification Classification accuracy scores for the untreated and treated cell populations were imbalanced across all three of the data sets (Table 2 ).
Imbalance of classification accuracy scores in binary classification is often a result of hidden stratification 22 , where poor performance of one class is a result of misclassifications of important, unlabelled subsets. To investigate this phenomenon we performed hierarchical clustering on 231Docetaxel treated cells and the obtained dendrogram is provided in Fig. 6 a, b, with examples of cells from each cluster. Fig. 6: Cluster analysis of treated cells. a Dendrogram obtained from hierarchical clustering of 231Docetaxel treated cells, with 5 clusters coloured. b Examples of cells from each cluster with background colours identifying the cluster, note that the scale bar applies to all cell images. Cells within a cluster share similar properties but differ to cells in other clusters. c Density plots of mean cell volume, colour-coded according to cluster. The grey, dashed density plot represents 231Docetaxel untreated cells for reference. Cluster 4 (cell debris cluster) has the greatest leftward shift due to cells losing volume upon cell death. Clusters 1 and 2 primarily span the same range of volumes as the untreated set as cells in these clusters have not yet attempted cytokinesis. Clusters 3 and 5 have mean volumes greater than the untreated set as cells in these clusters have continued to grow following failed cytokinesis. d , e k -means clustering of 231Doxorubicin test set treated cells. Cells are colour-coded according to which cluster they were assigned. f The number of cells predicted as treated for each of the clusters. Cluster 1 was formed of successfully treated cells with 91% (30/33) of cells correctly classified as treated, whereas cluster 2 formed a subset of non-conforming treated cells, with only 31% (10/32) correctly classified as treated. g Increased velocity and ascent in cell elongation are characteristic of untreated cells.
These metrics show an extremely significant decrease for cells in cluster 1 but no significant difference for cells in cluster 2. Extremely significant differences are observed between cluster 1 and cluster 2, highlighting the presence of subsets within the treated cell population (ns: p ≥ 0.05, ****: p < 0.0001, dashed lines in violin plots are representative of the lower quartile, median and upper quartile). Exact p values were as follows for comparison of mean velocity: Untreated vs. cluster 1: p = 1.5 × 10 −12 , cluster 1 vs. cluster 2: p = 1.8 × 10 −9 , untreated vs. cluster 2: p = 0.3368. Exact p values were as follows for comparison of ascent in elongation: Untreated vs. cluster 1: p = 2 × 10 −14 , cluster 1 vs. cluster 2: p = 5 × 10 −8 , untreated vs. cluster 2: p = 0.1983. p values were calculated using a two-tailed, non-parametric Mann–Whitney U test at the 95% confidence interval. Source data for a , c – e , g are provided in the Source Data file. Full size image Figure 6 c shows the distribution of mean volumes for each cluster in comparison to the untreated MDA-MB-231 population. Clusters 1 and 2 span a similar range of volumes to the untreated set, whereas clusters 3 and 5 have greater mean volumes. Cluster 4 is formed primarily of cell debris as a result of cell death with mean volumes much lower than those of the untreated set. Cells in the same cluster share similar properties and morphological differences between clusters of different cell cycle states can be observed. For example, cells in clusters 1 and 2 are much smaller and brighter than cells in clusters 3 and 5 as the cells are heading towards attempted mitosis, confirmed by visual inspection of cell time-lapses, and hence resemble untreated mitotic cells. The PCA biplot in Fig. 6 d shows how variables work in combination to determine cell clusters. Clusters 1 and 2 are generally bright and spherical, similar to a mitotic-treated cell, as these cells are tracked prior to failed cytokinesis.
Cells that have attempted to split, clusters 3 and 5, are larger, longer, wider and display greater irregularity in shape. These cells become less dense and are often multinucleated resulting in changes to texture features. Cell debris is best distinguished by granularity, hence texture metrics are fundamental in identifying these instances. Clusters also spanned a range of mean cell volumes beyond those of the untreated set when hierarchical clustering was repeated for MCF7Docetaxel-treated cells. However, this was not the case for 231Doxorubicin-treated cells and therefore k -means clustering was used to explore the connection between misclassifications and hidden subsets in the 231Doxorubicin treated cell test set. Two distinct clusters were obtained (Fig. 6 e), cluster 1 was formed of 33 cells and cluster 2 of 32 cells. We calculated classification accuracy scores for the two clusters individually and found that 91% of cells in cluster 1 were correctly classified as treated but only 31% in cluster 2 (Fig. 6 f). The increased migration and fluctuation in shape of cells in cluster 2 mean these cells have greater similarity to the untreated population (Fig. 6 g). These non-conforming treated cells form the majority of treated cell misclassifications in the 231Doxorubicin test set and highlight the presence of heterogeneous subsets within a population. Notably there was a greater number of misclassifications for untreated MCF-7 cells in comparison to the docetaxel-treated set. Cluster analysis demonstrated the presence of heterogeneous subsets within the untreated population, with one cluster, in particular, consisting mainly of misclassified cells (Supplementary Figure 1 ). Texture metrics discerned this cluster from other untreated cell clusters, containing several instances of cell debris that were understandably classified as non-conforming. Other cells within this cluster shared similarities in texture to cell debris. 
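The k-means subset analysis above can be sketched with a plain Lloyd's-algorithm implementation (a Python illustration with synthetic feature vectors standing in for the 231Doxorubicin data; CellPhe itself performs k-means in R, and the cluster sizes and accuracy split below are invented to mirror the reported result):

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    """Plain Lloyd's algorithm: alternate nearest-centroid assignment
    and centroid update until the centroids stop moving."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        new = centroids.copy()
        for j in range(k):
            if np.any(labels == j):  # guard against empty clusters
                new[j] = X[labels == j].mean(axis=0)
        if np.allclose(new, centroids):
            break
        centroids = new
    return labels

# Synthetic treated cells: a responding subset and a shifted,
# non-conforming subset that resembles untreated behaviour
rng = np.random.default_rng(1)
responding = rng.normal(0.0, 0.5, size=(33, 3))
nonconforming = rng.normal(5.0, 0.5, size=(32, 3))
X = np.vstack([responding, nonconforming])
labels = kmeans(X, k=2)

# Per-cluster accuracy against hypothetical classifier output
# (1 = predicted treated): one cluster is fully recovered as treated,
# the non-conforming cluster is not
predicted_treated = np.array([1] * 33 + [0] * 32)
cluster_accuracy = [predicted_treated[labels == j].mean() for j in (0, 1)]
```

Scoring classification accuracy per cluster, as in Fig. 6 f, is what exposes a non-conforming subset hiding inside an apparently poor overall accuracy.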
Compatibility with fluorescence images and TrackMate TrackMate-Cellpose 17 was used to demonstrate the compatibility of CellPhe with outputs obtained from alternative segmentation and tracking software and show that CellPhe extends to fluorescence time-lapse imaging. Ptychographic and fluorescence time-lapse images of untreated and docetaxel-treated MDA-MB-231 cells stably expressing dsRed were acquired in parallel (Fig. 7 a). Cell segmentation from the fluorescence images was performed using Cellpose and segmented cells were then tracked using TrackMate resulting in 123 cell tracks of greater than or equal to 50 frames (Fig. 7 b). The resulting folders of cell ROIs and TrackMate feature tables were used as input for CellPhe to extract single-cell phenotypic metrics to describe cell behaviour over time. An optimal separation threshold of 0.3 was determined for discrimination between untreated and treated cells, with 231 variables achieving separation scores greater than the threshold (Fig. 7 c). As observed with the phase images, size, shape, and texture variables provide the greatest separation, with cell density amongst the most discriminatory variables. Good separation of untreated and treated cells can be observed within the PCA scores plot in Fig. 7 d, supporting the use of CellPhe for cell phenotyping from fluorescence images. Fig. 7: Application of CellPhe to fluorescence images. a Images taken from cell time-lapses of untreated and 1 μM docetaxel treated MDA-MB-231 cells stably expressing dsRed. Phase and fluorescence images were acquired in parallel. Scale bar = 200 μm. b Representative image of Cellpose segmentation on a fluorescent image of MDA-MB-231 cells stably expressing dsRed with cell tracks obtained from TrackMate for untreated MDA-MB-231 cells stably expressing dsRed. Only cell tracks greater than or equal to 50 frames are displayed. 
c Features with separation scores greater than or equal to 0.3, the optimal separation threshold, colour-coded according to feature type. Texture, density, shape and size features provide greatest separation. d Principal Component Analysis (PCA) scores plot with points colour-coded according to true class label. Observable separation of classes along PC1 demonstrates that the greatest source of variance within the data arises due to class differences. Only features with separation score greater than or equal to 0.3 were included in PCA. Source data for c , d are provided in the Source Data file. Full size image Discussion The CellPhe toolkit complements existing software for automated cell segmentation and tracking, using their output as a starting point for bespoke time series feature extraction and selection, cell classification and cluster analysis. Erroneous cell segmentation and tracking can significantly reduce data quality but such errors often go undetected and can negatively influence the results of automated pattern recognition. CellPhe’s extensive feature extraction followed by customised feature selection not only allows the characterisation and classification of cellular phenotypes from time-lapse videos but provides a method for the identification and removal of erroneous cell tracks prior to these analyses. Attribute analysis showed that different features were chosen to identify segmentation errors for different cell lines. For example, sudden increases in movement resulting from large boundary changes can indicate segmentation errors for MCF-7 cells, contrasting with their innate low motility. On the other hand, size and texture variables provide better characterisation of the unexpected fluctuations in cell size and clusters of high-intensity pixels induced by segmentation errors for MDA-MB-231 cells. 
Current approaches for removal of segmentation errors are subjective and labour-intensive, requiring manual input of parameters such as expected cell size that need to be fine-tuned for different data sets. CellPhe provides an objective, automated approach to segmentation error removal with the ability to adapt to new data sets. For cell characterisation, we have shown that CellPhe’s feature selection method is able to adapt to different experimental conditions, providing discrimination between untreated and treated groups of two different breast cancer cell lines (MDA-MB-231 and MCF-7) and two different chemotherapy treatments (docetaxel and doxorubicin). The discriminatory variables identified here coincide with previously reported effects of docetaxel or doxorubicin treatment and can be interpreted in terms of the mechanism of action of each drug. Previous studies have identified a subset of polyploid, multinucleated cells following docetaxel treatment due to cell cycle arrest and occasionally cell cycle slippage 23 . Our findings support this with shape and size variables providing the greatest separation for docetaxel treatment in both MDA-MB-231 and MCF-7 cells. Many texture variables were also identified as discriminatory following docetaxel treatment, providing label-free identification of the multiple clusters of high-intensity pixels in treated cells, likely a result of docetaxel-induced multinucleation. We found that at a higher, sub-lethal concentration of 1 μM, migration of MDA-MB-231 cells was reduced with variables associated with movement providing greatest discrimination between untreated and doxorubicin treated cells. This is supported by studies that have identified changes in migration of doxorubicin-treated cells, noting that low drug concentrations in fact facilitate increased invasion 24 , 25 .
We found an imbalance in untreated and treated classification accuracy scores, with a greater proportion of treated cells misclassified for all three data sets. This consistent imbalance suggests the misclassifications are in fact representative of a subset of non-conforming, and potentially chemoresistant, cells. The concept of hidden stratification, where an unlabelled subset performs poorly during classification, has been described previously 26 and poses a challenge in medical research as important subsets (such as rare forms of disease) could be overlooked. Here, the misclassified cells could be of most interest and the ability to identify non-conforming behaviour is precisely what is required from a classifier, as treated cells that display behaviour similar to untreated cells could indicate a reduced response to drug treatment. The classification of cells treated with a range of concentrations supported this hypothesis as a greater proportion of cells were classified as untreated at lower drug concentrations, demonstrating that our trained ensemble classifier can be used to quantify drug response at both the single-cell and population level. Cluster analysis revealed cell subsets that appear to represent different responses to drug treatment. Heterogeneity of cellular drug response is a commonly reported phenomenon in cancer treatment, yet mechanisms underlying this are not well understood 27 . Analysis of cell volumes showed the mean volume of treated and untreated cells to be comparable for doxorubicin, reflecting the fact that this treatment can induce G1, S or G2 cell cycle arrest 28 . However, for docetaxel-treated cells, we found that clusters spanned a range of mean cell volumes beyond those of the untreated set for both cell lines.
Clustering allowed identification of three general responses to docetaxel treatment: pre-"cytokinesis attempt", with cells having similar volumes to the untreated MDA-MB-231 population; post-"cytokinesis attempt", where cells were tracked following failed cytokinesis and therefore continued to grow to volumes beyond those of the late stages of the untreated cell cycle; and cell death, with a final cluster composed primarily of cell debris. Furthermore, giant cell morphology has been linked with docetaxel resistance, a potential cause of relapse in breast cancer patients 9 , and through cluster analysis, we were able to identify a potentially resistant subset of very large, treated cells that could be isolated for further investigation. Our chosen application demonstrated the breadth of quantification and biological insight that can be made by following our workflow, with characterisation of drug response and detection of potentially resistant cells just two of many potential applications for CellPhe. CellPhe offers several benefits for the quantification of cell behaviour from time-lapse images. First, errors in cell segmentation and tracking can be identified and removed, improving the quality of input for downstream data analysis. This is particularly important with machine learning where automation means that such errors can easily be missed, and algorithms consequently trained with poor data. Although different cell lines have different properties that allow segmentation errors to be recognised, we have shown that ground truth data for a particular cell-line can be re-used for different experiments, in our case, different drug treatments.
Second, cell behaviour is characterised over time by extracting variables from the time series of various features. Many studies instead explore temporal changes by collecting data at discrete time points (for example, 0 and 24 hours post-treatment) and using metrics from each static image, missing behavioural changes experienced by cells on a continuous level. With CellPhe, changes over time in features that provide information on morphology, movement and texture are quantified not just by summary statistics but by variables extracted from wavelet transformation of the time series, allowing changes on different scales to be identified. Third, whilst most studies use a limited number of metrics, assessed individually for discrimination between groups 29 , 30 , CellPhe provides an extensive list of metrics and automatically determines the combination that offers greatest discrimination. The bespoke feature selection frequently found the most discriminatory variables to be those with the ability to detect changes in cell behaviour over time. Previous research in this field has focused on identification of cell types from co-cultures 31 for use in automated diagnosis of diseases such as cancer. Analysis methods for these studies are often cell-line specific, whereas CellPhe’s feature selection method is successful in identifying discriminatory variables tailored to different experimental conditions. Finally, CellPhe uses an ensemble of classifiers to predict cell status with high accuracy and we show that separation scores can be used to identify the variables associated with different cell subsets identified in cluster analysis to explore cell heterogeneity within a population, even when subtle differences are not readily visible by eye. The interactive, interpretable, high-throughput nature of CellPhe makes it suitable for all cell time-lapse applications, including drug screening or prediction of disease prognosis.
We provide a comprehensive manual with a working example and real data to guide users through the workflow step-by-step, where users can interact with each stage of the workflow and customise it to suit their own experiments. Here we demonstrated the abundance of information and insight that can be gained by following the CellPhe workflow to quantify cell behaviour from QPI images. CellPhe can be used with tracking information from multiple segmentation and tracking algorithms and different imaging modalities, including fluorescence, and would be suitable for all time-lapse studies including clinical applications. Methods Cell Culture MDA-MB-231 and MCF-7 cells (American Type Culture Collection [ATCC] catalogue numbers HTB-26 and HTB-22, respectively) were a gift from Prof. Mustafa Djamgoz, Imperial College London. MDA-MB-231 cells and MCF-7 cells were cultured separately in Dulbecco’s Modified Eagle Medium supplemented with 5% fetal bovine serum and 4 mM l-glutamine 32 . Fetal bovine serum was filtered using a 0.22 μm syringe filter prior to use to reduce artefacts when imaging. Cells were incubated at 37 °C in plastic filter-cap T-25 flasks and were split at a 1:6 ratio when passaged. No antibiotics were added to the cell culture medium. Cells were confirmed to be mycoplasma-free by the 4′,6-diamidino-2-phenylindole (DAPI) method 33 . The molecular identity of MDA-MB-231 and MCF-7 cells was verified by short tandem repeat analysis 34 . Authenticated cell stocks were stored in liquid nitrogen and thawed for use in experiments. Thawed cells were sub-cultured 1–2 times prior to discarding and thawing a new stock to ensure that the molecular identity of cells was retained throughout. In cases where dsRed expressing MDA-MB-231 cells were used, cells were sorted via FACS prior to imaging to enrich for a transfected cell population. To image the following day, cells were counted and then seeded in a Corning Costar plastic, flat bottom 24-well plate.
Cells were seeded at a density of 8000 cells per well with a final volume of 500 μL in each of the 24 wells. Pharmacology Docetaxel (Cayman Chemical Company) was prepared as 5 mg/mL in DMSO and doxorubicin (AdooQ Bioscience) as 25 mg/mL in DMSO; both were then frozen into aliquots. Once thawed, docetaxel and doxorubicin stock solutions were diluted in culture medium to give final working concentrations. Docetaxel dose-response analysis for both MDA-MB-231 and MCF-7 cells involved imaging eight wells treated with the following concentrations of docetaxel: 0 nM, 1 nM, 3 nM, 10 nM, 30 nM, 100 nM, 300 nM, 1 μM, with additional concentrations 3 μM, 10 μM and 30 μM imaged for MDA-MB-231 cells. Doxorubicin dose-response analysis for MDA-MB-231 cells involved imaging eight wells treated with the following concentrations of doxorubicin: 0 nM, 10 nM, 30 nM, 100 nM, 300 nM, 1 μM, 3 μM, 10 μM. Medium was removed from wells selected to receive treatment 30 mins prior to image acquisition, and 500 μL of desired drug concentration was added to each well. Control wells received a medium change and were treated with DMSO vehicle on the day of imaging to maintain consistent DMSO concentration throughout. Image acquisition and exportation Cells were placed onto the Phasefocus Livecyte 2 (Phasefocus Limited, Sheffield, UK) to incubate for 30 minutes prior to image acquisition to allow for temperature equilibration. One 500 μm × 500 μm field of view per well was imaged to capture as many cells, and therefore data observations, as possible. Selected wells were imaged in parallel for 48 hours at ×20 magnification with 6-minute intervals between frames, resulting in full time-lapses of 481 frames per imaged well. Phase and fluorescence images were acquired in parallel for each well. For phase images, Phasefocus’ Cell Analysis Toolbox® software was utilised for cell segmentation, cell tracking and data exportation. 
Segmentation thresholds were optimised for a range of image processing techniques, such as a rolling-ball algorithm to remove background noise, image smoothing for cell edge detection and local pixel maxima detection to identify seed points for final consolidation. The Phasefocus software outputs a feature table for each imaged well. Information on missing frames for tracked cells can be obtained from this table, which also provides descriptive features. However, most features are calculated within CellPhe and we only utilise the Phasefocus features that rely on phase information, these being the volume of the cell and sphericity 35 . For fluorescence images, the TrackMate-Cellpose ImageJ plugin was used for cell segmentation and tracking. Cells were segmented using Cellpose’s pre-trained cytoplasm model and image contrast was enhanced prior to segmentation to improve detection of cell boundaries. Once complete, TrackMate feature tables and individual cell ROIs were exported from ImageJ v2.9.0-153t. Prior to use with CellPhe, it was necessary to interpolate TrackMate-Cellpose ROIs to obtain a complete list of cell boundary coordinates. Interpolation of ROIs was performed using a custom ImageJ macro. Implementation of CellPhe Using cell boundary information from Regions of Interest (ROIs) produced by the Phasefocus software or TrackMate, a range of morphological and texture features were extracted for each cell that was tracked for at least 50 frames. Image data were imported into CellPhe using the R package tiff v0.1-11. In addition to size and shape descriptors calculated from the cell boundaries, a filling algorithm was used to determine the interior pixels from which texture and spatial features were extracted. The local density was also calculated as the sum of inverse distances from the cell centroid to those of neighbouring cells within three times the cell’s diameter.
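The local density computation described above can be sketched as follows (a Python illustration with invented centroid coordinates and diameters; the toolkit itself implements this in R):

```python
import numpy as np

def local_density(centroids, diameters, i):
    """Sum of inverse distances from cell i's centroid to the centroids of
    neighbouring cells lying within three times cell i's diameter."""
    d = np.linalg.norm(centroids - centroids[i], axis=1)
    mask = (d > 0) & (d <= 3 * diameters[i])  # exclude the cell itself
    return np.sum(1.0 / d[mask])

# Four hypothetical cells; the last lies outside the 3x-diameter radius
centroids = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 20.0], [100.0, 100.0]])
diameters = np.array([15.0, 15.0, 15.0, 15.0])
density = local_density(centroids, diameters, 0)  # 1/10 + 1/20
```

The inverse weighting means that close neighbours dominate the score, so touching cells raise the density far more than distant ones.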
A complete list of features together with their definitions is provided in Supplementary table 1 . By considering the position of a cell’s centroid on subsequent frames, variables describing the cell’s movement were extracted from the images. The current speed of the cell is estimated by considering its position in consecutive frames, taking into account any missing frames. The measure provided is proportional to, rather than equal to, velocity as this would require the rate at which frames were produced to be entered by the user for no gain in discriminatory power. The displacement, or straight line distance between the cell centroid on the current frame and the frame it was first detected in, and the tracklength, or total path length travelled by the cell up to the current frame, are also calculated. To see how these vary, the quotient of current tracklength to current displacement is also calculated. In addition to volume, calculated using phase information, the size variables determined are the cell area, as the number of pixels within (or on) the cell boundary, the length and width of the cell, determined from the minimal rectangular box that the cell can be enclosed by 36 , and the radius, as the average distance of boundary pixels from the cell centroid. We make use of an imported feature, sphericity, which requires phase information for calculation, but extract a number of other shape features within CellPhe. As well as determining the length and width from the arbitrarily oriented minimum bounding box, we use this to provide a measure of rectangularity as \(\max (x,y)/(x+y)\) where x and y are the length and width of the minimal bounding box 37 . We also consider the shape of the cell by calculating the fraction of the minimal box area that the cell area covers and by comparing the number of pixels on the boundary with the total pixels within the cell 37 . Here the number of boundary pixels is squared in the quotient to avoid the effect of cell size.
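The movement descriptors introduced earlier in this section (per-frame speed with allowance for missing frames, displacement, tracklength and their quotient) can be sketched in Python (an illustration with an invented track; not the R implementation):

```python
import numpy as np

def movement_features(frames, xs, ys):
    """Per-frame speed, displacement, tracklength and their quotient from
    centroid positions; speed divides by the frame gap so that missing
    frames do not inflate the estimate."""
    frames, xs, ys = map(np.asarray, (frames, xs, ys))
    steps = np.hypot(np.diff(xs), np.diff(ys))
    gaps = np.diff(frames)               # >1 wherever frames are missing
    speed = steps / gaps                 # proportional to velocity
    tracklength = np.concatenate([[0.0], np.cumsum(steps)])
    displacement = np.hypot(xs - xs[0], ys - ys[0])
    # Quotient is 1 on the first frame (tracklength equals displacement)
    quotient = np.divide(tracklength, displacement,
                         out=np.ones_like(tracklength),
                         where=displacement > 0)
    return speed, displacement, tracklength, quotient

# A cell moving in a straight line, with frame 3 missing from the track
speed, disp, track, quot = movement_features(
    frames=[1, 2, 4, 5], xs=[0, 3, 9, 9], ys=[0, 4, 12, 12])
```

For this straight-line track the quotient stays at 1; a wandering cell would accumulate tracklength faster than displacement and the quotient would grow.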
We also calculate the variance of the distance from the centroid to the boundary pixels, with more circular cells having less variance 37 , and a measure of boundary curvature based on the triangle inequality 38 . Finally, four shape descriptors are obtained from a polygon fitted to the cell boundary, being the mean and variance of both edge length and interior angle 39 . Textural features of each cell are represented in terms of three first order statistics calculated from the pixel intensities within the cell: mean, variance and skewness 40 . For second order texture features, we used grey-level co-occurrence matrices (GLCMs) 41 but, rather than consider the positions of pixels within a cell, we calculated GLCMs between images of the cell at different resolutions to differentiate textures that are sharp and would be lost at lower resolution from those that are smooth and would remain. This was achieved by performing a two-level 2-D wavelet transform 42 on the pixels within the axis-aligned minimum rectangle containing a cell. GLCMs were then calculated between the original interior pixels and the corresponding values from the first and second levels of the transform as well as between the two sets of transformed pixels (levels 1 and 2). Statistics first described by Haralick 43 were then calculated from each GLCM. We use 14 of the 20 Haralick features described by Löfstedt et al. 44 : Angular Second Moment, Contrast, Correlation, Variance, Homogeneity, Sum Average, Sum Variance, Entropy, Sum Entropy, Difference Variance, Difference Entropy, Information Measure of Correlation 2, Cluster Shade, Cluster Prominence. With three co-occurrence matrices, this gives 42 Haralick features. We calculated spatial distribution descriptors to quantify the uniformity or clustering of cell interior pixels at different intensity levels. IQ n is a measure of dispersion calculated for the subset of interior pixels with intensities greater than or equal to the ( n × 10)th quantile.
Based on a Poisson distribution, for which the mean is equal to the variance, the measure is calculated as the variance divided by the mean, calculated over the pairwise distances between pixels within the n th subset. IQ n = 1 indicates a random distribution whereas a value of IQ n less than 1 indicates that the pixels are more uniformly distributed and a value >1 indicates clustering. Cell tracking provides a time series for each of the 74 features extracted for a cell. The length of the time series depends on how many frames the cell has been tracked for and so differs between cells. In order to apply pattern recognition methods, we extracted a fixed number of characteristic variables for each cell from the time series for each feature. Statistical measures (mean, standard deviation, and skewness) summarise time series of varying length, but may not be representative of changes throughout the time series. Therefore, in addition to summary statistics, we calculated variables inspired by elevation profiles in walking guides, that is, the sum of any increases between consecutive frames (total ascent), the sum of any decreases (total descent) and the maximum value of the time series (maximum altitude gain). Similar variables were calculated for different levels of the wavelet transform of the time series to allow changes at different scales to be considered. The wavelet transform decomposes a time series to give a lower resolution approximation together with different levels of detail that need to be added to the approximation to restore the original time series. Using the Haar wavelet basis 45 with the multiresolution analysis of Mallat 42 allows increases and decreases in the values of the variables to be determined over different time scales. 
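The elevation-style variables and their wavelet analogues can be sketched as follows (a Python illustration using single-level Haar detail coefficients on an invented time series; CellPhe applies the full Mallat multiresolution scheme in R):

```python
import numpy as np

def elevation_variables(ts):
    """Total ascent, total descent and maximum value of a time series,
    by analogy with elevation profiles in walking guides."""
    diffs = np.diff(ts)
    ascent = diffs[diffs > 0].sum()
    descent = -diffs[diffs < 0].sum()
    return ascent, descent, ts.max()

def haar_details(ts):
    """Level-1 Haar detail coefficients, (x[2i] - x[2i+1]) / sqrt(2);
    a negative coefficient corresponds to an increase between points."""
    ts = ts[: len(ts) // 2 * 2]          # drop a trailing odd sample
    return (ts[0::2] - ts[1::2]) / np.sqrt(2)

ts = np.array([1.0, 3.0, 2.0, 5.0, 4.0, 4.0])
ascent, descent, peak = elevation_variables(ts)

details = haar_details(ts)
wavelet_ascent = -details[details < 0].sum()   # sum of negative details
wavelet_descent = details[details > 0].sum()   # sum of positive details
```

Repeating the detail calculation on successively smoothed versions of the series gives the coarser-scale ascent and descent variables described above.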
With Haar wavelets, a negative detail coefficient represents an increase from one point to the next, and so we used the sum of the negative detail coefficients to provide the equivalent to total ascent and the sum of the positive detail coefficients as total descent. Rather than an overall maximum, we use the maximum detail coefficient for the transformed time series. Occasionally the automated cell tracking misses a frame or even several frames, for example when a cell temporarily leaves the field of view. To prevent jumps in the time series, we interpolated values for the missing frames, although these values were not used to calculate statistics. After interpolation, the three elevation variables were calculated from the original time series and three wavelet levels which, together with the summary statistics, provided 15 variables for each feature (Supplementary table 2 ). The 72 extracted features together with the 2 imported features would have given 74 × 15 = 1110 variables in total, but, as one feature, the tracklength or total distance travelled up to the current frame, is monotonically increasing, the total descent is always zero and therefore variables related to tracklength descent were not used. Similarly, as the tracklength and displacement are the same for the first frame and the displacement can never be greater than the tracklength, the maximum value for their quotient will always be 1 and this variable is also not used. One further variable was introduced to summarise cell movement as the area of the minimal bounding box around a cell’s full trajectory. This area will be large for migratory cells and small for cells whose movement remains local for the duration of the time series. 
If, within a cell’s trajectory, \(\min X\) and \(\min Y\) are the minimal X and Y positions respectively with \(\max X\) and \(\max Y\) the corresponding maximal positions, then the trajectory area is defined as $$\,{{\mbox{trajectory area}}}\,=(\max X-\min X)\times (\max Y-\min Y).$$ (1) Thus, a total of 1106 characteristic variables were available for analysis and classification. To improve characterisation of cellular phenotype, we only included cells that were tracked for at least 50 frames in our analyses. Whilst the majority of these cells were correctly tracked, others had segmentation errors, with confusion between neighbouring cells, missing parts of a cell or multiple cells included. In order to increase the reliability of our results, we developed a classification process to identify and remove such cells prior to further analysis. Cells (both treated and untreated) were classified by eye to provide a training data set. Due to class imbalance, with the number of segmentation errors far less than the number of correct segmentations, the Synthetic Minority Oversampling Technique (SMOTE) 46 was performed using the smotefamily package v1.3.1 in R, with the number of neighbours K set to 3, to double the number of instances representing segmentation errors. The resulting data set with all 1111 variables was used to train a set of 50 decision trees using the tree package v1.0-4.2 in R with default parameters. For each tree, the observations from cells with segmentation errors were used together with the same number of observations randomly selected from the correctly segmented cells to further address class imbalance. For each cell, a voting procedure was used to provide a classification from the predictions of the 50 decision trees. To minimise the number of correctly tracked cells being falsely classified as segmentation errors, this class was only assigned when it received at least 70% of the votes (i.e., 35). 
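The thresholded voting used for segmentation-error detection can be sketched in Python (illustrative vote counts only; the 50 decision trees themselves are trained in R, and the 70% and five-of-ten thresholds are those stated in the text):

```python
import numpy as np

def run_flags_error(tree_votes, threshold=0.7):
    """One run of 50 trees: flag a segmentation error only if at least
    70% of the trees (i.e. 35 of 50) vote for that class."""
    return np.mean(tree_votes) >= threshold

def final_flags_error(run_results, min_runs=5):
    """Across ten repeated runs, keep the segmentation-error label only
    if it was predicted in at least five runs."""
    return sum(run_results) >= min_runs

# Hypothetical cell: flagged by 40/50 trees in six runs, 20/50 in four
runs = [run_flags_error(np.array([1] * 40 + [0] * 10))] * 6 + \
       [run_flags_error(np.array([1] * 20 + [0] * 30))] * 4
is_error = final_flags_error(runs)
```

Both thresholds are deliberately conservative, so a correctly tracked cell is rarely discarded by mistake.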
To add further stringency, the training of 50 decision trees was repeated ten times and a cell only given a final classification of segmentation error if predicted this label in at least five of the ten runs. MDA-MB-231 cells that were not used for training formed an independent test set. All cells either manually labelled as segmentation error or predicted as such were excluded from further analyses. After removing segmentation errors, the remaining data were used to form training and test sets for the classification of untreated and treated cells. Training sets were balanced prior to classifier training to mitigate bias and data from cells in the independent test sets were never used during training. A separate classifier was trained for each cell line—treatment combination, as shown in Table 3 and feature selection performed to determine the most appropriate variables in each case. Each variable was assessed using the group separation, S = V B / V W , where V B is the between-group variance: $${V}_{B}=\frac{{n}_{1}{({\bar{x}}_{1}-\bar{\bar{x}})}^{2}+{n}_{2}{({\bar{x}}_{2}-\bar{\bar{x}})}^{2}}{({n}_{1}+{n}_{2}-2)}$$ (2) and V W is the within-group variance: $${V}_{W}=\frac{({n}_{1}-1){s}_{1}^{2}+({n}_{2}-1){s}_{2}^{2}}{({n}_{1}+{n}_{2}-2)}.$$ (3) Here n 1 and n 2 denote the sample size of group 1 and group 2 respectively, \({\bar{x}}_{1}\) and \({\bar{x}}_{2}\) are the sample means, \(\bar{\bar{x}}\) the overall mean, and \({{s}_{1}}^{2}\) and \({{s}_{2}}^{2}\) are the sample variances. The most discriminatory variables were chosen for a particular data set by assessing the classification error on the training data to optimise the threshold on separation. Starting with a threshold of zero, the nth separation threshold was minimised such that the classification error rate did not increase by more than 2% from that obtained for the (n−1)th threshold. 
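Equations (2) and (3) translate directly into code; here is a small Python sketch of the separation score for a single variable (ours, for illustration; the analysis itself was done in R):

```python
def separation(group1, group2):
    """Group separation S = V_B / V_W, with the between- and
    within-group variances of Eqs. (2) and (3)."""
    n1, n2 = len(group1), len(group2)
    m1 = sum(group1) / n1
    m2 = sum(group2) / n2
    grand = (sum(group1) + sum(group2)) / (n1 + n2)  # overall mean
    dof = n1 + n2 - 2
    v_between = (n1 * (m1 - grand) ** 2 + n2 * (m2 - grand) ** 2) / dof
    s1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)  # sample variances
    s2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
    v_within = ((n1 - 1) * s1 + (n2 - 1) * s2) / dof
    return v_between / v_within
```

A large S indicates a variable whose group means differ by much more than the scatter within each group, i.e. a strongly discriminatory variable.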
The aim here was to reduce the risk of overfitting by only retaining variables achieving greater than or equal to this threshold for the next stage of classifier training. Table 3 The three data sets used in this study with the number of cells in training and test sets used for untreated vs treated classification Full size table Data were scaled to prevent large variables dominating the analysis and ensemble classification used to take advantage of different classifier properties. The predictions from three classification algorithms, Linear Discriminant Analysis (LDA), Random Forest (RF) and Support Vector Machine (SVM) with radial basis kernel were combined using the majority vote. Model performance was evaluated by classification accuracy, taking into account the number of false positives and false negatives. All classification was performed in RStudio V1.2.5042 47 using open-source packages. LDA was performed using the lda function from the MASS library 48 , SVM classification used the svm function from the package e1071 v1.7-12 49 with a radial basis kernel and the package randomForest v4.7-1.1 50 was used to train random forest classifiers with 200 trees and 5 features randomly sampled as candidates at each split. Both hierarchical clustering and k -means clustering were used to investigate subgroups within single-class data sets (i.e. treated and untreated cells separately). Data were scaled prior to clustering and analyses performed in R. Hierarchical clustering was implemented with the factoextra package v1.0.7 51 using the hcut function to cut the dendrogram into k clusters. Agglomerative nesting (AGNES) was used with Ward’s minimum variance as the agglomeration method and the Euclidean distance metric to quantify similarity between cells. k -means clustering was performed using the R stats package v4.1.3, with the number of random initial configurations set to 50. The number of clusters k was chosen to obtain clusters with meaningful interpretation. 
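The majority vote combining the three classifiers can be illustrated as below (a Python sketch rather than the authors' R code; with an odd number of classifiers and two classes, no tie-break is needed):

```python
def majority_vote(predictions):
    """Combine class labels from an odd number of classifiers
    (here LDA, random forest and SVM) by simple majority."""
    return max(set(predictions), key=predictions.count)

# e.g. two of the three classifiers predict "treated"
label = majority_vote(["treated", "treated", "untreated"])
```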
Similarities and differences between clusters were identified through evaluation of separation scores to determine discriminatory features, as well as through observation of cells within each cluster by eye. Statistics and reproducibility All tests of statistical significance within this study were performed using Graphpad Prism 9.1.0 (GraphPad Software, San Diego, CA). Data were tested for normality using the D’Agostino & Pearson test. Parametric tests ( t tests and F tests) were used where suitable with non-parametric Mann-Whitney U tests in place of t tests where data did not follow a normal distribution. Results were considered significant if p < 0.05. Levels of significance used: * < 0.05, ** < 0.01, *** < 0.001, **** < 0.0001. Full details of statistical tests used for each analysis are provided in the figure legend for the corresponding figure. Three data sets were used to demonstrate our pipeline for the classification of untreated and treated cells. For brevity we use abbreviations throughout to refer to each data set, for example, 231Docetaxel is a data set consisting of MDA-MB-231 cells, both untreated and treated with 30 μM docetaxel. This is the main data set used to develop the methods, with a training data set compiled from 6 experiments performed on different days and an independent test data set compiled from a further 3 experiments, also performed on separate days and by a different individual. We validate our methods using two further data sets, the 231Doxorubicin and MCF7Docetaxel data sets, details of which are given in Table 3 . This table also includes details of the number of cells within each training and test set. We show that the classification pipeline can be successfully reproduced using fewer experimental repeats for the 231Doxorubicin and MCF7Docetaxel data sets. The 231Doxorubicin training set consists of data from one experiment with a further, independent experiment performed on a separate day used as a test set. 
Training and test sets for MCF7Docetaxel are from the same two experiments, with random sampling used to produce independent training and test sets. Each training data set contains a balanced number of untreated and treated cells, treated with a single drug concentration. We selected 30 μM docetaxel and 1 μM doxorubicin for the experiments with MDA-MB-231 cells as the optimal doses with which to induce changes in cell morphology and migration without inducing cell death. However, a lower concentration (1 μM) of docetaxel was used for MCF-7 cells as we found that this induced similar morphological and dynamical changes to those induced by higher concentrations but with reduced cell death (Table 3 ). Reporting summary Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article. Data availability All data used to produce the results in the manuscript, including separate data that will allow the user to follow the worked example in the CellPhe user guide, are available from the Dryad Database 52 . This includes the file example_data.zip which contains all the data required to follow the worked example. A video CellPhe_GUI_demo_vid.mov that explains how to use the GUI is available from . Source data are provided with this paper. Code availability The source code for algorithms developed during this research has been deposited in GitHub, linked from 53 . The interactive CellPhe GUI can be accessed at .
Being able to observe and track the way cells change and develop over time is a vital part of scientific and medical research. Time-lapse studies of cells can show us how cells have mutated in certain environments or how they react to external influences, such as medical treatment. This information can shine a light on how disease spreads and why some patients' cells do not respond to treatment such as chemotherapy. Tracking the development of specific cell features is a difficult process; however, scientists at the departments of Biology and Mathematics at the University of York have now created a free digital tool that can help. The software package, called CellPhe, is the first of its kind, as it can extract a series of features from a cell in a time-lapse study and characterize the cells based on their behavior and internal structure. The package also automatically removes errors in cell tracking to improve data quality. In a study published April 3 in Nature Communications, CellPhe correctly identified two different sets of breast cancer cells: one treated with chemotherapy drugs and the other untreated. Comparing the two groups, CellPhe was also able to identify a potentially resistant subset of treated cells. Data like this is particularly important in understanding breast cancer and designing its treatment, as chemoresistance commonly leads to relapse in breast cancer patients. CellPhe could also have further practical applications in processes such as drug screening and the prediction of disease prognosis. Laura Wiggins from the Department of Biology said, "It is hugely exciting to be able to quantify cell behavior over time in such unprecedented detail. "A lot of hard work has gone into making CellPhe user-friendly and adaptable to new applications, so we look forward to seeing how our toolkit will be used by the community. 
We foresee CellPhe playing a pivotal role in our understanding of cellular drug response as well as the ways in which cells communicate with one another, via signaling as well as through direct contact." CellPhe is freely available online, runs on any operating system, and comes with a manual and an instruction video.
10.1038/s41467-023-37447-3
Physics
Building 3-D atomic structures atom by atom using lasers
Daniel Barredo et al. Synthetic three-dimensional atomic structures assembled atom by atom, Nature (2018). DOI: 10.1038/s41586-018-0450-2 Journal information: Nature
http://dx.doi.org/10.1038/s41586-018-0450-2
https://phys.org/news/2018-09-d-atomic-atom-lasers.html
Abstract A great challenge in current quantum science and technology research is to realize artificial systems of a large number of individually controlled quantum bits for applications in quantum computing and quantum simulation. Many experimental platforms are being explored, including solid-state systems, such as superconducting circuits 1 or quantum dots 2 , and atomic, molecular and optical systems, such as photons, trapped ions or neutral atoms 3 , 4 , 5 , 6 , 7 . The latter offer inherently identical qubits that are well decoupled from the environment and could provide synthetic structures scalable to hundreds of qubits or more 8 . Quantum-gas microscopes 9 allow the realization of two-dimensional regular lattices of hundreds of atoms, and large, fully loaded arrays of about 50 microtraps (or ‘optical tweezers’) with individual control are already available in one 10 and two 11 dimensions. Ultimately, however, accessing the third dimension while keeping single-atom control will be required, both for scaling to large numbers and for extending the range of models amenable to quantum simulation. Here we report the assembly of defect-free, arbitrarily shaped three-dimensional arrays, containing up to 72 single atoms. We use holographic methods and fast, programmable moving tweezers to arrange—atom by atom and plane by plane—initially disordered arrays into target structures of almost any geometry. These results present the prospect of quantum simulation with tens of qubits arbitrarily arranged in space and show that realizing systems of hundreds of individually controlled qubits is within reach using current technology. Main Three-dimensional atomic arrays at half filling have been obtained using optical lattices with large spacings 12 , which facilitate single-site addressability and atom manipulation 13 . As an alternative approach, here we use programmable holographic optical tweezers to create three-dimensional (3D) arrays of traps. 
Holographic methods offer the advantage of higher tunability of the lattice geometry because the design of optical potential landscapes is reconfigurable and only limited by diffraction 14 , 15 , 16 . In our experiment 14 , arbitrarily designed arrays of up to about 120 traps are generated by imprinting a phase pattern on a dipole trap beam at 850 nm with a spatial light modulator (Fig. 1a ). This phase mask is calculated using the 3D Gerchberg–Saxton algorithm, simplified for the case of point traps 17 . The beam is then focused with a high-numerical-aperture (0.5) aspheric lens under vacuum, creating individual optical tweezers with a measured 1/e 2 radius of about 1.1 μm and a Rayleigh length of approximately 5 μm. After recollimation with a second aspheric lens, the intensity of the trapping light is measured using a standard charge-coupled device (CCD) camera. An electrically tunable lens (ETL1) in the imaging path allows us to acquire series of stack images along the optical axis z , from which we reconstruct the full 3D intensity distribution. The imaging system covers a z -direction scan range of 200 μm. Fig. 1: Experimental setup and trap images. a , We combine a spatial light modulator (SLM) and a high-numerical-aperture aspheric lens (AL) under vacuum to generate arbitrary 3D arrays of traps. The intensity distribution in the focal plane is measured with the aid of a second aspheric lens, a mirror (M) and a diagnostics CCD camera (d-CCD). The fluorescence of the atoms in the traps at 780 nm is separated from the dipole trap beam with a dichroic mirror (DM) and detected using an electron-multiplying CCD camera (EMCCD). For atom assembly we use moving tweezers superimposed on the trap beam with a polarizing beam splitter (PBS). This extra beam is deflected in the plane perpendicular to the beam propagation with a 2D acousto-optical deflector (AOD), and its focus can be displaced axially by changing the focal length of an electrically tunable lens (ETL3). 
The remaining electrically tunable lenses (ETL1 and ETL2) in the camera paths allow imaging of different planes along z . The inset depicts the intensity distribution of the trap light forming a bilayer array (red) and the action of the moving tweezers on an individual atom (purple). b–d , Intensity reconstructions of exemplary 3D patterns obtained from a collection of z -stack images taken with the diagnostics CCD camera. The regions of maximum intensity form a trefoil knot ( b ), a 5 × 5 × 5 cubic array ( c ) and a C 320 fullerene-like structure ( d ). The dimensions, L x , L y , L z , of the images are the same in all the examples. Full size image Figure 1b–d shows some examples of patterns suitable for experiments with single atoms. The images are reconstructed using a maximum-intensity projection method 18 from 200 z images obtained with the diagnostics CCD camera. With about 3.5 mW of power per trap we reach depths of U 0 / k B ≈ 1 mK, where k B is the Boltzmann constant, and radial (longitudinal) trapping frequencies of around 100 kHz (20 kHz). We produce highly uniform microtrap potentials (with peak intensities differing by less than 5% root mean square) via a closed-loop optimization 14 . Rubidium-87 atoms are then loaded in the traps from a magneto-optical trap (MOT), with a final temperature of 25 μK. We detect the occupancy of each trap by collecting the fluorescence of the atoms at 780 nm with an electron-multiplying CCD camera for 50 ms. A second tunable lens (ETL2) in the imaging path is used to focus the fluorescence of different atom planes. In Fig. 2 we show the fluorescence of single atoms trapped in various complex 3D structures, some of which are relevant, for instance, to the study of non-trivial properties of Chern insulators 19 , 20 , 21 . Each example is reconstructed from a series of 100 z -stack images covering an axial range of about 120 μm. 
With no further action, these arrays are randomly loaded with a filling fraction of about 0.5; we thus average the fluorescence signal over 300 frames to reveal the geometry of the structures. Fig. 2: Single-atom fluorescence in 3D arrays. a – f , Maximum-intensity-projection reconstruction of the average fluorescence of single atoms loaded stochastically into exemplary arrays of traps. The x , y , z scan range of the fluorescence ( L x , L y , L z ) is the same for all the 3D reconstructions. Full size image For deterministic atom loading, we extend our two-dimensional (2D) atom-by-atom assembler 11 to 3D geometries. For that, we superimpose a second 850-nm laser beam (with 1/e 2 radius of about 1.3 μm) on the trapping beam, which can be steered in the x–y plane using a 2D acousto-optical deflector and in the z direction by changing the focal length of a third tunable lens (ETL3). Combined with a real-time control system, the moving tweezers can perform single-atom transport with fidelities exceeding 0.993, as shown in ref. 11 , and produce fully loaded arrays by using independent and sequential rearrangement of the atoms for each of the n p planes in the 3D structures. To explore the feasibility of plane-by-plane atom assembly, we first determine the minimal separation between layers so that each target plane can be reordered without affecting the others. To quantify this, we perform the following experiment in a 2D array containing 46 traps. We randomly load the array with single atoms and demand the atom assembler to remove all the atoms. We average over about 50 realizations and then repeat the experiment for different axial separations between the position of the moving tweezers and the trap plane. The result is shown in Fig. 3a , where we see that for separations beyond about 17 μm the effect of the moving tweezers on the atoms is negligible. 
This distance can be further reduced to about 14 μm by operating the moving tweezers with less power, without any degradation in the performance of the sorting process. In a complementary experiment, where we fully assembled small arrays, we also checked that the assembling efficiency is not affected by slight changes (below about 3 μm) in the exact axial position of the moving tweezers. Fig. 3: Fully loaded 3D arrays of single atoms. a , Recapture probability as a function of the axial distance between the focus of the moving tweezers and the plane of the atoms measured experimentally by trying to remove all the atoms from a 46-trap array. Error bars denote the standard error of the mean and are smaller than the symbol size. The line is a guide to the eye. b , Time control sequence of the experiment. We start the experiment by recording sequentially an image for each target plane. The analysis of the resulting n p images reveals the initial position of the atoms in the traps. The 2D atom assembler, in combination with an electrically tunable lens (ETL3), arranges the atoms plane by plane. Finally, a new set of sequential images is collected to capture the result of the 3D assembly. c – h , Fully loaded arrays with arbitrary geometries. All images are single shots. The models of the 3D configurations are shown for clarity; the colours of the frames around the images encode successive atomic planes. Source data Full size image We now demonstrate full loading of arbitrary 3D lattices using plane-by-plane assembly. We start by creating a 3D trap array that can be decomposed in several planes normal to z . In each plane we generate approximately twice the number of traps that we need to load, so that we can easily load enough atoms to assemble the target structure. The sequence used to create fully loaded patterns (see Fig. 
3b ) starts by loading the MOT and monitoring the atoms entering and leaving the traps by sequentially taking a fluorescence picture for each plane. We trigger the assembler as soon as there are enough atoms in each plane to fully assemble it. We then freeze the loading by dispersing the MOT cloud and record the initial positions of the atoms by another series of z -stack images. Analysis of the images reveals which traps are filled with single atoms. We use this information to compute (in about 1 ms) the moves needed to create the fully loaded target array and perform plane-by-plane assembly by changing the z position of the moving tweezers after the assembly in each plane is completed. Finally, we detect the final 3D configuration with another series of z -stack images. Figure 3c–h shows a gallery of fully loaded 3D atomic arrays arbitrarily arranged in space. We can create fully loaded 3D architectures with up to 72 atoms distributed in several layers with different degrees of complexity. The selected structures include simple cubic lattices (Fig. 3d ), bilayers with square or graphene-like 22 arrangements (Fig. 3c, e, g ), lattices with inherent geometrical frustration such as pyrochlore 23 (Fig. 3f ) and lattices with cylindrical symmetry (Fig. 3h ), which are suitable, for example, for studying quantum Hall physics with neutral atoms 24 . The arrays are not restricted to periodic arrangements, and the positions of the atoms can be controlled with high accuracy (<1 μm). The minimum interlayer separation that we can achieve depends on the type of underlying geometry. This is illustrated in Fig. 3e , which shows the full 3D assembly of a bilayer square lattice (with a layer separation of d z = 5 μm). There, sites corresponding to the second layer are displaced by half the lattice spacing. Because traps belonging to neighbouring layers do not have the same ( x , y ) coordinates, there is no limitation to the minimum interlayer distance that we can produce. 
In both images we can observe a defocused fluorescence at intersite positions due to atoms trapped in the neighbouring layer. By contrast, whenever traps are aligned along the z axis (for example, in Fig. 3d ), we set a minimum axial separation of about 17 μm to avoid any disturbance from the moving tweezers on the atoms while assembling neighbouring planes. However, for some trapping geometries this constraint can be overcome by applying a small global rotation of the 3D trap pattern around the x or y axis, so that neighbouring traps do not share the same ( x , y ) coordinates. The minimum interlayer spacing ultimately depends on the Rayleigh range of our trapping beam (about 5 μm) and could be further reduced, for example, by using an aspheric lens with higher numerical aperture. The range of interatomic distances that we can achieve (3– 40 μm) is suitable for implementing fast qubit gates 25 or simulating excitation transport 26 and quantum magnetism with Rydberg atoms, because interaction energies between Rydberg states at those distances are typically in the megahertz range. To illustrate this possibility, we performed a proof-of-principle experiment with two atoms belonging to the cylindrical lattice displayed in Fig. 3h . The atoms are separated by a total distance of R 12 = 20 μm ( d x = 10 μm, d z = 17 μm); see Fig. 4 . We first initialize the atoms in state |g〉 = |5S 1/2 , F = 2, m F = 2〉, where F and m F are the hyperfine and magnetic quantum numbers, respectively, by optical pumping in a 47-G magnetic field that defines the quantization axis and is aligned perpendicular to the internuclear axis. Then, the dipole trap is switched off and a two-photon Rydberg stimulated Raman adiabatic passage 27 excites both atoms to the |↑〉 = |60S 1/2 , m j = 1/2〉 Rydberg state, where m j is the spin projection along the magnetic field direction. 
We further use a resonant microwave field and local addressing 28 to transfer the second atom to the |↓〉 = |60P 1/2 , m j = −1/2〉 state, while the first atom remains in |↑〉. In these two Rydberg levels, the atoms are coupled by a direct dipole–dipole interaction with a strength of \(U={C}_{3}/{R}_{12}^{3}\) , and a calculated C 3 coefficient of C 3 = h ×1,357 MHz μm 3 , where h is the Planck constant. The prepared pair-state |↑↓〉 evolves under the XY-spin Hamiltonian \(H=\left({C}_{3}/{R}_{12}^{3}\right)\left({\sigma }_{1}^{+}\hspace{2.77626pt}{\sigma }_{2}^{-}+{\sigma }_{1}^{-}\hspace{2.77626pt}{\sigma }_{2}^{+}\right)\) (where \({\sigma }_{i}^{\pm }\) denotes the Pauli matrices acting on atom i = {1, 2}) and undergoes coherent spin-exchange oscillations between |↑↓〉 and |↓↑〉 as a function of the variable interaction time, T . Finally, a de-excitation sequence projects the population in |↑〉 to |g〉, but leaves the population in |↓〉 unaffected. After switching the dipole trap on again, atoms in |g〉 are recaptured, while atoms in the excited state |↓〉 are repelled by the trapping potential of the optical tweezers and appear as atom losses in the final fluorescence images. The outcome of this experiment is shown in Fig. 4 . We observe coherent ‘flip-flops’ between |↑↓〉 and |↓↑〉 with a measured frequency of 2 U / h = 333 ± 5 kHz. This value is consistent with the frequency 2 U / h = 339 kHz expected from our distance calibration ( R 12 = 20 ± 1 μm), which was performed by optical means. The finite contrast and the small damping of the oscillations arise from experimental imperfections (errors in state preparation and readout, residual atomic temperature), as reported in ref. 29 . This proof-of-principle experiment demonstrates the feasibility of performing quantum simulations using our defect-free 3D atomic arrays of single atoms. Excitations hopping under the influence of this Hamiltonian are equivalent to a system of hard-core bosons. 
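As a quick consistency check of the quoted numbers (a sketch, not the authors' analysis code), the expected flip-flop frequency follows directly from the interaction strength U = C3/R12³ with the values given in the text:

```python
# Values taken from the text: C3/h = 1,357 MHz μm^3, R12 = 20 μm
C3_over_h = 1357.0   # MHz * μm^3, calculated C3 coefficient divided by h
R12 = 20.0           # μm, interatomic distance

U_over_h = C3_over_h / R12**3          # interaction strength U/h, in MHz
flip_flop_kHz = 2 * U_over_h * 1000.0  # spin-exchange frequency 2U/h, in kHz
# gives 2U/h ≈ 339 kHz, consistent with the measured 333 ± 5 kHz
```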
The dipole–dipole interactions observed here can be further exploited to engineer Hamiltonians containing complex hopping amplitudes, which are suitable for the study of, for example, topological insulators 30 . Fig. 4: Spin-exchange dynamics between two Rydberg atoms in different z layers. Excitation-hopping oscillations between |↑↓〉 and |↓↑〉, observed in the populations P ↑↓ , P ↓↑ , driven by the dipole–dipole interaction between two Rydberg states, |↑〉 = |60S 1/2 , m j = 1/2〉 and |↓〉 = |60P 1/2 , m j = −1/2〉, at a distance of about 20 μm ( d x = 10 μm; d z = 17 μm). Error bars represent the standard error of the mean and are mostly smaller than the symbol size. Solid lines are damped sine fits to the data. The direction of the magnetic field, B , is indicated. Source data Besides the unique tunability of the geometries that it provides, our atom-assembling procedure is highly efficient: we reach typical filling fractions of 0.95. This measured efficiency is slightly dependent on the number of planes and is mainly limited by the lifetime of the atoms in the traps (about 10 s) and the duration of the sequence (we typically need 60 ms per plane to acquire the fluorescence images and about 50 ms per plane to perform atom sorting). The repetition rate of the experiment is about 1 Hz. The number of traps and the filling fraction of the arrays could be further increased with current technology: (i) the volume of the trap array and the maximum number of traps can be extended by increasing the field of view of the aspheric lens and the laser power; (ii) the lifetime of the atoms in the traps can realistically be increased by an order of magnitude; (iii) the repetition rate of the experiment can be increased by optimizing the atom assembler 11 , in particular by transferring atoms also between different planes 31 ; and (iv) the initial filling fraction of the arrays could reach values exceeding 0.8 by using tailored light-assisted collisions 32 , 33 . 
Therefore, the generation of three-dimensional structures containing several hundred atoms at unit filling seems within reach, opening up many new possibilities in quantum information processing and quantum simulation with neutral atoms. Data availability The data presented in the figures and that support the other findings of this study are available from the corresponding author on reasonable request.
A team of researchers at Centre National de la Recherche Scientifique (CNRS) in France has developed a technique for arranging cold atoms into useful 3-D arrayed structures. In their paper published in the journal Nature, the group describes their technique and the ways the structures could be useful. As work toward the development of a functional quantum computer continues, groups of scientists have worked on technologies required for the development of such a machine. One such requirement is the development of atomic structures—if atoms are to serve as qubits, they must be arranged in precise and useful ways that allow for interactions between one another. Most envision such arrangements to consist of 3-D arrayed structures. In this new effort, the researchers report on a technique they have developed to build 3-D atomic structures in arrayed shapes likely to be needed for quantum computer applications. The technique involves building microtraps using spatially modulated light. Such traps and other instruments use the energy in light to move single neutral atoms around in desired ways and then to hold them in place. To build a desired structure, the group loaded a small cloud of rubidium atoms into the trap array, filling it only about halfway. Doing so situated the atoms in random spots inside the trap. They then activated acousto-optical deflectors, which use sound and light as tweezers, to move the atoms in the trap in desired ways. After that, they used the tweezers to grab single atoms outside of the trap and place them into desired spots inside the trap. The end result was a 3-D structure in a desired shape. The researchers note that their technique allows for creating 3-D structures in a variety of shapes, all of which are precisely ordered. Notably, the results are free of defects because each atom is placed individually into the structure. 
To prove the effectiveness of their technique, the researchers bathed a structure they had built with light and studied the result with a CCD camera—it was able to highlight the fluorescence of the rubidium atoms showing their locations within the microtrap.
10.1038/s41586-018-0450-2
Space
Astronomers observe the magnetic field of the remains of supernova 1987A
Detection Of Linear Polarization In The Radio Remnant Of Supernova 1987a: arxiv.org/pdf/1806.04741.pdf Journal information: Astrophysical Journal
https://arxiv.org/pdf/1806.04741.pdf
https://phys.org/news/2018-06-astronomers-magnetic-field-supernova-1987a.html
Abstract The near-Earth asteroid (3200) Phaethon is the parent body of the Geminid meteor stream. Phaethon is also an active asteroid with a very blue spectrum. We conducted polarimetric observations of this asteroid over a wide range of solar phase angles α during its close approach to the Earth in autumn 2016. Our observation revealed that Phaethon exhibits extremely large linear polarization: P = 50.0 ± 1.1% at α = 106.5°, and its maximum is even larger. The strong polarization implies that Phaethon’s geometric albedo is lower than the current estimate obtained through radiometric observation. This possibility stems from the potential uncertainty in Phaethon’s absolute magnitude. An alternative possibility is that relatively large grains (~300 μm in diameter, presumably due to extensive heating near its perihelion) dominate this asteroid’s surface. In addition, the asteroid’s surface porosity, if it is substantially large, can also be an effective cause of this polarization. Introduction (3200) Phaethon is a well-studied near-Earth asteroid. Ever since its discovery by a survey using the Infrared Astronomical Satellite (IRAS) in 1983 1 , this asteroid has exhibited several interesting characteristics of small solar system bodies. Phaethon is an Apollo-type near-Earth asteroid that has large inclination \((i \sim 22^\circ )\) , large eccentricity \((e \sim 0.89)\) , and small perihelion distance \((q \sim 0.14{\kern 1pt} {\mathrm{au}})\) . This asteroid is also recognized as the parent body of the Geminid meteor stream due to the orbital similarities they share 2 , 3 , but no cometary activities such as coma have been detected 4 , 5 unlike in the parent bodies of other meteor streams. On the other hand, this asteroid exhibits weak but certain dust ejections near its perihelion passages 6 , 7 , which is the reason why it is now regarded as an active asteroid 8 . A feature that makes Phaethon intriguing is its very blue spectrum 9 , 10 . 
This asteroid’s spectrum is categorized into B-type in the SMASS II (Bus) classification 11 and F-type in the Tholen classification 3 , characterized by a negative slope over 0.5–0.8 μm without any diagnostic absorption bands in the visible to near-infrared wavelengths 12 . Phaethon’s blue spectrum is also similar to that of the Pallas family asteroids, particularly in the near-infrared wavelengths 9 , 10 . This is one piece of evidence for the connection between this asteroid and the Pallas family. It is suggested that thermally metamorphosed CI/CM chondrites 12 or CK4 chondrites 10 , 13 are the meteorite analogue of Phaethon due to their spectral similarity. Another interesting aspect of Phaethon is that this asteroid seems to possess at least one disruption fragment, (155140) 2005 UD, deduced from the two asteroids’ strong orbital similarity and their surface color affinities 14 , 15 , 16 . Multicolor photometry of (155140) 2005 UD indicates that the surface color of this object is inhomogeneous 16 . The surface heterogeneity of this fragment may be related to the spectral variability recognized on Phaethon’s surface 12 , serving as evidence of the past breakup event that split them. The curious surface property of this asteroid, together with the existence of a fragment, is an important clue for understanding the dynamical and thermal evolution of near-Earth asteroids in this orbital category, and not just of Phaethon, whose dynamical lifetime is generally as short as a few million years 17 , 18 . However, many unknowns and uncertainties remain to be cleared up. For example, we still have little understanding of what microscopic structure or physical process yields such a very blue spectrum for Phaethon.
The details of the mechanism that causes the sporadic dust ejections from Phaethon, and the mechanism that made the fragment split from the parent body, are not well understood either, although it is suggested that strong solar heating is involved 19 . We can solve several of the above-stated enigmas of Phaethon (and those of the B-type asteroids collectively) by investigating their surface polarimetric properties. Polarimetric studies of airless bodies are generally useful for understanding their surface physical properties, particularly geometric albedo and grain size. For example, we know that geometric albedo and polarization degree of the small solar system bodies have a strong correlation 20 . Also, the maximum values of the linear polarization degree ( P max ) and the solar phase angle ( α ) where P max happens are correlated with grain size 21 , 22 . While spectroscopic observation measures the reflected spectrum from the surface of the object, its result depends not only on the surface texture but also chemical composition of the surface material. In general it is not easy to decouple these combined effects just from spectroscopic observation. Therefore, polarimetric observation that directly measures the status of light scattering, which strongly depends on the surface texture, is complementary to and sometimes superior to spectroscopic observation in terms of studies of the small solar system bodies. So far, most polarimetric studies of the small solar system bodies have been performed at small to moderate solar phase angles such as α < 35° 23 . Polarimetric measurement of the small bodies at a large solar phase angle is technically difficult, not only because the observational opportunities are limited to some near-Earth asteroids that get inside Earth’s orbits, but also because the observation should be conducted at small solar elongation angles. 
However, as polarimetric measurement of the small bodies over a wide range of α better reveals their surface material property, its implementation is always desirable whenever it is feasible. In this paper, we report the result of our series of polarimetric observations of Phaethon over a wide range of solar phase angles. Our observation revealed that this asteroid exhibits a very strong linear polarization (>50%). This implies that Phaethon’s geometric albedo is lower than the current estimate. An alternative is that relatively large grains (~300 μm in diameter) dominate this asteroid’s surface. Results Dependence of polarization on solar phase angle We carried out our series of polarimetric observations using the 1.6-m Pirka telescope at the Nayoro Observatory in Hokkaido, Japan, over six nights from September to November 2016. We provide details of the observation and analysis procedure in Methods. As an initial result, we made a plot of Phaethon’s polarization degree P r as a function of solar phase angle α (Fig. 1 ). For reference, we show measurement results in past studies of some other solar system objects in this figure: Mercury, a Q-type near-Earth asteroid (1566) Icarus, and several others that are renowned for their large P r . Fig. 1 Dependence of the linear polarization degree P r of Phaethon (the blue-filled circles) on solar phase angle ( α ) in the R C -band centered at the wavelength of 0.641 μm. For comparison, P r values of some other solar system objects are also shown: Phaethon measured by other authors (0.5474 μm) 47 . Mercury 33 , 62 , 63 , (1566) Icarus 31 , Phobos 64 , Deimos (0.57 μm) 64 , Deimos (0.43 μm) 65 , (2100) Ra-Shalom 66 , 2P/Encke in 0.642 μm (“observed”) 67 , and 209P/Linear (nucleus) 68 . Note that for Phaethon, Icarus, and Mercury, we fit the observational data by a regular, analytic function called the Lumme and Muinonen function 31 , 69 . 
Error bars of Phaethon’s P r represent the sum of random errors and systematic errors that our polarimetric measurement contains. They are calculated in a manner described in Methods (see subsection Estimate of errors). Error bars of other objects’ P r are adopted from the literature. The data for Phaethon are available from Table 1 , and the data for Icarus are available from Supplementary Table 1 . Dependence of Phaethon’s linear polarization degree on its rotation phase is presented as Supplementary Figure 1 , together with the actual data values tabulated in Supplementary Table 4 Full size image In general, linear polarization degree of the small solar system bodies has a local maximum ( P max ) at large α and a local minimum ( P min ) at small α 24 , although the values of P max , P min , and α where the extremums happen differ from object to object. The P r ( α ) curve for Mercury in Fig. 1 typically exemplifies what we just described—we clearly recognize P min at α ~ 10° and P max at α ~ 90°. For Icarus, although there is no observational point that tells us the location of P min , we see P max at α ~ 120°. We may read P min from the plots for Deimos (0.43 μm). Compared with the other objects plotted on Fig. 1 , we easily see how strong Phaethon’s polarization degree is. Its P r exhibits a steep increase toward large α . Here, we must be particularly aware that the observed largest value of P r (~50% at α = 106.5°) is not equivalent to Phaethon’s P max . As is expected from Fig. 1 , P max of this asteroid is probably located beyond our observational coverage, at much larger α than 106.5°. Consequently, we can say that P max of Phaethon is substantially larger than 50%. Albedo and maximum polarization degree As for small solar system bodies, P max has a correlation inverse to the geometric albedo p V in general: Material that has large p V tends to have small P max . 
This is so-called Umow’s law 25 which is caused by the general fact that multiple scattering of light is more effective on surfaces with high albedo than on surfaces with low albedo. This makes the polarization degree of the surface with higher albedo weaker, and makes that with lower albedo stronger 26 . Using Phaethon’s currently estimated geometric albedo ( p V = 0.122 ± 0.008 27 ), we made another plot that shows the dependence of P max on albedo (Fig. 2 ). In this figure, we again gathered data from several bodies and materials in addition to Phaethon: Mercury, (1566) Icarus, comets 2P and 209P as in Fig. 1 , and various terrestrial, meteoritic, and lunar samples obtained from laboratory observations 28 , 29 . We included 2P and 209P in this figure because their P r values plotted in Fig. 1 can be regarded as being close to P max . We excluded Phobos, Deimos, and (2100) Ra-Shalom because it is not certain whether their P r values in Fig. 1 are close to P max . Fig. 2 Relationship between A (the geometric albedo measured at α = 5°) and P max for Phaethon and other objects. In this figure, Phaethon is represented by three kinds of blue squares: The blue-filled square that uses the current albedo estimate in ref. 27 and its error bars, the blue open square with a horizontal bar that uses the albedo estimate brought by a recent radar observation 36 , and the blue open square with a vertical bar that uses an albedo estimate calculated from the absolute magnitude value of H = 14.6 presented in ref. 38 . See Discussion for the latter two estimates. Note that the vertical value for Phaethon is not actually P max , but the observed largest value of P r during our observation (Fig. 1 ). Therefore, we have added an upper arrow to the blue-filled square for showing that Phaethon’s P max is larger than this value. Data for the terrestrial, meteoritic, and lunar samples are all taken from the tables described on ref. 
29 except for those designated as “Lunar fines Ref 28 ” adopted from ref. 28 ’s Fig. 2. The lunar fine data adopted from ref. 29 ’s Table 1 are designated as “Lunar fines Ref 29 .” Note that in the legend we use the abbreviation “Terr rock” for terrestrial rock. The data for Mercury is from ref. 33 , and that for (1566) Icarus is from ref. 31 . The albedo of 2P is adopted from ref. 70 , and that of 209P is from ref. 68 . Note also that wavelengths for each of the measurements differ from sample to sample. Error bars seen on symbols of Icarus, Mercury, 209P, and 2P are adopted from the literature. The data values for Phaethon and Icarus plotted in this figure are available from Supplementary Table 2 Full size image Note that the albedo used in the horizontal axis of Fig. 2 is not p V itself, but the geometric albedo measured at α = 5°. Hereafter, we call this A . This conversion is done to avoid the so-called opposition effect 30 that is extraordinarily eminent around α ~ 0°. To convert Phaethon’s p V into A , we adopted the ratio of reflectance intensity \(( \mathscr J )\) at α = 0.3° and 5° for three asteroids ((24) Themis, (47) Aglaja, and (59) Elpis) presented in ref. 30 . They are all B-type in the SMASS II classification, as is Phaethon. The arithmetic average of their intensity ratios \(\left\langle {\frac{{{\mathscr J}(0.3^\circ )}}{{{\mathscr J}(5^\circ )}}} \right\rangle\) turns out to be 1.31. Using this value, we carried out the conversion of Phaethon’s albedo from p V to A as \(A = \frac{{p_{{V}}}}{{1.31}} \sim 0.0931\) . A similar conversion is applied to (1566) Icarus 31 . When drawing Fig. 2 , we need to be aware that Phaethon's visual geometric albedo ( p V = 0.122 ± 0.008) reported in ref. 27 is defined in the V -band (centered at 0.545 μm), not in the R C -band (centered at 0.641 μm) where our observation was carried out. This wavelength difference can affect albedo. 
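The opposition-effect conversion from p V to A described above is a one-line computation; the sketch below (the helper name is ours, the 1.31 factor is the average intensity ratio of the three B-type asteroids quoted in the text) checks the quoted value:

```python
# Sketch of the p_V -> A conversion described in the text. The function
# name is ours; 1.31 is the average ratio J(0.3 deg)/J(5 deg) of three
# B-type asteroids, used to remove the opposition effect.

def pv_to_A(p_v, opposition_ratio=1.31):
    """Convert geometric albedo p_V (alpha ~ 0 deg) to A (alpha = 5 deg)."""
    return p_v / opposition_ratio

A_v = pv_to_A(0.122)   # Phaethon, V band: ~0.0931, matching the text
print(f"A (V band) = {A_v:.4f}")
```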
For deriving Phaethon's geometric albedo in the R C -band from the reported p V in the V -band, we adopted the averaged spectral intensity difference of Phaethon between 0.641 μm and 0.545 μm measured in ref. 32 . This conversion yields an estimate of Phaethon's albedo in the R C -band as A = 0.0910, which is slightly lower than that in the V -band ( A = 0.0931). As for (1566) Icarus, we used the observed P max in the V -band and the A value at this wavelength 31 . As for Mercury, we adopted the values described in ref. 33 : A = 0.130 at α = 5° in 0.585 μm. As for the lunar fine data presented in ref. 28 's Fig. 2, there is no specific description of wavelength in the paper. Therefore, we assumed 0.6 μm ("orange light") depicted in a closely relevant paper by the same author 34 on the same subject. For the comets 2P and 209P, the measurement wavelengths are denoted in Fig. 1 . All the laboratory samples described in ref. 29 are measured in the wavelength of 0.58 μm. Figure 2 largely realizes Umow's law, representing the inverse correlation between geometric albedo and P max . As the figure shows, the albedo of Mercury and Icarus is moderate, and so is their P max . The albedo of comets 2P and 209P is very low, and their P max is large. On the other hand, Phaethon exhibits very large P max while its albedo is not as low as that of comets 2P and 209P. This is clearly anomalous, and it calls for a reasonable physical explanation. Discussion As we see in Figs. 1 and 2 , our polarimetric measurement showed that Phaethon possesses a very strong linear polarization on its surface. The straightforward application of Umow's law to this result tells us that Phaethon's geometric albedo can be lower than what is currently estimated. Phaethon's geometric albedo, as well as that of many other asteroids, is estimated through the combination of radiometric observation in infrared wavelengths and photometric observation in visible wavelengths.
Accuracy of an asteroid’s albedo estimate in this way largely depends on how accurately its absolute magnitude ( H ) is determined. And, accuracy of the absolute magnitude determination depends on the accuracy of phase curve function determined by photometric observation in visible wavelengths at solar phase angle α from small to large values. However, ground-based observation of Phaethon at very small solar phase angle is intrinsically difficult due to the relative orbital configuration between this asteroid and the Earth. Therefore, Phaethon’s absolute magnitude determination is based on the phase curve observations whose minimum solar phase angle is no smaller than 12° 27 , 35 . This means that the influence of the opposition effect that can happen at very small α has not been directly measured. Consequently, inclusion of uncertainty into Phaethon’s absolute magnitude is inevitable. Hence, Phaethon’s albedo estimate can contain a relatively large uncertainty as long as it comes through radiometric measurement. Recently, Phaethon’s effective diameter ( D ) was determined as D = 5.7 km through a radar observation at its close approach to the Earth in December 2017 36 . Measurement of size and shape of near-Earth asteroids through active radar observation is known to be reliable 37 , and we presume the estimated effective diameter is accurate. In ref. 36 , Phaethon’s geometric albedo is preliminarily revised as 0.10 by assuming the absolute magnitude H = 14.3. On the other hand, several different estimates of Phaethon’s absolute magnitude have been published. One of the faint estimates is H = 14.6 38 . If we apply the combination of D = 5.7 km and H = 14.6 to the common formula 39 between asteroid’s diameter D , geometric albedo p V , and absolute magnitude H $${\mathrm{log}}_{10}{\kern 1pt} D = 0.5(6.259 - {\mathrm{log}}_{10}{\kern 1pt} p_{V} - 0.4H),$$ (1) we obtain p V = 0.081 for Phaethon, a much smaller albedo than the currently accepted value. 
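Equation (1) can be checked numerically. A minimal sketch (the helper name is ours) that solves Eq. (1) for p V given the radar diameter D and an assumed absolute magnitude H:

```python
import math

# Eq. (1) solved for p_V: log10(p_V) = 6.259 - 0.4*H - 2*log10(D).
def geometric_albedo(D_km, H):
    """Geometric albedo p_V from diameter D (km) and absolute magnitude H."""
    return 10.0 ** (6.259 - 0.4 * H - 2.0 * math.log10(D_km))

print(round(geometric_albedo(5.7, 14.6), 3))  # 0.081, as quoted in the text
print(round(geometric_albedo(5.7, 14.3), 3))  # ~0.106; the text quotes ~0.10
```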
Using these albedo estimates, we placed two more symbols for Phaethon in Fig. 2 . As for the uncertainty of absolute magnitude determination based on phase curves, we know that it can reach 0.1 magnitude even for asteroids whose phase curve is accurately measured down to very small solar phase angle 30 . Recalling that Phaethon’s phase curve is measured only down to α ~ 12°, it is not hard to imagine that its absolute magnitude estimate contains uncertainties substantially larger than 0.1 magnitude. This supports the prospect that Phaethon’s geometric albedo is lower than the currently accepted estimate. The lower albedo can cause the strong P max that our polarimetric measurement revealed. If the current albedo estimate of Phaethon is accurate and not especially low, what else could cause its strong polarization? In this case we would direct our attention to the fact that in Fig. 2 , the terrestrial samples with larger grains (50–340 μm) yield stronger P max than those with smaller grains (<50 μm) and lunar fines. In other words, we would suspect that Phaethon’s strong polarization has something to do with its surface grain size. A few empirical formulas relating the grain size d of regolith-like material to the converted albedo A and P max are known from various laboratory measurements. When expressing d in μm, P max in %, and A at the wavelength of 0.65 μm in %, one of the formulas 40 is expressed as $$d = 0.03{\kern 1pt} {\mathrm{exp}}\left[ {2.9\left( {{\mathrm{log}}_{10}{\kern 1pt} A + 0.845{\kern 1pt} {\mathrm{log}}_{10}{\kern 1pt} 10P_{{\mathrm{max}}}} \right)} \right].$$ (2) Equation ( 2 ) tells us that the larger the grain is, the stronger the polarization gets, as long as albedo remains constant. When larger grains dominate an object’s surface, there would be fewer grains down to unit optical depth.
Consequently, multiple scattering of incident light would happen less often, which leads to stronger polarization. Substituting the actual values of Phaethon ( A = 9.1% derived from the current albedo estimate, and P max = 50% from the observed largest P r ) into Eq. ( 2 ), we get d ~ 360 μm. Although uncertainty is unavoidable as to how appropriate it is to apply Eq. ( 2 ) obtained from laboratory measurements of terrestrial and lunar samples to the surface state of a small body such as Phaethon, it is worth noting that the estimated value ( d ~ 360 μm) belongs to the largest category among the laboratory samples. Incidentally, let us note that the newly estimated albedo value (0.10) through the radar observation 36 yields d ~ 280 μm. The hypothesis of the dominance of larger grains on Phaethon’s surface has an affinity as well as an inconsistency with observational facts. As for the affinity, let us remember that Phaethon is famous for its very blue spectrum 9 , 10 , 12 . And we know from experiments that the spectra of meteoritic and asteroidal materials tend to get bluer when we increase their effective grain size 13 , 41 , 42 . A possible mechanism that produces the large grains is sintering. Phaethon’s surface can be heated up to 1000 K during its perihelion passage 43 , which is as high as the metamorphic temperature of some types of carbonaceous chondrites 44 , 45 . Such an extreme heating can cause sintering on this asteroid’s surface, making the grain coarsening happen 46 . Let us mention the inconsistency. While our polarimetric measurement result obtained at large solar phase angle may imply the possible dominance of relatively large grains on Phaethon’s surface, polarimetric measurements in the negative branch obtained at smaller phase angle presented in past studies 23 , 47 suggest that this asteroid behaves rather typically as B-type with a moderate inversion phase angle (which divides the negative and positive polarimetric branches). 
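The grain-size estimates quoted above follow directly from Eq. (2). In the sketch below (function and variable names ours), the radar-based albedo is converted to A using both the 1.31 opposition factor and the V-to-R C spectral ratio implied by the quoted A values (0.0910/0.0931); this conversion chain is our reconstruction of the text's numbers:

```python
import math

# Eq. (2): grain size d (micrometres) from the converted albedo A
# (in percent, at 0.65 um) and the maximum polarization P_max (percent).
def grain_size_um(A_percent, p_max_percent):
    return 0.03 * math.exp(2.9 * (math.log10(A_percent)
                                  + 0.845 * math.log10(10.0 * p_max_percent)))

A_now = 9.1                                        # % (current albedo estimate)
A_radar = 0.10 / 1.31 * (0.0910 / 0.0931) * 100.0  # % (radar albedo, our conversion)
print(round(grain_size_um(A_now, 50.0)))    # 361, i.e. ~360 um as in the text
print(round(grain_size_um(A_radar, 50.0)))  # 281, i.e. ~280 um as in the text
```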
From this viewpoint, Phaethon’s surface texture does not seem quite dominated by large grains. Currently, this discrepancy cannot be solved by our observation result alone, and it should be further investigated in future. Let us add yet another possibility that can enhance Phaethon’s polarization degree: Large surface porosity. It has been numerically confirmed that large porosity significantly increases polarization of material surface 48 , 49 . We also know that this trend holds true regardless of wavelength of incident light 50 , 51 , although its theoretical understanding is not yet completely established, particularly when the wavelength is shorter than the characteristic size of light scatterers on the object surface. In terms of geometry, surface porosity can be larger in general when the surface grain shape is rough or irregular regardless of the grains’ average size. Although the irregularity of surface grains itself may not significantly affect the P max –albedo relationship 52 , it is principally possible that the polarization degree of an object could be enhanced if the irregularity of its surface particles substantially raises porosity. Phaethon’s surface porosity has not been directly measured, and is not yet well constrained. Hence, we cannot rule out the possibility that larger surface porosity of Phaethon contributes to its large P max . Note that the above-mentioned potential causes of Phaethon’s strong polarization (lower geometric albedo, prevalence of larger grains, and large surface porosity) are not mutually exclusive, and some of their combinations can be effective. Whichever of these processes (or their combinations) is causing the strong polarization of this asteroid, various ways of investigating this would serve as an important means of characterizing physical properties of Phaethon and the small solar system bodies in this category. 
To disentangle the combined physical processes and surface properties that involve both texture and chemical composition of small bodies of this kind, we need polarimetric observations both at small and large solar phase angles together with spectroscopic observation over a wide wavelength range as a complementary tool. A recent infrared observation revealing that Phaethon has no absorption features at 3 μm 53 is one such step. Partly to obtain direct answers to the questions listed above, a space mission to Phaethon named DESTINY + is planned and has now been approved by JAXA, and is awaiting its launch in 2022 54 , 55 . The spacecraft is supposed to make a flyby of this asteroid at a distance of 500 km or less, and is expected to provide us with high spatial resolution images containing significantly detailed information about the surface state of this asteroid. The mission outcome will unveil the nature of Phaethon’s enigmatic characteristics that our polarimetric observation revealed. Methods Observations Nayoro Observatory, which houses the Pirka telescope, is located at a middle latitude (+44°22′25.104″N, 142°28′58.008″E, 161 m above sea level). Fortunately, several conditions helped us overcome the aforementioned technical difficulties of polarimetric observation of our target at large α : the observatory’s location at a relatively high latitude, the season of observation (from late summer to autumn) when the Earth’s North Pole is still inclined to the Sun, the location and direction of Phaethon’s motion that was high above the Earth’s northern hemisphere at that time, and the telescope capable of functioning safely even at very low elevation angles down to 5° without any obstacles along the line of sight. These conditions made the observation of this asteroid at large solar phase angle possible with sufficient signal-to-noise ratio ( \(\gtrsim\) 100). Table 1 shows the details of our observation.
We used the Multi-Spectral Imager (MSI) 56 installed at the f /12 Cassegrain focus of the Pirka telescope. MSI comprises several polarimetric devices (Wollaston prism, half-wave plate, and polarization mask), and it produces polarimetric images that cover a 3.3′ × 0.7′ field of view. We employed the standard Johnson–Cousins R C -band filter for this study. This is mainly because the measurement accuracy is better in the R C -band than in the V -band, particularly at large airmass. Individual exposure times are set to 60–180 seconds per image, depending on the weather conditions and apparent magnitude of the asteroid. After each exposure, we routinely rotate the half-wave plate in sequence from 0° to 45.0°, from 45.0° to 22.5°, and from 22.5° to 67.5° to complete a set of polarimetric data. Data reduction We analyzed the raw data in a standard manner of astronomical image processing: All object frames are bias-subtracted and flat-fielded. Cosmic rays are removed using the L.A. Cosmic tool 57 . We extracted individual source fluxes from ordinary and extraordinary images using the aperture photometry technique implemented in IRAF. We set the aperture size for the photometry to 2.5 times the full-width at half-maximum (FWHM). To derive the polarization degree and position angle of objects, we followed the technique implemented in ref. 31 . Specifically, we applied the following corrections: correction of polarization efficiency, of instrumental polarization, and of instrumental offset in the position angle. In what follows we use the notation \(q{\prime}_{{\mathrm{pol}}}\) and \(u{\prime}_{{\mathrm{pol}}}\) for the normalized Stokes parameters instead of the conventional notation 24 Q , U , and I (see Eq. ( 4 )). We derive these parameters using the ordinary part \(({\mathscr I}_{\mathrm{o}})\) and the extraordinary part \(({\mathscr I}_{\mathrm{e}})\) in the extracted (observed) fluxes on the images obtained at the half-wave plate angle Ψ.
More specifically, we first define quantities R q and R u as follows: $$R_{\mathrm{q}} = \sqrt { {\frac{{{\mathscr I}_{\mathrm{e}}(0)}}{{{\mathscr I}_{\mathrm{o}}(0)}}} {/} {\frac{{{\mathscr I}_{\mathrm{e}}(45^\circ )}}{{{\mathscr I}_{\mathrm{o}}(45^\circ )}}}} ,\quad R_{\mathrm{u}} = \sqrt {\frac{{{\mathscr I}_{\mathrm{e}}(22.5^\circ)}}{{{\mathscr I}_{\mathrm{o}}(22.5^\circ)}} {/} \frac{{{\mathscr I}_{\mathrm{e}}(67.5^\circ)}}{{{\mathscr I}_{\mathrm{o}}(67.5^\circ)}}} .$$ (3) Then, the normalized Stokes parameters \(q{\prime}_{{\mathrm{pol}}}\) and \(u{\prime}_{{\mathrm{pol}}}\) are defined as: $$q{\prime}_{{\mathrm{pol}}} \equiv \frac{Q}{I} = \frac{1}{{p_{{\mathrm{eff}}}}}\frac{{R_{\mathrm{q}} - 1}}{{R_{\mathrm{q}} + 1}},\quad u{\prime}_{{\mathrm{pol}}} \equiv \frac{U}{I} = \frac{1}{{p_{{\mathrm{eff}}}}}\frac{{R_{\mathrm{u}} - 1}}{{R_{\mathrm{u}} + 1}},$$ (4) where p eff denotes the polarization efficiency of the total instrument system. We examined p eff on 1 October 2016 (during our observation runs of Phaethon) by taking dome flat images through a pinhole, and then through a Polaroid-like linear polarizer. This combination produces artificial stars with P = 99.98 ± 0.01% in the R C -band. By measuring the polarization of these artificial stars, we determined p eff = 0.9948 ± 0.0003 in the R C -band. The accumulation of various polarimetric observations that have been conducted at Nayoro Observatory tells us that the instrumental polarization of Pirka/MSI depends on the instrument rotator angle. We quantify this effect by inspecting the components of the Stokes parameters originating in the instrumental polarization, q inst and u inst . For determining their values, we carried out an observation of an unpolarized star HD 212311 58 on 1 October 2016. The resulting values in the R C -band are q inst = 0.705 ± 0.017% and u inst = 0.315 ± 0.016%, respectively. 
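Equations (3) and (4) can be sketched as below. The function name and flux dictionaries are ours; the default p eff is the value measured above, and the flux values in the usage example are synthetic:

```python
import math

# Sketch of Eqs. (3)-(4): normalized Stokes parameters q', u' from the
# extraordinary (Ie) and ordinary (Io) fluxes at the four half-wave plate
# angles, corrected by the polarization efficiency p_eff.
def stokes_qu(Ie, Io, p_eff=0.9948):
    """Ie, Io: dicts mapping half-wave plate angle (deg) to measured flux."""
    R_q = math.sqrt((Ie[0.0] / Io[0.0]) / (Ie[45.0] / Io[45.0]))
    R_u = math.sqrt((Ie[22.5] / Io[22.5]) / (Ie[67.5] / Io[67.5]))
    q = (R_q - 1.0) / (R_q + 1.0) / p_eff
    u = (R_u - 1.0) / (R_u + 1.0) / p_eff
    return q, u

# Synthetic example: a source polarized purely along the q axis.
Ie = {0.0: 150.0, 45.0: 100.0, 22.5: 100.0, 67.5: 100.0}
Io = {0.0: 100.0, 45.0: 150.0, 22.5: 100.0, 67.5: 100.0}
q, u = stokes_qu(Ie, Io)
print(round(q, 4), round(u, 4))  # q ~ 0.201 for this synthetic case, u = 0.0
```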
Also, we define θ rot1 as the average instrument rotator angle during the exposures with Ψ = 0° and Ψ = 45.0°, and θ rot2 as the average instrument rotator angle during the exposures with Ψ = 22.5° and Ψ = 67.5°. Then, the effect of instrumental polarization is corrected by the following conversion from \((q\prime_{{\mathrm{pol}}} ,u{\prime}_{{\mathrm{pol}}} )\) to \((q_{{\mathrm{pol}}}'' ,u_{{\mathrm{pol}}}'' )\) : $$\left( {\begin{array}{*{20}{r}} \hfill {q_{{\mathrm{pol}}}'' }\\ \hfill {u_{{\mathrm{pol}}}'' }\end{array}} \right) = \left( {\begin{array}{*{20}{r}} \hfill {q_{{\mathrm{pol}}}' }\\ \hfill {u_{{\mathrm{pol}}}' }\end{array}} \right) - \left( {\begin{array}{*{20}{r}} \hfill {{\mathrm{cos}}{\kern 1pt} 2\theta _{{\mathrm{rot}}1}} & \hfill { - {\mathrm{sin}}{\kern 1pt} 2\theta _{{\mathrm{rot}}1}}\\ \hfill {{\mathrm{sin}}{\kern 1pt} 2\theta _{{\mathrm{rot}}2}} & \hfill {{\mathrm{cos}}{\kern 1pt} 2\theta _{{\mathrm{rot}}2}}\end{array}} \right)\left( {\begin{array}{*{20}{r}} \hfill {q_{{\mathrm{inst}}}}\\ \hfill {u_{{\mathrm{inst}}}}\end{array}} \right).$$ (5) Next, we correct instrumental offset in the position angle. For this purpose, we first determined the instrumental position angle offset θ off through an observation of three strongly polarized stars whose position angles are well known 58 (HD 204827, HD 154445, and HD 155197). The observation was carried out on 1 October 2016 in the R C -band, and it yields θ off = 3.94 ± 0.31°.
Then, using a parameter θ ref that specifies the position angle of the instrument (which is usually set by observers, and stored as the parameter INST-PA in the FITS header), we introduce an angle \(\theta _{{\mathrm{off}}}'\) as $$\theta _{{\mathrm{off}}}' = \theta _{{\mathrm{off}}} - \theta _{{\mathrm{ref}}}.$$ (6) Now we implement the correction using the following conversion formula from \((q_{{\mathrm{pol}}}'' ,u_{{\mathrm{pol}}}'' )\) into another set of parameters \((q_{{\mathrm{pol}}}''',u_{{\mathrm{pol}}}''')\) as $$\left( {\begin{array}{*{20}{r}} \hfill {q_{{\mathrm{pol}}}'''}\\ \hfill {u_{{\mathrm{pol}}}'''}\end{array}} \right) = \left( {\begin{array}{*{20}{r}} \hfill {{\mathrm{cos}}{\kern 1pt} 2\theta _{{\mathrm{off}}}' } & \hfill {{\mathrm{sin}}{\kern 1pt} 2\theta _{{\mathrm{off}}}' }\\ \hfill { - {\mathrm{sin}}{\kern 1pt} 2\theta _{{\mathrm{off}}}' } & \hfill {{\mathrm{cos}}{\kern 1pt} 2\theta _{{\mathrm{off}}}' }\end{array}} \right)\left( {\begin{array}{*{20}{r}} \hfill {q_{{\mathrm{pol}}}''}\\ \hfill {u_{{\mathrm{pol}}}'' }\end{array}} \right).$$ (7) Using the corrected, normalized Stokes parameters \(q_{{\mathrm{pol}}}'''\) and \(u_{{\mathrm{pol}}}'''\) , we finally obtain the linear polarization degree P as $$P = \sqrt {{{q_{{\mathrm{pol}}}'''} \hskip -4pt ^{ 2}} + {{u_{{\mathrm{pol}}}'''}\hskip -4pt ^{2}}} ,$$ (8) and the position angle of polarization θ P as $$\theta _{\mathrm{P}} = \frac{1}{2}{\kern 1pt} {\mathrm{tan}}^{ - 1}\frac{{u_{{\mathrm{pol}}}'''}}{{q_{{\mathrm{pol}}}'''}}.$$ (9) The linear polarization degree of an object with respect to the scattering plane (the plane where the Sun, the object, and the observer exist together) is expressed as $$P_{\mathrm{r}} = P{\kern 1pt} {\mathrm{cos}}{\kern 1pt} 2\theta _{\mathrm{r}},$$ (10) where θ r is given by $$\theta _{\mathrm{r}} = \theta _{\mathrm{P}} - \left( {\phi \pm 90^\circ } \right).$$ (11) ϕ is the angle that determines the direction of the scattering plane on sky, and the sign in 
the bracket is chosen to guarantee 0 ≤ ϕ ± 90° < 180° 59 . We did not obtain the above correction parameters ( p eff , q inst , u inst , θ off ) every night; however, their values did not significantly change over the two-month period of our observation. We also confirmed that the difference of their values between May 2015 31 and September–November 2016 (when we carried out this work) is smaller than the measurement error of these parameters. The reason for this parameter stability is probably that the imager (MSI) is permanently installed on the telescope and has not been modified in any way since its installation. Estimate of errors To estimate the measurement errors contained in P r in Eq. ( 10 ) and θ r in Eq. ( 11 ), we went through the following error estimate procedure 31 . Here we divide the errors into two classes: random errors and systematic errors. Let us first consider random errors. We denote the normalized Stokes parameters \(q_{{\mathrm{pol}}}'''\) and \(u_{{\mathrm{pol}}}'''\) in Eq. ( 7 ) obtained from the i -th exposure set as \(q_{{\mathrm{pol}},i}'''\) and \(u_{{\mathrm{pol}},i}'''\) ( i = 1… n ). Sources of random errors in \(q_{{\mathrm{pol}},i}'''\) and \(u_{{\mathrm{pol}},i}'''\) include shot noise from the background sky, shot noise from the asteroid itself, readout noise from the CCD, and so on. These are included in the measurement of the 4 + 4 observed fluxes ( \({\mathscr I}_{\mathrm{e}}(0)\) , \({\mathscr I}_{\mathrm{o}}(0)\) , \({\mathscr I}_{\mathrm{e}}(45^\circ )\) , \({\mathscr I}_{\mathrm{o}}(45^\circ )\) , \({\mathscr I}_{\mathrm{e}}(22.5^\circ)\) , \({\mathscr I}_{\mathrm{o}}(22.5^\circ)\) , \({\mathscr I}_{\mathrm{e}}(67.5^\circ)\) , \({\mathscr I}_{\mathrm{o}}(67.5^\circ)\) ) appearing in Eq. ( 3 ), and are estimated by the phot function implemented in IRAF.
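The correction chain of Eqs. (5)-(11) can be condensed into a single sketch. All names are ours, all angles are in degrees, and the ± sign choice of Eq. (11) is implemented as a simple conditional; this is an illustrative reconstruction, not the authors' pipeline:

```python
import math

# Sketch of Eqs. (5)-(11): instrumental-polarization correction, position-
# angle offset rotation, and the final P, theta_P and P_r.
def reduce_polarimetry(q1, u1, th_rot1, th_rot2, q_inst, u_inst, th_off, phi):
    c1, s1 = math.cos(math.radians(2*th_rot1)), math.sin(math.radians(2*th_rot1))
    c2, s2 = math.cos(math.radians(2*th_rot2)), math.sin(math.radians(2*th_rot2))
    # Eq. (5): subtract the rotator-angle-dependent instrumental polarization
    q2 = q1 - (c1 * q_inst - s1 * u_inst)
    u2 = u1 - (s2 * q_inst + c2 * u_inst)
    # Eq. (7): rotate by the position-angle offset theta_off'
    co, so = math.cos(math.radians(2*th_off)), math.sin(math.radians(2*th_off))
    q3 = co * q2 + so * u2
    u3 = -so * q2 + co * u2
    # Eqs. (8)-(9): polarization degree and position angle
    P = math.hypot(q3, u3)
    theta_P = 0.5 * math.degrees(math.atan2(u3, q3))
    # Eqs. (10)-(11): refer the polarization to the scattering plane
    ref = phi + 90.0 if 0.0 <= phi + 90.0 < 180.0 else phi - 90.0
    theta_r = theta_P - ref
    P_r = P * math.cos(math.radians(2 * theta_r))
    return P, theta_P, P_r

# With no instrumental effects, (q, u) = (0.3, 0.4) gives P = 0.5; phi is
# chosen here so that the scattering-plane reference angle equals theta_P.
P, theta_P, P_r = reduce_polarimetry(0.3, 0.4, 0, 0, 0.0, 0.0, 0.0, -63.435)
print(round(P, 3), round(theta_P, 2), round(P_r, 3))
```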
Let us presume that each of \(q_{\mathrm{pol},i}'''\) and \(u_{\mathrm{pol},i}'''\) is a function of four variables: \({\mathscr I}_{\mathrm{e}}(0)\) , \({\mathscr I}_{\mathrm{o}}(0)\) , \({\mathscr I}_{\mathrm{e}}(45^\circ)\) , \({\mathscr I}_{\mathrm{o}}(45^\circ)\) for \(q_{\mathrm{pol},i}'''\) , and \({\mathscr I}_{\mathrm{e}}(22.5^\circ)\) , \({\mathscr I}_{\mathrm{o}}(22.5^\circ)\) , \({\mathscr I}_{\mathrm{e}}(67.5^\circ)\) , \({\mathscr I}_{\mathrm{o}}(67.5^\circ)\) for \(u_{\mathrm{pol},i}'''\) . We assume that no correlation exists between the errors of the four variables (fluxes). Then, using the common formula for error propagation 60 , we calculate the variance of the random errors propagated to each of \(q_{\mathrm{pol},i}'''\) and \(u_{\mathrm{pol},i}'''\), and denote these variances \(\sigma_{q_i'''}^2\) and \(\sigma_{u_i'''}^2\). We then derive the nightly averages of \(q_{\mathrm{pol},i}'''\) and \(u_{\mathrm{pol},i}'''\) through inverse-variance weighting as follows: $$\overline q_{\mathrm{pol}}''' = \sigma_{\overline q_{\mathrm{pol}}'''}^2 \sum_{i=1}^n \frac{q_{\mathrm{pol},i}'''}{\sigma_{q_i'''}^2}, \quad \overline u_{\mathrm{pol}}''' = \sigma_{\overline u_{\mathrm{pol}}'''}^2 \sum_{i=1}^n \frac{u_{\mathrm{pol},i}'''}{\sigma_{u_i'''}^2},$$ (12) where $$\sigma_{\overline q_{\mathrm{pol}}'''}^2 = \frac{1}{\sum_{i=1}^n \sigma_{q_i'''}^{-2}}, \quad \sigma_{\overline u_{\mathrm{pol}}'''}^2 = \frac{1}{\sum_{i=1}^n \sigma_{u_i'''}^{-2}}.$$ (13) The aggregated variances \(\sigma_{\overline q_{\mathrm{pol}}'''}^2\) and \(\sigma_{\overline u_{\mathrm{pol}}'''}^2\) in Eq. ( 13 ) can be regarded as the synthesized random errors of \(\overline q_{\mathrm{pol}}'''\) and \(\overline u_{\mathrm{pol}}'''\) in Eq. ( 12 ).
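The weighting of Eqs. (12)–(13) can be written compactly. A minimal sketch (the function name is ours; per-exposure standard deviations are assumed as input):

```python
import numpy as np

def nightly_mean(values, sigmas):
    """Inverse-variance weighted nightly average of per-exposure
    Stokes parameters (Eq. 12) and its synthesized standard error
    (Eq. 13)."""
    v = np.asarray(values, dtype=float)
    w = 1.0 / np.asarray(sigmas, dtype=float) ** 2   # weights 1/sigma_i^2
    var = 1.0 / w.sum()                              # Eq. (13)
    return var * (w * v).sum(), np.sqrt(var)         # Eq. (12) and its error
```

With equal sigmas this reduces to the plain arithmetic mean with the familiar 1/√n shrinkage of the error.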
As for systematic errors, we presume that the four parameters mentioned in the Data reduction section are their major contributors: the polarization efficiency of the total instrument system ( p eff ), the instrumental polarization ( q inst and u inst ), and the instrumental position-angle offset ( θ off ). As in the estimate of the random errors, let us regard each of \(q_{\mathrm{pol},i}'''\) and \(u_{\mathrm{pol},i}'''\) as a function of p eff , q inst , u inst , and θ off , and again assume that no correlation exists between the errors of the four variables. Using the formula for error propagation 60 once more, we calculate the variance of the systematic errors propagated to each of \(q_{\mathrm{pol},i}'''\) and \(u_{\mathrm{pol},i}'''\), and denote these variances \(\delta_{q_i'''}^2\) and \(\delta_{u_i'''}^2\). We then define the eventual systematic errors of \(q_{\mathrm{pol}}'''\) and \(u_{\mathrm{pol}}'''\) for each night as the arithmetic averages of \(\delta_{q_i'''}^2\) and \(\delta_{u_i'''}^2\): $$\delta_{\overline q_{\mathrm{pol}}'''}^2 = \frac{1}{n} \sum_{i=1}^n \delta_{q_i'''}^2, \quad \delta_{\overline u_{\mathrm{pol}}'''}^2 = \frac{1}{n} \sum_{i=1}^n \delta_{u_i'''}^2.$$ (14) Adding the random errors calculated in Eq. ( 13 ) to the systematic errors calculated in Eq. ( 14 ), the total error contained in each of \(\overline q_{\mathrm{pol}}'''\) and \(\overline u_{\mathrm{pol}}'''\) is expressed as follows: $$\varepsilon_{\overline q_{\mathrm{pol}}'''} = \sqrt{\sigma_{\overline q_{\mathrm{pol}}'''}^2 + \delta_{\overline q_{\mathrm{pol}}'''}^2}, \quad \varepsilon_{\overline u_{\mathrm{pol}}'''} = \sqrt{\sigma_{\overline u_{\mathrm{pol}}'''}^2 + \delta_{\overline u_{\mathrm{pol}}'''}^2}.$$ (15) Consequently, we can calculate the errors that P in Eq.
(8) and θ P in Eq. ( 9 ) contain by substituting \((\overline q_{\mathrm{pol}}''', \overline u_{\mathrm{pol}}''')\) in Eq. ( 12 ), together with their errors defined in Eq. ( 15 ), for \((q_{\mathrm{pol}}''', u_{\mathrm{pol}}''')\) in Eqs. ( 8 ) and ( 9 ). During our observations at large solar phase angles (when α > 100° in early September 2016), the Moon was relatively bright (its age was approximately 13 to 16 days). However, as described in our estimate of the random errors, we took shot noise from the background sky into account, which means that the influence of bright objects such as the Moon is automatically accounted for. Consequently, the error bars appearing in Fig. 1 naturally and appropriately incorporate the influence of the Moon. Polarimetric dependence on airmass Conventionally, most astronomical observations for photometry and spectroscopy are conducted at low airmass \(\lesssim\) 2 (i.e., at elevations higher than about 30°) to minimize the effect of atmospheric extinction. Polarimetric analysis, on the other hand, often ignores the airmass correction 61 . One reason is that the Earth's atmosphere is not considered to significantly change the apparent polarimetric state of target objects. It is also important to note that, in most cases, polarimetric observation of small solar system bodies is based on relative photometry: the intensities of the scattered light polarized along the planes perpendicular \((I_\bot)\) and parallel \((I_\parallel)\) to the scattering plane are measured simultaneously, and only their relative value, such as \(\frac{I_\bot - I_\parallel}{I_\bot + I_\parallel}\), matters 23 , 24 . The influence of atmospheric conditions is therefore largely suppressed in most situations. This is a significant difference from ordinary photometric or spectroscopic observations.
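The error budget of Eqs. (14)–(15) and its propagation into P and θ P can be sketched as follows. The paper only states that the errors are "propagated"; the closed-form first-order expressions used below are the standard polarimetric formulas and are our assumption, as are the function names.

```python
import numpy as np

def total_error(random_var, sys_vars):
    """Total error of a nightly-averaged Stokes parameter: the
    synthesized random variance of Eq. (13) added in quadrature to
    the arithmetic mean of the per-exposure systematic variances,
    as in Eqs. (14)-(15)."""
    return np.sqrt(random_var + np.mean(sys_vars))

def pol_errors(q, u, eq, eu):
    """Standard first-order propagation of the total errors (eq, eu)
    of the averaged Stokes parameters into P = sqrt(q^2 + u^2) and
    theta_P = 0.5 * atan2(u, q); theta error returned in degrees."""
    P = np.hypot(q, u)
    sig_P = np.sqrt((q * eq) ** 2 + (u * eu) ** 2) / P
    sig_theta_deg = np.degrees(0.5 * np.sqrt((u * eq) ** 2 + (q * eu) ** 2) / P ** 2)
    return sig_P, sig_theta_deg
```

When eq = eu, the position-angle error reduces to the familiar σ θ = σ P /(2P) radians.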
The polarimetric data we present in this paper were obtained over a wide range of solar phase angles, and hence over a correspondingly wide range of airmasses (from 1.013 to 6.336; see Table 1 ). To confirm the validity of our reduction procedure as applied to data obtained at large airmass, we made additional observations of polarimetric standard stars whose polarization degrees are well determined. Specifically, we picked two unpolarized stars ( θ UMa and HD 212311) listed in ref. 58 , observed them, and plotted the dependence of their P r on airmass in Fig. 3 . These two standard stars are known to have very small polarization degrees ( P r < 0.1%) at visible wavelengths. As the figure shows, the stars' P r remain very small (unpolarized) regardless of the wide range of airmasses at which they were observed. This result justifies the validity of our measurements at large airmass and practically guarantees the correctness of our analysis. Table 1 The journal of our observations Full size table Fig. 3 Linear polarization degree of the two unpolarized standard stars ( θ UMa and HD 212311) and its dependence on airmass (elevation). We obtained the leftmost point (at airmass ~7) through the observation of θ UMa, and the other four points through the observation of HD 212311. The observations were carried out at the same observatory using the same instrument we used for our observations of Phaethon. We converted the elevation angle into airmass through the airmass function implemented in IRAF. Error bars of P r represent the sum of the random and systematic errors that our polarimetric measurement contains, calculated in the manner described in Methods (see subsection Estimate of errors). Supplementary Table 3 provides the actual numerical values used for this plot Full size image Code availability IRAF is the Image Reduction and Analysis Facility, a general-purpose software system for the reduction and analysis of astronomical data.
It is written and supported by the National Optical Astronomy Observatories (NOAO) in Tucson, Arizona, USA. NOAO is operated by the Association of Universities for Research in Astronomy (AURA) under cooperative agreement with the National Science Foundation. The code is available from . Data availability We declare that the data supporting this study’s findings are available within the article and its Supplementary Information file. In addition, raw polarimetric images of the target object (Phaethon) together with bias and dome flat images that we obtained for this article are available from figshare online digital repository: . Raw polarimetric images of standard stars used for inspecting the dependence of P r on airmass (Fig. 3 ) are available from the authors upon reasonable request.
For the first time, astronomers have directly observed the magnetism in one of astronomy's most studied objects: the remains of Supernova 1987A (SN 1987A), a dying star that appeared in our skies over thirty years ago. In addition to being an impressive observational achievement, the detection provides insight into the early stages of the evolution of supernova remnants and the cosmic magnetism within them. "The magnetism we've detected is around 50,000 times weaker than a fridge magnet," says Prof. Bryan Gaensler. "And we've been able to measure this from a distance of around 1.6 million trillion kilometres." "This is the earliest possible detection of the magnetic field formed after the explosion of a massive star," says Dr. Giovanna Zanardo. Gaensler is Director of the Dunlap Institute for Astronomy & Astrophysics at the University of Toronto, and a co-author on the paper announcing the discovery being published in the Astrophysical Journal on June 29th. The lead author, Zanardo, and co-author Prof. Lister Staveley-Smith are both from the University of Western Australia's node of the International Centre for Radio Astronomy Research. SN 1987A was co-discovered by University of Toronto astronomer Ian Shelton in February 1987 from the then Southern Observatory of the University of Toronto in northern Chile. It is located in the Large Magellanic Cloud, a dwarf galaxy companion to the Milky Way Galaxy, at a distance of 168,000 light-years from Earth. It was the first naked-eye supernova to be observed since the astronomer Johannes Kepler witnessed a supernova over 400 years ago. A map of the SN 1987A remnant with short orange lines showing the orientation of the magnetic field. Credit: Giovanna Zanardo In the thirty years since the supernova occurred, material expelled by the blast, as well as the shockwave from the star's death throes, have been travelling outward through the gas and dust that surrounded the star before it exploded. 
Today, when we look at the remnant, we see rings of material set aglow by the supernova's expanding debris and shockwave. Using the Australia Telescope Compact Array at the Paul Wild Observatory, Gaensler and his colleagues observed the magnetic field by studying the radiation coming from the object. By analyzing the properties of this radiation, they were able to trace the magnetic field. "The picture shows what it would look like if you could sprinkle iron filings over the expanding cloud of debris, 170 thousand light years away", says Gaensler. What they found was that the remnant's magnetic field was not chaotic but already showed a degree of order. Astronomers have known that as supernova remnants get older, their magnetic fields are stretched and aligned into ordered patterns. So, the team's observation showed that a supernova remnant can bring order to a magnetic field in the relatively short period of thirty years. The magnetic field lines of the Earth run north and south, causing a compass to point to the Earth's poles. By comparison, the magnetic field lines associated with SN 1987A are like the spokes of a bicycle wheel aligned from the centre out. "At such a young age," says Zanardo, "everything in the stellar remnant is moving incredibly fast and changing rapidly, but the magnetic field looks nicely combed out all the way to the edge of the shell." Gaensler and his colleagues will continue to observe the constantly evolving remnant. "As it continues to expand and evolve," says Gaensler, "we will be watching the shape of the magnetic field to see how it changes as the shock wave and debris cloud run into new material."
arxiv.org/pdf/1806.04741.pdf
Nano
Researchers create organic nanoparticle that uses sound and heat to find, treat tumors
DOI: 10.1038/NMAT2986
http://dx.doi.org/10.1038/NMAT2986
https://phys.org/news/2011-03-nanoparticle-tumors.html
Abstract Optically active nanomaterials promise to advance a range of biophotonic techniques through nanoscale optical effects and integration of multiple imaging and therapeutic modalities. Here, we report the development of porphysomes; nanovesicles formed from self-assembled porphyrin bilayers that generated large, tunable extinction coefficients, structure-dependent fluorescence self-quenching and unique photothermal and photoacoustic properties. Porphysomes enabled the sensitive visualization of lymphatic systems using photoacoustic tomography. Near-infrared fluorescence generation could be restored on dissociation, creating opportunities for low-background fluorescence imaging. As a result of their organic nature, porphysomes were enzymatically biodegradable and induced minimal acute toxicity in mice with intravenous doses of 1,000 mg kg −1 . In a similar manner to liposomes, the large aqueous core of porphysomes could be passively or actively loaded. Following systemic administration, porphysomes accumulated in tumours of xenograft-bearing mice and laser irradiation induced photothermal tumour ablation. The optical properties and biocompatibility of porphysomes demonstrate the multimodal potential of organic nanoparticles for biophotonic imaging and therapy. Main Therapeutic and diagnostic techniques benefiting from components that efficiently absorb light include fluorescent and colorimetric detection 1 , 2 , photothermal and photodynamic therapy 3 , 4 , 5 , photoacoustic tomography (also known as optoacoustic tomography) 6 , 7 , 8 , 9 , optical frequency domain imaging 10 and multimodal techniques 11 , among others. As inorganic nanoparticles often interact strongly with light, they can be used as agents for these techniques. For instance, quantum dots are valuable fluorescent probes and have extinction coefficients in the range of 10 5 –10 6 M −1 cm −1 (ref. 12 ). 
Gold nanoparticles are useful for colorimetric detection, photothermal and photoacoustic techniques owing to their much higher extinction coefficients, of the order of 10 9 –10 11 M −1 cm −1 (ref. 13 ). Despite recent progress 14 , optically active inorganic nanoparticles have not yet achieved broad clinical implementation, possibly stemming from drug loading typically being limited to the nanoparticle surface and concerns regarding long-term safety 15 , 16 , 17 , 18 . In contrast, organic nanoparticles (including liposomes, lipoproteins, micelles, nanospheres and polymersomes) have found many human therapeutic applications as a result of robust biocompatibility and drug-delivery capacity 18 . However, as these organic nanoparticles generally do not intrinsically absorb light in the near-infrared region, they have been of limited use for biophotonics. Although supramolecular assemblies can be formed entirely by porphyrin conjugates, intensely light-absorbing organic small molecules, these constructs have not been thoroughly explored as biophotonic tools owing to a lack of stability, solubility or biological utility 19 . Here we introduce ‘porphysomes’: organic nanoparticles self-assembled from phospholipid–porphyrin conjugates that exhibit liposome-like structure and loading capacity, high absorption of near-infrared light, structure-dependent fluorescence quenching and excellent biocompatibility, and show promise for diverse biophotonic applications. Porphysomes were formed by supramolecular self-assembly. The porphysome subunits consisted of porphyrin–lipid conjugates generated by an acylation reaction between lysophosphatidylcholine and pyropheophorbide, a chlorophyll-derived porphyrin analogue. This hydrophobic chromophore was positioned in place of an alkyl side chain, maintaining an amphipathic structure ( Fig. 1 a). This conjugate could be self-assembled in aqueous buffer with extrusion to form porphysomes.
A concentration of 5 molar% polyethylene glycol (PEG)–lipid was included in the formulation to enhance in vivo pharmacokinetics 20 . Transmission electron microscopy showed that these porphysomes were spherical vesicles of 100 nm in diameter ( Fig. 1 b). At higher magnifications, the porphysome structure was revealed as two layers of higher-density material separated by a 2 nm gap, corresponding to two separate monolayers of porphyrin. Pyropheophorbide porphysomes exhibited two absorption peaks, one at 400 nm and one in the near-infrared window at 680 nm ( Fig. 1 c). Further redshifted porphysomes (760 nm) were produced by using subunits generated from another type of porphyrin; a bacteriochlorophyll analogue that was synthesized in the same manner as pyropheophorbide–lipid. Alternatively, a protocol was developed to insert metal ions into the porphyrin–lipid structure, resulting in shifted optical density bands (440 nm and 670 nm) and demonstrating the unique phenomenon that porphysomes can form metal-chelating bilayers. These different types of porphysome could be useful in scenarios in which specific operating wavelengths are required (for example, to match a given laser excitation source). To verify that the absorbance spectra corresponded to light absorption, rather than scattering, we compared porphysomes with wavelength-matched gold nanorods (with 680 nm extinction peaks) using resonance scattering 21 . Porphysomes exhibited up to 100 times less resonance light scatter at the optical density wavelength peak at which the samples were normalized ( Fig. 1 d). The monodisperse 100 nm sizes exhibited by various types of porphysome ( Fig. 1 e) are in a suitable range to take advantage of the enhanced permeability and retention effect for passive accumulation in tumours 22 , 23 . Flexibility in size control was demonstrated as sonication of porphyrin–lipid in water produced smaller, 30 nm, nanoparticles ( Supplementary Fig. 
S1 ), which could be useful for applications requiring smaller nanoparticle sizes. Geometric calculations for vesicles of 100 nm in diameter composed of subunits with phosphatidylcholine headgroups indicate that there are approximately 8×10 4 porphyrin conjugates per porphysome 24 . On the basis of pyropheophorbide absorbance (accounting for differences in the absorbance of the intact porphysome measured in PBS and the dissociated porphyrin–lipid obtained by diluting 1–2 μl of porphysomes in 1 ml of methanol, as shown in Supplementary Fig. S2 ), we estimate a pyropheophorbide-porphysome extinction coefficient, ɛ 680 , of 2.9×10 9 M −1 cm −1 . This large, near-infrared extinction coefficient is a reflection of the dense porphyrin packing in the bilayer that generates the unique nanoscale optical behaviour of porphysomes. Figure 1: Porphysomes are optically active nanovesicles formed from porphyrin bilayers. a , Schematic representation of a pyropheophorbide–lipid porphysome. The phospholipid headgroup (red) and porphyrin (blue) are highlighted in the subunit (left) and assembled nanovesicle (right). b , Electron micrographs of negatively stained porphysomes (5% PEG–lipid, 95% pyropheophorbide–lipid). c , Absorbance of the porphyrin–lipid subunits incorporated in porphysomes formed from pyropheophorbide (blue), zinc-pyropheophorbide (orange) and bacteriochlorophyll (red) in PBS. d , Resonance light scattering spectra ratio between gold nanorods and pyropheophorbide porphysomes. Nanorod and porphysome concentration was adjusted to have equal optical density at 680 nm. e , Dynamic light scattering size profiles of indicated porphysomes recorded in PBS. Full size image To understand the implications of such a high number of porphyrin–lipid conjugates in a 100-nm-diameter nanovesicle, fluorescence self-quenching was examined. 
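The geometric estimate quoted above (~8×10 4 conjugates per 100 nm vesicle, giving ε 680 ≈ 2.9×10 9 M −1 cm −1 ) can be reproduced to within a few per cent. The headgroup area (0.71 nm 2 ), bilayer thickness (4 nm), and per-subunit extinction coefficient (~3.6×10 4 M −1 cm −1 , back-solved from the paper's whole-particle value) are typical literature values assumed for this sketch, not figures stated in the text:

```python
import math

def porphyrins_per_vesicle(d_nm=100.0, bilayer_nm=4.0, headgroup_nm2=0.71):
    """Number of lipid subunits in a unilamellar vesicle: combined area
    of the outer and inner monolayer surfaces divided by the area per
    phosphatidylcholine headgroup (default values are assumptions)."""
    r_out = d_nm / 2.0
    r_in = r_out - bilayer_nm
    return 4.0 * math.pi * (r_out ** 2 + r_in ** 2) / headgroup_nm2

n_subunits = porphyrins_per_vesicle()   # ~8e4, as quoted in the text
eps_vesicle = n_subunits * 3.6e4        # whole-particle extinction, ~2.9e9 M^-1 cm^-1
```

Such an order-of-magnitude agreement is all a geometric argument of this kind can promise, but it shows how densely the bilayer packs chromophores.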
As increasing amounts of porphyrin–lipid were included in the formulations of standard liposomes (3:2 molar ratio of egg-yolk phosphatidylcholine/cholesterol), self-quenching increased up to 1,200-fold when porphysomes were formed completely by porphyrin–lipid subunits ( Fig. 2 a). This is much greater than typical porphyrin quenching 25 and indicates an energetically favourable supramolecular structure in which the porphyrin–lipid orientation facilitates extensive porphyrin interaction and quenching. As PEG–lipid was added to enhance in vivo pharmacokinetics, its potential to modulate porphysome self-quenching was assessed. Whereas incorporating 5 molar% distearoylphosphatidylcholine (the lipid portion of the PEG–lipid) did not affect quenching, 5 molar% PEG–lipid modestly enhanced self-quenching to over 1,500-fold ( Fig. 2 b). This increase was due to the stabilizing effect of PEG, consistent with observations that porphysomes containing PEG maintained their size and monodispersity for at least nine months, whereas those without PEG aggregated rapidly. To assess whether any nanostructure composed of dye–lipid subunits would be sufficient to generate extreme self-quenching, vesicles formed from 7-nitro-2-1,3-benzoxadiazol-4-yl (NBD)–lipid (a non-ionic dye conjugated to a lipid in a manner similar to that for porphyrin–lipid) were examined. NBD–lipid could not form monodisperse 100 nm vesicles (data not shown) and self-quenching was only 20-fold, highlighting the role of porphyrin interaction in defining porphysome structure and nanoscale properties. Differential scanning calorimetry revealed that the porphyrin–lipid had no apparent transition temperature, indicating that porphyrin stacking is distinct from the typical acyl chain interactions that drive normal lipid transitions in liposomes ( Supplementary Fig. S3 ). 
To determine whether quenching was solely a characteristic of porphyrin confinement in a bilayer, the behaviour of free porphyrin in liposomes was examined. The maximum amount of free pyropheophorbide that could be incorporated into liposomes was only 15 molar%, because manual extrusion became physically impossible beyond this amount. Porphysomes exhibited five times more self-quenching at corresponding levels of porphyrin–lipid incorporation ( Fig. 2 a,b), demonstrating again that the porphyrin bilayer structure is essential for extensive self-quenching. Porphyrin-loaded liposomes have been described for biological applications, but can accommodate only a small molar fraction of porphyrin and cannot prevent porphyrin redistribution to serum proteins 26 . Other porphyrin vesicles and diblock co-polymers have been described that incorporate porphyrin subunits, but lower porphyrin density resulted in lower extinction coefficients and an absence of significant fluorescence self-quenching 27 , 28 . Figure 2: Porphysomes demonstrate extensive and structurally driven self-quenching. a , Porphysome quenching as a function of molar% pyropheophorbide–lipid (mean ±s.d. from four experiments). The F DET / F 0 values are annotated in the graph. F 0 corresponds to the fluorescence of the porphysomes in PBS and F DET is the fluorescence after disruption of the porphysomes using 0.5% Triton X-100. Nanovesicles were formed from films containing the indicated molar% porphyrin–lipid and the remainder egg-yolk phosphatidylcholine/cholesterol (3:2). b , Self-quenching of various nanovesicle formulations (mean ± s.d. from four experiments). DSPC, distearoylphosphatidylcholine; DSPE, distearoylphosphatidylethanolamine. The maximum free porphyrin that could be loaded in liposomes before manual extrusion became physically impossible was 15 molar%. 
Full size image As porphysomes are highly self-quenched, energy that is normally released to fluorescence and singlet-oxygen generation (pyropheophorbide has a combined fluorescence and singlet-oxygen quantum yield approaching unity) is dissipated elsewhere. As seen in Fig. 3 a, on exposure to laser irradiation, energy was released thermally, with an efficiency comparable to gold nanorods (photothermally active inorganic nanoparticles), whereas laser irradiation of standard liposomes generated no significant increase in solution temperature. As photoacoustic signal generation is related to thermal expansion, porphysomes also generated strong photoacoustic signals, proportional to concentration and detectable as low as 25 picomolar, although detection in this range was slightly nonlinear ( Supplementary Fig. S4 ). Although photoacoustic signal is correlated to absorption, when detergent was added to disrupt the porphysome structure (actually generating an increase in absorption), photoacoustic signal decreased up to sixfold ( Fig. 3 b). The detergent had no effect on the photoacoustic signal of the clinically used contrast agent methylene blue, indicating that the structurally based self-quenching of porphysomes is requisite for nanoscale photoacoustic properties. This basic phenomenon of photoacoustic signal attenuation on detergent-induced porphysome dissociation is demonstrated in the photoacoustic images in Fig. 3 c. Figure 3: Multimodal optical utility of porphysomes. a , Photothermal transduction. Solutions were irradiated with a 673 nm laser and imaged with a thermal camera. b , Ratio of photoacoustic amplitudes (P.A.) measured for porphysomes and methylene blue ±0.5% Triton X-100 (mean ± s.e.m. from 10 measurements). det., detergent. c , Photoacoustic images of tubing containing porphysomes and PBS measured ±0.5% Triton X-100. d , Dual modality for photoacoustic contrast and activatable fluorescence. Top, lymphatic mapping. 
Rats were imaged using photoacoustic tomography before and after intradermal (i.d.) injection of porphysomes (2.3 pmol). Secondary lymph vessels (cyan), lymph node (red), inflowing lymph vessel (yellow) and 5 mm scale bar are indicated. Bottom, fluorescence activation after i.v. injection of porphysomes (7.5 pmol) in a KB xenograft-bearing mouse. e , Triggered fluorescence activation on folate-receptor-mediated uptake in KB cells. Porphysomes were incubated for 3 h with KB cells, and porphyrin–lipid (yellow) and nuclei (blue) were visualized with confocal microscopy. Full size image We next examined a unique quality of porphysomes: they are intrinsically suited for both photoacoustic tomography and fluorescence imaging in vivo . Photoacoustic techniques are gaining recognition and have recently been used to non-invasively detect circulating cancer cells in blood vessels 29 , as well as in sentinel lymph nodes 30 . When porphysomes were injected intradermally in rats, the local lymphatic network became clearly detectable within 15 min as porphysomes drained to the lymph vessels and nodes ( Fig. 3 d, top). Porphysomes exhibited a strong photoacoustic signal, permitting the visualization of the first draining lymph node (red), the inflowing lymph vessel (yellow) and surrounding lymph vessels (cyan). The presence of porphysomes in these lymphatic vessels was directly confirmed by the distinct spectral signature of porphysomes in comparison with that of blood ( Supplementary Fig. S5 ). Other lymph nodes could be traced over time ( Supplementary Fig. S6 ). By using a 6.5 ns pulse width, 10 Hz laser, photoacoustic measurements did not generate sufficient heating to damage surrounding tissues. Next, to investigate whether porphysomes are suited for in vivo fluorescence imaging, they were injected intravenously into mice bearing KB cell xenografts.
At 15 min post-injection, there was low overall fluorescence signal, demonstrating the self-quenching of porphysomes in vivo ( Fig. 3 d, bottom left). After 2 days, high tumour fluorescence was observed, as porphysomes accumulated in the tumour and became unquenched ( Fig. 3 d, bottom right), potentially through an enhanced permeability and retention effect or receptor-mediated endocytosis (the porphysomes used for fluorescence imaging included 1 molar% of folate–PEG–lipid). The concept of porphysome quenching in vivo was more markedly illustrated when we injected detergent-disrupted porphysomes into mice and observed much higher initial fluorescence ( Supplementary Fig. S7 ). Thus, on the basis of unique self-assembled and nanoscale properties, porphysomes are intrinsically multimodal for both photoacoustic tomography and low background fluorescence imaging. To examine the behaviour of porphysomes on uptake by cancer cells, folate-receptor-targeted porphysomes were produced by including 1 molar% folate–PEG–lipid. The folate receptor is overexpressed in a variety of cancers and effectively internalizes liposomes conjugated to folate 31 . When KB cells (which overexpress the folate receptor) were incubated with folate porphysomes, specific uptake was observed by confocal microscopy and could be inhibited by free folate ( Fig. 3 e). As intact porphysomes in the incubation media were essentially non-fluorescent, confocal imaging was carried out without a need to change the media. Control experiments revealed that the porphyrin–lipid ended up in endosomes and lysosomes, on the basis of partial co-localization with transferrin and LysoTracker ( Supplementary Fig. S8 ). We next assessed factors relevant to potential clinical applications of porphysomes. To bypass the unknown, long-term side effects of inorganic nanoparticle accumulation in body organs, luminescent silica nanoparticles have been developed that decompose in aqueous solution over a period of hours 32 .
Porphysomes are stable for months when stored in aqueous solutions, but they were prone to enzymatic degradation ( Fig. 4 a). On incubation with detergent and lipase, the phospholipid structure was cleaved, with the main aromatic product being pyropheophorbide, which was the starting material in the synthetic reaction generating the porphyrin–lipid. Similarly to chlorophyll, pyropheophorbide is known to be enzymatically cleaved into colourless pyrroles when incubated with peroxidase and hydrogen peroxide 33 . We verified this degradation by monitoring the loss of porphyrin absorption and confirmed that pyropheophorbide could be efficiently degraded by peroxidase. To our knowledge, this is the first example of an enzymatically biodegradable, intrinsically optical active nanoparticle. We next carried out a preliminary study to assess the potential toxicity of porphysomes. When mice were treated with a high dose of porphysomes (1,000 mg kg −1 ), they remained healthy over a two-week period, as demonstrated by a lack of major behaviour changes or weight loss ( Fig. 4 b). At the two-week time point, mice were euthanized and blood tests were carried out ( Fig. 4 c). Liver function tests indicated that the hepatic function of the mice was generally normal, with the exception of elevated levels of bile acids and alanine transferase (less than two times the upper range of normal). Red blood cell counts and attributes were unaffected by the large dose of porphyrin–lipid, which did not interfere with the physiological regulation of endogenous porphyrin (haem). Unaffected white blood cell counts imply that porphysomes were not immunogenic at the two-week time point, even at the high doses given to mice. Post-mortem histopathological examination of the liver, spleen and kidneys indicated that these organs were in good condition and were not impacted by the high intravenous (i.v.) porphysome dose ( Fig. 4 d). 
Figure 4: Porphysomes are enzymatically biodegradable and well tolerated in vivo . a , Enzymatic degradation of porphysomes. Porphysomes were lysed with 1% Triton X-100 and incubated with lipase in PBS. Degradation was probed using high-performance liquid chromatography/ mass spectrometry analysis. Purified pyropheophorbide was incubated with peroxidase and degradation was verified by monitoring the loss of absorbance at 680 nm. b , Mouse mass change after i.v. administration of 1,000 mg kg −1 porphysomes or PBS (mean ± s.d., n=3). c , Blood test parameters for mice with i.v. administration of porphysomes or PBS (mean ± s.d., n=3). As some test values for γ-globulin transferase results were given as less than 5 U l −1 , all values less than 5 U l −1 are reported as 5 U l −1 . d , Representative haematoxylin and eosin stained sections of indicated organs from mice two weeks after i.v. injection of 1,000 mg kg −1 porphysomes or PBS. Full size image The large aqueous core of the porphysome, contained within the porphyrin bilayer, has potential for cargo loading ( Fig. 1 b). When porphysomes (containing 5% PEG–lipid) were hydrated using a 250 mM carboxyfluorescein solution and extruded, only a limited amount of carboxyfluorescein was stably entrapped in the porphysomes, as determined by gel filtration ( Fig. 5 a, left). As cholesterol is known to enhance loading of compounds within phosphatidylcholine-based liposomes 34 , we included 30 molar% cholesterol into the formulation and repeated the passive carboxyfluorescein loading. The cholesterol-containing porphysomes were able to load ∼ 20 times more carboxyfluorescein when compared with the porphysomes lacking cholesterol ( Fig. 5 a, right). At this high loading concentration, carboxyfluorescein itself was self-quenched in the porphysome ( Fig. 5 b, left). Furthermore, the porphysome remained fluorescently self-quenched ( Fig. 
5 b, right), indicating that most of the light absorbed by the porphyrin bilayer was converted to heat. As expected, passive loading of carboxyfluorescein entrapped only a small fraction of the total fluorophore in the hydration solution. One of the most powerful drug loading techniques is active loading, which uses pH or ion gradients to concentrate amphipathic weakly basic molecules into liposomes 35 and polymersomes 36 . The importance of this loading technique is reflected by Doxil, the first clinically implemented nanoparticle 37 , which is a liposomal formulation of actively loaded doxorubicin. We applied the ammonium sulphate gradient method 35 with a doxorubicin to pyropheophorbide–lipid molar ratio of 1:5 to actively load doxorubicin into porphysomes. Without addition of cholesterol, some loading of doxorubicin was observed by gel filtration, but the fraction of the total doxorubicin incorporated from the solution was approximately 10% ( Fig. 5 c, left). However, when 50 molar% cholesterol was added to the porphysome formulation, strong active loading was achieved and porphysomes loaded 90% of all free doxorubicin in solution into the porphysome core ( Fig. 5 c, right). These porphysomes also maintained a self-quenching porphyrin bilayer ( Fig. 5 d). Both actively and passively loaded porphysomes exhibited monodisperse sizes between 150 nm and 200 nm ( Fig. 5 e). Figure 5: Active and passive loading of porphysomes. a , Passive loading of carboxyfluorescein (C.F.). Porphysomes composed without (Porph.) or with 30 molar% cholesterol (Chol. porph.) were extruded with 250 mM carboxyfluorescein and gel filtration was carried out. Fluorescence emission (em.) of pyropheophorbide (blue) and carboxyfluorescein (green) was measured in 0.5% Triton X-100 to avoid quenching. b , Fluorescence quenching of porphysomes composed with 30 molar% cholesterol (blue) loaded with carboxyfluorescein (green). 
Spectra were taken before (dashed) and after (solid) addition of detergent and normalized to maximum fluorescence. c , Active loading of doxorubicin (Dox.). Fluorescence of gel filtration fractions (collected when porphysomes began to elute) of porphysomes without or with 50 molar% cholesterol. Fluorescence of pyropheophorbide (blue) and doxorubicin (green) was measured with detergent. d , Fluorescence quenching of pyropheophorbide in porphysomes composed with 50 molar% cholesterol and loaded with doxorubicin. Normalized spectra were measured before (dashed) and after (solid) addition of detergent. e , Size distributions of porphysomes loaded with carboxyfluorescein (black) or doxorubicin (grey). Full size image Photothermal therapy is an emerging technique that can make use of contrast agents that convert light into heat at target sites. Inorganic nanoparticles including gold nanoshells 14 , gold nanorods 38 , gold nanocages 39 and graphene 40 have been used to destroy tumours using photothermal therapy. To demonstrate the biophotonic therapeutic potential of an organic nanoparticle, we carried out preliminary experiments using porphysomes as agents for photothermal therapy. We used porphysomes containing 30 molar% cholesterol because they demonstrated favourable biodistribution following systemic administration, with more accumulation in the tumour (3% injected dose per gram) and less accumulation in the liver and spleen than standard porphysomes ( Supplementary Fig. S9 a). Cholesterol porphysomes also had a 35% longer serum half-life of 8.5 h ( Supplementary Fig. S9 b). A 658 nm laser outputting 750 mW (with a power density of 1.9 W cm −2 ) was used to irradiate the KB tumours in xenograft-bearing mice following porphysome administration ( Fig. 6 a). At 24 h before treatment, mice were injected intravenously with 42 mg kg −1 porphysomes or a PBS control. 
The tumour was then irradiated with the laser for 1 min and temperature was monitored using a thermal camera ( Fig. 6 b). The tumour temperature in the porphysome group rapidly reached 60 °C, whereas the tumours in mice injected with PBS were limited to 40 °C ( Fig. 6 c). Following treatment, mice in the porphysome- and laser-treated group developed eschars on the tumours, whereas the laser-alone group and the porphysomes-alone group did not. After two weeks, the eschars healed and the tumours in the treated group were destroyed ( Fig. 6 d). Unlike the tumours in mice treated with porphysomes and laser treatment, tumours in mice that received laser treatment alone or porphysome injection alone continued to grow rapidly and all of the mice in those groups had to be euthanized within 21 days ( Fig. 6 e). This photothermal experiment corresponded to a treatment with a therapeutic index of at least 25, given the safety of porphysomes at 1 g kg −1 i.v. doses. We believe that porphysomes could impact a range of clinical applications, potentially exploiting synergistic, multimodal optical imaging and therapeutic approaches. However, to achieve clinical relevance, the rapid attenuation of light in biological tissues must be addressed by improving light delivery methods or by targeting diseases that affect organs more accessible to light 41 . Figure 6: Porphysomes as photothermal therapy agents. a , Photothermal therapy set-up showing laser and tumour-bearing mouse. b , Representative thermal response in KB tumour-bearing mice injected intravenously 24 h before with 42 mg kg −1 porphysomes or PBS. Thermal image was obtained after 60 s of laser irradiation (1.9 W cm −2 ). c , Maximum tumour temperature during 60 s laser irradiation (mean ± s.d. for five mice per group). d , Photographs showing therapeutic response to photothermal therapy using porphysomes. e , Survival plot of tumour-bearing mice treated with the indicated conditions. 
Mice were euthanized when tumours reached 10 mm in size ( n =5 for each group). Full size image Similarly to liposomes, porphysomes are self-assembled from simple monomers, serve as efficient nanocarriers, and are enzymatically biodegradable and highly biocompatible. A small molar percentage of lipid conjugated to targeting moieties, such as antibodies, aptamers, proteins or small targeting molecules, could be easily incorporated to potentially direct porphysomes to a range of different target cells. Similarly to optically active inorganic nanoparticles, porphysomes have large, tunable extinction coefficients and are effective agents for photothermal and photoacoustic applications. Porphysomes exhibit unique nanoscale optical properties and are intrinsically suited for multimodal imaging and therapeutic applications. Methods Formation and characterization of porphysomes. 1-palmitoyl-2-hydroxy-sn-glycero-3-phosphocholine (Avanti Polar Lipids) was acylated with the Spirulina pacifica-derived pyropheophorbide or Rhodobacter sphaeroides-derived bacteriochlorophyll to yield pyropheophorbide–lipid or bacteriochlorophyll–lipid, respectively, as acyl-migrated regioisomers. Porphysomes were formed by dispersing lipids and porphyrin–lipids and evaporating the solvent to form a film. Films were rehydrated with PBS, subjected to freeze–thaw cycles and extruded through a 100 nm polycarbonate membrane at 65 °C. Porphysome size was characterized with a Nanosizer ZS90 (Malvern Instruments). Electron microscopy was carried out with 2% uranyl acetate negative staining and a Tecnai F20 electron microscope (FEI company). Porphysome self-quenching was characterized using a Fluoromax fluorometer (Horiba Jobin Yvon). Porphysome or liposome solutions were excited at 420 nm and emission was measured and integrated from 600 to 750 nm. Background subtraction of an equal concentration of 100 nm egg-yolk phosphatidylcholine/cholesterol (3:2) liposomes was carried out. 
The fluorescence self-quenching ratio, F_DET/F_0, of each sample was determined as the ratio of the integrated fluorescence emission in the presence or absence of 0.5% Triton X-100. Resonance light scattering and initial photothermal response were carried out with wavelength-matched gold nanorods (kindly provided by the Kumacheva lab, University of Toronto), adjusted to the same absorbance at 680 nm. For resonance light scattering, excitation and emission were set to the same wavelength and scanned from 400 to 700 nm. After blank subtraction, the resonance scatter of the two samples was divided. Photothermal response was determined using a thermal camera (Mikroshot) following 60 s of laser irradiation with a 673 nm laser diode outputting 150 mW. Passive loading of porphysomes was accomplished by hydrating the porphyrin–lipid film with 250 mM carboxyfluorescein (Anaspec). Following porphysome preparation, unencapsulated carboxyfluorescein was removed by gel filtration using a PD-10 column (GE Healthcare). To actively load doxorubicin, a 0.45 mg ml −1 solution of doxorubicin hydrochloride (Sigma Aldrich) was loaded into porphysomes containing 155 mM ammonium sulphate at pH 5.5 by incubating for 2 h at 37 °C. Free doxorubicin was removed by gel filtration. See Supplementary Information for further details. Multimodal porphysome imaging and therapy. Photoacoustic measurements were carried out on a photoacoustic system with a Ti:sapphire tunable laser and an ultrasound transducer. The axial and transverse resolutions of the system were 150 μm and 590 μm, respectively. Measurements were carried out at 760 nm using bacteriochlorophyll porphysomes in PBS solution. For structure-dependent studies, the photoacoustic signal of porphysomes was compared with that of porphysomes that had been lysed with 0.5% Triton X-100. Animal experiments involving photoacoustic imaging were carried out in compliance with Washington University guidelines. 
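The self-quenching readout described in the methods above reduces to a ratio of two integrated emission spectra (600–750 nm), measured with and without detergent. A minimal Python sketch of that calculation, using illustrative Gaussian spectra rather than measured data (the 675 nm peak position and the peak intensities are assumptions for demonstration only):

```python
import math

# Sketch of the fluorescence self-quenching readout described above:
# integrate emission over 600-750 nm with and without 0.5% Triton X-100,
# then take the ratio. The Gaussian spectra here are illustrative only.
WAVELENGTHS = [600.0 + i for i in range(151)]  # nm, 1 nm steps

def spectrum(peak_height, centre=675.0, width=15.0):
    # toy emission spectrum (assumed shape, not measured data)
    return [peak_height * math.exp(-0.5 * ((w - centre) / width) ** 2)
            for w in WAVELENGTHS]

def integrate(values):
    # trapezoidal integration on the 1 nm grid
    return sum((a + b) / 2.0 for a, b in zip(values, values[1:]))

f_0 = integrate(spectrum(20.0))      # intact porphysomes: emission suppressed
f_det = integrate(spectrum(1000.0))  # detergent-disrupted: emission restored
unquenching = f_det / f_0
print(f"F_DET/F_0 = {unquenching:.0f}")  # 50 with these placeholder intensities
```

With these placeholder intensities the ratio is 50; the actual degree of self-quenching depends on the porphyrin–lipid content of the bilayer and must come from the measured spectra.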
In vivo lymphatic mapping with porphysomes was carried out using Sprague-Dawley rats ( ∼ 200 g) before and after an intradermal porphysome injection on the left forepaw. Mouse xenograft experiments were carried out in compliance with University Health Network guidelines. For fluorescence imaging, 3×10 6 KB cells were inoculated subcutaneously in nude mice and the xenograft grew for 2–3 weeks. Mice were injected through the tail vein with bacteriochlorophyll porphysomes. Imaging was carried out using a Maestro imaging system (CRI) using a 710–760 nm excitation filter and an 800 nm long-pass emission filter. For photothermal therapy, KB tumours were grown by injecting 2×10 6 cells into the right flank of female nude mice. When tumour diameters reached 4–5 mm, 42 mg kg −1 of porphysomes containing 30 molar% cholesterol were injected through the tail vein. At 24 h post-injection, mice were anesthetized with 2% (v/v) isoflurane and tumours were irradiated with a laser with 750 mW output at 660 nm and a 5 mm by 8 mm spot size. Tumour temperatures were recorded with an infrared camera. Tumour volume was measured daily and mice were euthanized once tumour diameter reached 10 mm. See Supplementary Information for further details. Porphysome degradation and toxicity. For enzymatic degradation, pyropheophorbide porphysomes were incubated with lipase from Rhizomucor miehei (Sigma) for 24 h at 37 °C in PBS containing 0.5% Triton X-100 and 10 mM CaCl 2 . The solution was then subjected to high-performance liquid chromatography/mass spectrometry analysis to monitor the generation of the pyropheophorbide starting material. Pyropheophorbide was further degraded according to known procedures 33 by incubating 100 μM pyropheophorbide in 0.25% Triton X-100 with 25 units of horseradish peroxidase type II (Sigma), 250 μM of hydrogen peroxide and 500 μM 2,4-dichlorophenol, and absorption loss at 680 nm was monitored. 
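The irradiation parameters given above (750 mW output over a 5 mm by 8 mm spot) are consistent with the 1.9 W cm −2 power density quoted in the results; a quick arithmetic check in Python:

```python
# Irradiance check for the photothermal treatment parameters given above:
# 750 mW delivered over a 5 mm x 8 mm spot.
power_w = 0.750                    # laser output, in watts
spot_area_cm2 = 0.5 * 0.8          # 5 mm x 8 mm spot, in cm^2
irradiance_w_per_cm2 = power_w / spot_area_cm2
print(f"{irradiance_w_per_cm2:.3f} W/cm^2")  # prints 1.875 W/cm^2, i.e. ~1.9 W/cm^2
```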
Toxicity experiments were carried out with six-week-old male BALB/c mice (Charles River) in compliance with University Health Network guidelines. Blood was sampled 6 h before porphysome or saline injection. Blood was subjected to the Mammalian Liver Profile tests (Abaxis) and MASCOT haematology profiling (Drew Scientific) according to the manufacturers’ protocols. Mice were injected through the tail vein with porphysomes (1,000 mg kg −1 ) or an equal volume of PBS. Over a two-week period, mice were observed for behavioural changes and weight was monitored. Mice were then killed, blood was obtained by cardiac puncture for analysis, and organs were sent for histopathology analysis. See Supplementary Information for further details.
A team of scientists from Princess Margaret Hospital has created an organic nanoparticle that is completely non-toxic, biodegradable and nimble in the way it uses light and heat to treat cancer and deliver drugs. (A nanoparticle is a minute particle with novel properties.) The findings, published online today in Nature Materials, are significant because, unlike other nanoparticles, the new nanoparticle has a unique and versatile structure that could potentially change the way tumors are treated, says principal investigator Dr. Gang Zheng, Senior Scientist, Ontario Cancer Institute (OCI), Princess Margaret Hospital at University Health Network. Dr. Zheng says: "In the lab, we combined two naturally occurring molecules (chlorophyll and lipid) to create a unique nanoparticle that shows promise for numerous diverse light-based (biophotonic) applications. The structure of the nanoparticle, which is like a miniature and colorful water balloon, means it can also be filled with drugs to treat the tumor it is targeting." It works this way, explains first author Jonathan Lovell, a doctoral student at OCI: "Photothermal therapy uses light and heat to destroy tumors. With the nanoparticle's ability to absorb so much light and accumulate in tumors, a laser can rapidly heat the tumor to a temperature of 60 degrees and destroy it. The nanoparticle can also be used for photoacoustic imaging, which combines light and sound to produce a very high-resolution image that can be used to find and target tumors." He adds that once the nanoparticle hits its tumor target, it becomes fluorescent to signal "mission accomplished". "There are many nanoparticles out there, but this one is the complete package, a kind of one-stop shopping for various types of cancer imaging and treatment options that can now be mixed and matched in ways previously unimaginable. The unprecedented safety of this nanoparticle in the body is the icing on the cake. 
We are excited by the possibilities for its use in the clinic," says Dr. Zheng.
10.1038/NMAT2986
Medicine
Medical abortions obtained through online telemedicine shown to be effective, safe
Self reported outcomes and adverse events after medical abortion through online telemedicine: population based study in the Republic of Ireland and Northern Ireland, www.bmj.com/content/357/bmj.j2011 Editorial: Abortion by telemedicine: an equitable option for Irish women, www.bmj.com/content/357/bmj.j2237
http://www.bmj.com/content/357/bmj.j2011
https://medicalxpress.com/news/2017-05-medical-abortions-online-telemedicine-shown.html
Abstract Objectives To assess self reported outcomes and adverse events after self sourced medical abortion through online telemedicine. Design Population based study. Setting Republic of Ireland and Northern Ireland, where abortion is unavailable through the formal healthcare system except in a few restricted circumstances. Population 1000 women who underwent self sourced medical abortion through Women on Web (WoW), an online telemedicine service, between 1 January 2010 and 31 December 2012. Main outcome measures Successful medical abortion: the proportion of women who reported ending their pregnancy without surgical intervention. Rates of adverse events: the proportion who reported treatment for adverse events, including receipt of antibiotics and blood transfusion, and deaths reported by family members, friends, or the authorities. Care seeking for symptoms of potential complications: the frequency with which women reported experiencing symptoms of a potentially serious complication and the proportion who reported seeking medical attention as advised. Results In 2010-12, abortion medications (mifepristone and misoprostol) were sent to 1636 women and follow-up information was obtained for 1158 (71%). Among these, 1023 women confirmed use of the medications, and follow-up information was available for 1000. At the time women requested help from WoW, 781 (78%) were <7 weeks pregnant and 219 (22%) were 7-9 weeks pregnant. Overall, 94.7% (95% confidence interval 93.1% to 96.0%) reported successfully ending their pregnancy without surgical intervention. Seven women (0.7%, 0.3% to 1.5%) reported receiving a blood transfusion, and 26 (2.6%, 1.7% to 3.8%) reported receiving antibiotics (route of administration (IV or oral) could not be determined). No deaths resulting from the intervention were reported by family, friends, the authorities, or the media. 
Ninety-three women (9.3%, 7.6% to 11.3%) reported experiencing any symptom for which they were advised to seek medical advice, and, of these, 87 (95%, 87.8% to 98.2%) sought attention. None of the five women who did not seek medical attention reported experiencing an adverse outcome. Conclusions Self sourced medical abortion using online telemedicine can be highly effective, and outcomes compare favourably with in clinic protocols. Reported rates of adverse events are low. Women are able to self identify the symptoms of potentially serious complications, and most report seeking medical attention when advised. Results have important implications for women worldwide living in areas where access to abortion is restricted. Introduction About a quarter of the world’s population lives in countries with highly restrictive abortion laws. 1 Women in these countries often resort to unsafe methods to end their pregnancies. Globally, each year an estimated 43 000 women die as a result of lack of access to safe legal abortion services through their countries’ formal healthcare systems. 2 Millions more have complications. 3 Worldwide, the fourth leading cause of maternal mortality is unsafe abortion. 4 Yet in many countries, self sourced medical abortion provides a vital alternative to dangerous methods such as using sharp objects or noxious substances. Women source mifepristone and misoprostol (or misoprostol alone) themselves and use the medications outside the formal healthcare system. In some settings, women buy the medications from pharmacies or markets. 5 In other settings, they can self source using online telemedicine initiatives that provide medications as well as help and support by email or instant messaging. 6 One setting in which online telemedicine has dramatically changed women’s access to abortion is in the Republic of Ireland and Northern Ireland. Abortion laws in both the Republic and Northern Ireland are among the most restrictive in the world. 
1 Abortion is allowed only to save a woman’s life and, in Northern Ireland only, to preserve her permanent physical and mental health. 7 8 Although Northern Ireland is part of the UK, the Northern Irish legislature did not adopt the 1967 Abortion Act, which legalised abortion carried out by registered medical practitioners in England, Scotland, and Wales. As a result, abortion under most circumstances remains illegal there under the Offences Against the Person Act of 1861, which provides a maximum penalty of life imprisonment for the woman undergoing the abortion. 9 Women from the Republic and Northern Ireland who do not want to, or feel they cannot, continue with a pregnancy have traditionally had three options: those with the requisite financial and logistical means can travel abroad to access abortion in a clinic, while those who do not must either self induce using an unsafe method or continue their unwanted pregnancy. For the past decade, however, women of all financial means have also had the option to self source early medical abortion through online telemedicine. 10 11 Despite having been used by thousands of women in the Republic and Northern Ireland 11 and tens of thousands of women worldwide, 12 little is known about the outcomes of these self sourced medical abortions. Two existing studies have examined data from telemedicine initiatives: one across various settings 6 and the other in Brazil. 13 These studies showed encouraging results with respect to efficacy but were limited by small sample sizes and relatively high losses to follow-up. Moreover, no study has examined whether women are able to safely manage their own abortions by identifying the symptoms of serious complications and presenting for medical advice when appropriate. Using data from an online telemedicine initiative, we conducted a population based analysis of women in the Republic of Ireland and Northern Ireland who self sourced medical abortion during a three year period. 
We examined self reported outcomes and complications after medical abortion through online telemedicine and assessed women’s ability to self screen for the symptoms of potentially serious complications of abortion and their propensity to seek medical attention. In light of current policy debates, 9 14 the Republic and Northern Ireland provide a particularly important and timely opportunity to examine women’s self reported outcomes. Methods We examined data from Women on Web (WoW), a non-profit organisation that provides early medical abortion through online telemedicine in countries where access to safe abortion is restricted. The service is currently available for women up to 10 weeks’ gestation at the time of request. To make a request, women fill out a consultation form on the WoW website. 15 A doctor reviews the medical information on the form and, if clinical criteria are met, provides a prescription according to the WHO recommended dose regimen for medical abortion. 16 Mifepristone and misoprostol are sent through the mail by a partner organisation. Women either make a donation to support the service or, if they cannot afford to do so, the service is donated to them. Real-time instruction about how to use the medications, as well as help and support during and after the abortion process, are provided by a multilingual specially trained helpdesk team. 6 Women are invited to share their experiences four weeks later using an online evaluation form or by emailing the helpdesk. Our dataset includes all women in the Republic and Northern Ireland who filled out an online consultation form and to whom mifepristone and misoprostol were sent from 1 January 2010 to 31 December 2012. We chose this date range for three reasons. Firstly, although WoW began providing online telemedicine abortion in 2006, changes to the software used to handle requests mean that data are available only from 1 January 2010 onwards. 
Secondly, in January 2013, a change to the evaluation form reduced the level of detail available on symptoms of potential complications and help seeking behaviour. Thirdly, WoW was the only telemedicine service operating in the Republic and Northern Ireland in 2010-13. Our sample therefore represents all women accessing medical abortion through online telemedicine during this period. Overall, 2150 women contacted the WoW helpdesk to request an abortion from 1 January 2010 to 31 December 2012. Among these, 514 cancelled their request or discontinued contact with the helpdesk, and no medications were sent to them. The analytic sample size was determined by the number of women remaining to whom medications were sent and who subsequently confirmed using them and provided follow-up information about the outcome of their abortion. We did not distinguish between women who live in the Republic and Northern Ireland in our analyses. All women living in the Republic who access abortion through WoW have their medications sent to an address outside the Republic because the import by mail of prescription medications is prohibited and all incoming medications are confiscated by Irish customs. 17 Some women from the Republic are already aware of the customs situation and select Northern Ireland as their country of origin on the consultation form. The border between the two countries, however, is, at present, barely discernible and fully open to travel. Moreover, the practicalities women face in terms of accessing abortion in the two countries are similar. The online consultation form includes self reported information about age, weeks’ gestation, feelings about the decision to have an abortion, and any medical contraindications or conditions requiring additional screening. We categorised age as <20, 20-24, and into 5 year increments thereafter, with a final group of ≥45. 
Gestational age was reported as <7 weeks’ or 7-9 weeks’, which represents gestational age at the time of the consultation. During the time period of our study, WoW collected data on gestational age according to these categories to reflect the change in the registered use of mifepristone from up to 7 weeks’ to up to 9 weeks’ gestation in 2009. 18 In our sample, 58% of women reported having gestational age confirmed through ultrasonography, and the remainder used a pregnancy calculator based on the date of their last menstrual period, which has been shown to be an accurate method of determining gestational age for early medical abortion. 19 The estimated time for the medications to arrive in the Republic/Northern Ireland is between five and seven days, and 94% of women who used them reported doing so less than a week after they arrived. Thus, while we do not have information on women’s exact gestational ages at the time they took the medications, we can estimate that for most women it was less than two weeks after the consultation. Feelings about the decision to have an abortion were reported as “I can cope with my feelings regarding my decision” and “I have some worries about my decision and would like further information.” Women who expressed worries were directed to appropriate sources of information. Questions on medical history included the presence of any contraindications (such as bleeding disorders, inherited porphyrias, allergies to mifepristone or misoprostol) or medical conditions that required additional medical screening (such as hypertension and diabetes). We retrieved follow-up information for as many women as possible through either an evaluation form sent out four weeks after the medications were sent or email follow-up through the helpdesk. The evaluation form is based on similar follow-up instruments used in the clinical setting. 
Available information included the number of women who confirmed delivery of mifepristone and misoprostol, the number who confirmed whether or not they had used the medications, and, among those who confirmed use, the outcome of the abortion. The number of women who confirm using the medications is inevitably lower than the number to whom the medications are sent because some experience a spontaneous miscarriage, decide to travel abroad to obtain an abortion, decide to continue with their pregnancy, or simply do not respond to follow-up from the helpdesk. Women who confirmed using the medications were asked whether or not they were still pregnant, whether they received any surgical intervention (dilatation and evacuation or vacuum aspiration), and whether they received any other treatment after their abortion, including antibiotics and blood transfusion. Women were also asked if they experienced any symptoms of potentially serious complications, including: “bleeding requiring more than two maxi pads an hour for more than two hours”; “fever over 39°C or abnormal vaginal discharge”; and “persistent pain continuing for several days after the abortion.” They were then asked whether they went to a hospital or to see a doctor. WoW advises women to seek medical attention if any of the above symptoms arise. For women who completed medical abortion at home and for whom self reported information on outcome was available, we first examined the age distribution, gestational age, feelings about abortion, and prevalence of contraindications and comorbidities. We then examined the proportion for whom medical abortion was successful according to the standard definition of success in the Medical Abortion Reporting of Efficacy (MARE) Guidelines—that is, the proportion who were able to expel their pregnancy without the need for surgical intervention. 
20 Next, we examined the prevalence of reported adverse events, following to the extent possible the categories defined by Cleland and colleagues. 21 Information was available on antibiotics, blood transfusion, and death. The highly politicised nature of abortion in the Republic and Northern Ireland means that any death suspected to be related to abortion would be extremely high profile news. We think it is likely that WoW would either have known if the woman in question had accessed their service or would have been notified by the authorities or the woman’s family or friends. Finally, we examined the prevalence with which women reported symptoms of possibly serious complications and the frequency with which those who reported such symptoms sought medical attention. Data analysis was conducted with Stata version 13.1 (StataCorp. 2013. College Station, TX). We calculated point estimates and exact binomial 95% confidence intervals for the overall population and for the binary categories of gestational age available in our dataset. Characteristics and outcomes by category of gestational age were compared with the Fisher-Freeman-Halton test and Fisher’s exact test, respectively. Findings were considered significant at an α level of 0.05. Anonymised data were provided to us by WoW. Patient involvement Patients were not involved in the design or conduct of the study. The follow-up that WoW provides, however, is designed to deal with the priorities and experiences of women who access the service. Thus, though this study is an analysis of secondary, anonymised data, with no direct participant involvement, the research questions were informed by the needs of women who rely on WoW to access abortion. Results From 1 January 2010 to 31 December 2012, WoW sent 200 mg mifepristone and 1200 μg misoprostol to 1636 women living in the Republic and Northern Ireland (fig 1 ⇓ ). Among these, 1181 women confirmed subsequent use or non-use of the medications. 
Among the remainder, 24 women confirmed delivery but offered no further follow-up information, while 431 neither confirmed delivery nor offered further follow-up information. Among the 1181 women who confirmed use or non-use of the medications, 1023 used the pills, and 158 did not. Follow-up information on abortion outcome (that is, whether or not a woman was still pregnant after using the medications) was available for 1000 of the 1023 women who confirmed use of the medications. Thus, outcome data were available for 1158 women who were sent mifepristone and misoprostol, representing 71% follow-up. Reasons for not using the medications included having had a spontaneous miscarriage in the meantime, accessing abortion through another pathway such as travelling abroad, and deciding to continue the pregnancy. One woman elected not to use the medications because she discovered her pregnancy was ectopic. No other ectopic pregnancies were reported either before or after use of the medications. Fig 1 Women accessing medical abortion through WoW (Women on Web) Among the 1000 women who used the medications and for whom we had information about the outcome of the abortion, 781 (78%) reported being <7 weeks pregnant at the time of requesting help from WoW, and 219 (22%) reported being 7-9 weeks pregnant (table 1 ⇓ ). Almost a third (n=306) of women were aged 30-34, 236 were 35-39, 195 were 25-29, 79 were aged ≤24, and 184 were ≥40. Virtually all (997) reported being able to cope with their decision to have an abortion. None had any contraindications to medical abortion, and 23 had a medical condition that required extra screening to ensure that it could be carried out safely. 
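The participant flow described above (and summarised in fig 1) can be verified with simple arithmetic. The sketch below is a consistency check on the published counts, not study code:

```python
# Cohort accounting from the text and fig 1, as a consistency check.
requests = 2150                  # women who contacted the WoW helpdesk, 2010-12
cancelled = 514                  # cancelled their request or discontinued contact
sent = requests - cancelled      # women to whom medications were sent
assert sent == 1636

confirmed_use_or_not = 1181      # confirmed use or non-use of the medications
delivery_only = 24               # confirmed delivery, no further follow-up
no_information = 431             # neither confirmed delivery nor followed up
assert confirmed_use_or_not + delivery_only + no_information == sent

used, not_used = 1023, 158
assert used + not_used == confirmed_use_or_not

outcome_known = 1000             # of the 1023 who used the medications
with_outcome_data = outcome_known + not_used
assert with_outcome_data == 1158

follow_up = with_outcome_data / sent
print(f"follow-up: {with_outcome_data}/{sent} = {follow_up:.0%}")  # prints 71%
```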
Comparison of the characteristics of the 1158 women for whom follow-up information was available versus the 478 women for whom no follow-up information was available showed no clinically or statistically significant differences between the two groups (see appendix).

Table 1 Medical and demographic characteristics of women conducting self sourced medical abortion through online telemedicine. Figures are number (percentage) of women

Virtually all women (99.2%, 95% confidence interval 98.4% to 99.7%) reported having ended their pregnancies, and 94.7% (93.1% to 96.0%) reported a successful medical abortion (that is, ending their pregnancies with no surgical intervention) (table 2). Women with a reported gestational age of 7-9 weeks at the time they requested an abortion more commonly reported receiving a surgical intervention than women with a reported gestational age of <7 weeks (7.3% (4.2% to 11.6%) v 3.7% (2.5% to 5.3%), respectively; P=0.04). There were, however, no statistically or clinically significant differences by gestational age in the proportions reporting a successful medical abortion (95.4% (93.7% to 96.8%) in the <7 weeks group v 92.2% (87.9% to 95.4%) in the 7-9 weeks group; P=0.09).

Table 2 Outcome of abortion reported by women conducting medical abortion through online telemedicine. Figures are number of women (percentage, 95% confidence interval)

Among the 1000 women for whom information about the outcome of their abortion was available, information about adverse events and symptoms of potentially serious complications was available for 987 (99%). Rates of self reported treatment for adverse events were low: 3.1% (95% confidence interval 2.1% to 4.4%) reported any treatment for a possible adverse event (table 3).
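The gestational-age comparison of surgical intervention rates (7.3% v 3.7%, P=0.04) can be illustrated with a standard-library-only Fisher's exact test. This is a sketch rather than the authors' analysis, and the cell counts (16/219 at 7-9 weeks, 29/781 at <7 weeks) are back-calculated from the reported percentages.

```python
from math import comb

def fisher_exact_two_sided(table):
    """Two-sided Fisher's exact test for a 2x2 table [[a, b], [c, d]].

    Sums the probabilities of all tables with the same margins that are
    no more likely than the observed table.
    """
    (a, b), (c, d) = table
    n = a + b + c + d
    row1, col1 = a + b, a + c

    def hypergeom_pmf(x):
        # P(X = x) under the hypergeometric null with fixed margins.
        return comb(col1, x) * comb(n - col1, row1 - x) / comb(n, row1)

    p_obs = hypergeom_pmf(a)
    x_min = max(0, row1 - (n - col1))
    x_max = min(row1, col1)
    # Small tolerance guards against floating point ties.
    return sum(p for x in range(x_min, x_max + 1)
               if (p := hypergeom_pmf(x)) <= p_obs * (1 + 1e-9))

# Surgical intervention by reported gestational age (counts back-calculated):
# 7-9 weeks: 16 of 219 (~7.3%); <7 weeks: 29 of 781 (~3.7%)
p_value = fisher_exact_two_sided([[16, 203], [29, 752]])
print(f"P = {p_value:.2f}")
```

With these reconstructed counts the sketch yields a P value consistent with the reported P=0.04; `scipy.stats.fisher_exact` uses the same two-sided convention.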
Overall, 2.6% (1.7% to 3.8%) reported receiving antibiotics by any route of administration, 0.7% (0.3% to 1.5%) reported receiving a blood transfusion, and no deaths were reported by family, friends, the authorities, or the media. Rates of reported adverse events were not significantly higher in the 7-9 weeks group than in the <7 weeks group (4.6% (2.2% to 8.2%) v 2.7% (1.7% to 4.1%), respectively; P=0.19).

Table 3 Treatment for adverse events reported by women conducting medical abortion through online telemedicine. Figures are number of women (percentage, 95% confidence interval)

Among the 987 women for whom we had information on self reported symptoms, 9.3% (95% confidence interval 7.6% to 11.3%) reported experiencing symptoms of a potentially serious complication (table 4). The prevalence of reporting any such symptom was higher in the 7-9 weeks group than the <7 weeks group (13.7% (9.4% to 19.0%) v 8.1% (6.2% to 10.2%), respectively; P=0.02). Bleeding requiring more than two maxi pads an hour for more than two hours was the most commonly reported of the symptoms (5.2%, 3.9% to 6.7%). Overall, 95% (87.8% to 98.2%) of women who reported symptoms of a potentially serious complication for which they were advised to seek medical assistance said they went to a hospital or clinic. There were no substantive or significant differences in the proportions of women who sought advised medical attention by reported gestational age (94% (84.3% to 98.2%) in the <7 weeks group v 96.7% (82.8% to 99.9%) in the 7-9 weeks group; P=1.0). None of the five women who did not seek medical help reported an adverse outcome or treatment for a complication, and none of the women who did not report symptoms of a potentially serious complication reported an adverse event.

Table 4 Reported symptoms and care seeking for potentially serious complications among women conducting medical abortion through online telemedicine.
Figures are number of women (percentage, 95% confidence interval)

Discussion

Among women in the Republic of Ireland and Northern Ireland, early medical abortion provided through online telemedicine was highly effective. The reported rate of successful medical abortion compares favourably with the rates of those carried out within the formal healthcare system, both when mifepristone and misoprostol are administered in clinic and when mifepristone is administered in clinic and misoprostol is taken at home. 22 The reported prevalence of adverse events is low, and, critically, when women reported experiencing symptoms of a potentially serious complication, almost all reported seeking medical attention as advised.

Limitations and strengths of study

The main limitation of our study is that we relied on women's self reports with respect to the outcome and complications of abortion. Many studies in the clinical setting, however, have the same limitation, as women often do not return to the clinic and either self report by phone or are simply lost to follow-up. Moreover, as the women in our study are by definition conducting their abortions outside the formal healthcare setting, self report is the only possible method of follow-up. While self reporting could be subject to recall or social desirability bias, the short time period between the abortion and the collection of follow-up information should minimise recall bias. A previous large randomised controlled trial showed that self assessment of the outcome of medical abortion was non-inferior to clinical follow-up, indicating that women are capable of determining on their own whether or not their abortion has been successful.
23 Although judgment of the symptoms of potentially serious complications is subjective, and not all will actually be indicative of an adverse event, it is reassuring that virtually all women who reported experiencing such symptoms said they sought medical advice. It is also unlikely that women had much incentive to give inaccurate reports of adverse events or complications. WoW was their main source of advice during their abortion, and so women who reported problems tended to have communicated with the helpdesk. Another important limitation is that we were unable to ascertain whether the treatment women received for potential adverse events was appropriate and necessary. Previous work has shown that rates of surgical intervention after medical abortion provided through online telemedicine vary widely by setting. 24 The surgical intervention rate of 4.5% that we found is similar to equivalent rates found in studies of medical abortion in the clinical setting (which typically range from 3% to 5%). 22 It is possible, however, that providers in countries where abortion is highly restricted might intervene inappropriately because of lack of experience or overcautious management. They could also cause further complications through unnecessary or inappropriate interventions, and we are unable to distinguish these from adverse events relating to the abortion itself (for example, surgical intervention could lead to the need for a blood transfusion). The rate of reported receipt of an antibiotic in our sample was higher than previous reports of receipt of intravenous antibiotics after medical abortion in a clinic. 21 We could not distinguish between antibiotics administered intravenously versus orally, the latter being much more common after medical abortion. Additionally, some healthcare professionals might provide oral antibiotics “just in case” or for an incidentally discovered urinary tract or sexually transmitted infection. 
Our rate of reported blood transfusion was also higher than in previous large studies of abortion in a clinic 21 25 but still low at less than 1%. We also lacked information on two other adverse events: hospital admission and emergency room treatment. 21 Previous studies, however, have shown considerable overlap between these events and the receipt of blood transfusion or antibiotics, 26 both of which we were able to include. Key strengths of our study include the large sample size and high follow-up rate, which is comparable with, or in some cases better than, that in studies of abortion within the formal healthcare setting. 27 Additionally, we included data on the entire population of women in the Republic and Northern Ireland who accessed medical abortion through online telemedicine. While we must acknowledge the limitations of necessary reliance on self reporting and incomplete follow-up, the questions we sought to answer about medical abortion provided outside the formal healthcare system in a setting where abortion is highly restricted cannot be answered by a randomised controlled trial, clinical trial, or prospective cohort study. We have drawn on the best available "real world" data to answer these important questions. We consider that the rates of reported complications and successful abortion in our study might be conservative estimates. A previous study using telephone follow-up among a smaller sample of women who self sourced their own abortions using WoW shows that those to whom medications were sent but who did not spontaneously provide follow-up information by email or an online evaluation form were less likely to have experienced complications and more likely to have had a successful medical abortion. 6 This possibility must still be balanced against the biases discussed above.
We are also unable to definitively identify gestational age at the time of abortion, and some women might have had higher gestational ages than they were willing to disclose or might have experienced delays in receipt of the medications. Thus, we might reasonably expect the reported rates of adverse events to be slightly higher than those reported in clinical studies of medical abortion, which generally include gestational ages up to a maximum of 7 or 9 weeks. 21 22 25 26 Moreover, a systematic review of the safety of regimens of medical abortion at home up to 8 weeks' gestation indicates average rates of transfusion of 0.1%, which, although higher than in studies of medical abortion in a clinic, is still considered low. 22 It is also important to view the rates of self reported adverse events shown in this study in the context of the other options available to women in the Republic and Northern Ireland who have an unwanted pregnancy. The complication rates we found are lower than the risk of equivalent complications during delivery in the UK. 28 They are also much lower than the equivalent risks associated with unsafe methods of abortion. 3 While some could also raise ethical questions about the provision of medications without formal in person consultation with a doctor, it is worth noting that similar models of telemedicine are used in the US to prescribe and dispense numerous other medications (with the exception of controlled substances) 29 and that both mifepristone and misoprostol are on the WHO list of essential medicines. 30 Additionally, one might view the provision of abortion medications by telemedicine as an ethical response to the unethical practice of criminalising women for choosing abortion. A growing body of literature documents the negative health impacts experienced by women denied a wanted abortion and forced to continue an unwanted pregnancy compared with those who were able to access an abortion.
31 32 In 2016, the United Nations Human Rights Council found Ireland in violation of its human rights obligations, stating that its abortion laws subject women to "suffering and discrimination." 33

Applicability

Our results might not be generalisable to all settings where women self source abortion using online telemedicine. A recent review of the acceptability of self managed abortion in both legal and legally restricted contexts emphasises the influence of local attitudes towards abortion on women's experiences. 34 Higher levels of education and medical knowledge, as well as better access to healthcare, compared with women in developing settings might mean that women in the Republic and Northern Ireland are more likely to use the medications correctly and to seek follow-up care. The stigma surrounding abortion experienced by Irish women, however, is considerable. Women in both countries face the possibility of prosecution and jail sentences if they are found to have conducted an abortion at home. Recently, several women in Northern Ireland have been charged after being reported to the authorities by housemates or medical staff. 35 36 Women who have had a self sourced abortion and those who have had an early pregnancy loss are clinically indistinguishable, but these events raise the concerning possibility of a chilling effect, whereby women might be reluctant to seek care for fear of being reported. On average, women who access abortion through online telemedicine are older than women who access abortion in a clinic setting in the UK. 37 This difference might arise because younger women are less likely to recognise their pregnancy sufficiently early to choose medical abortion if they have not been pregnant before, because they might be more able to travel abroad to access abortion thanks to parental assistance or fewer childcare commitments, or because they might already be overseas pursuing higher education.
Alternatively, older women might be more likely to have had abortions or deliveries before and thus be more confident about self managing their abortion at home. A previous study examining the decision making and experiences of Irish women who self sourced medical abortion through WoW indicated that some do so because they lack the financial or social resources to travel elsewhere. 11 Thus, our sample could be less socioeconomically advantaged than women who travel to access abortion care. The same study also showed, however, that others access abortion through online telemedicine because they prefer the privacy of using the medications at home, they lack the required documentation to travel, or they prefer medical abortion to a surgical alternative. 11

Implications and conclusions

Our results have important implications for the perception of abortion self sourced outside the formal health system using online telemedicine. Firstly, they clearly show that not all abortions taking place outside the law are unsafe abortions. Secondly, they add an important dimension to existing evidence that women themselves report abortion through online telemedicine as a positive experience with benefits for health and wellbeing. 11 Millions of women worldwide live in countries where self sourced medical abortion is a potentially lifesaving option, and strengthening services outside the formal healthcare setting could be a vital component of strategies to reduce maternal mortality from unsafe abortion. Finally, given the trajectory of abortion policy in Europe and the US, the visibility and importance of self sourced medical abortion will continue to increase. There are already reports of women seeking abortion outside the formal healthcare setting in the US. 38 Investigating women's experiences, preferences, outcomes, and unmet needs in various settings is a critical goal for future research.
What is already known on this topic

In many countries where abortion through the formal healthcare system is restricted, self sourced medical abortion through online telemedicine provides an alternative to methods such as sharp objects or noxious substances. Little is known about the safety and effectiveness of medical abortion provided through this online pathway. Previous studies have been limited by small sample sizes, high losses to follow-up, and inability to examine self screening for potentially serious complications.

What this study adds

This study is based on women's self reports of outcomes and complications of medical abortion and provides the best evidence to date that self sourced medical abortion through online telemedicine is highly effective and that rates of adverse events are low. Reported rates of successful medical abortion are comparable with those for protocols in clinics, and women report successfully self screening for potentially serious complications and seeking medical assistance when necessary. For the millions of women worldwide living in areas where access to abortion is restricted, the findings show the vital role played by self sourced medical abortion in providing an option with high effectiveness rates and few reported adverse outcomes.

Footnotes

Contributors: ARAA conceived the original research question, conducted the statistical analyses, prepared the tables and figures, and wrote the first draft of the manuscript. ARAA, RG, and JT contributed to the study design. RG and ID provided the de-identified data. ARAA and JT did the initial data interpretation. All authors contributed to final data interpretation, revised first and subsequent drafts critically for intellectual content, and approved the final manuscript. All authors, external and internal, had full access to all of the data (including statistical reports and tables) in the study and take responsibility for the integrity of the data and the accuracy of the data analysis.
ARAA is guarantor.

Funding: This study was funded by a grant from the Society of Family Planning (SFPRF10-JI2) and was supported in part by the Eunice Kennedy Shriver National Institute of Child Health and Human Development of the NIH through grant R24HD04284 to the population research centre at the University of Texas at Austin, and in part by grant P2C HD047879 to the office of population research at Princeton University. The funders played no role in the study design; in the collection, analysis, and interpretation of data; in the writing of the report; or in the decision to submit the article for publication. The authors are completely independent from the funding sources. The content of this article is solely the responsibility of the authors and does not necessarily represent the official views of the Society of Family Planning or the National Institutes of Health.

Competing interests: All authors have completed the ICMJE uniform disclosure form and declare grants from the Society of Family Planning (ARAA) and infrastructure support from the National Institutes of Health (JT and ARAA); RG is founder and director of Women on Web, ID is a prescribing physician for Women on Web, and JT serves on the Board of the Women on Web Foundation; no other relationships or activities that could appear to have influenced the submitted work.

Ethical approval: The University of Texas at Austin institutional review board reviewed and approved study protocols and declared the use of the de-identified database for research purposes exempt from full board review. All women consented to the anonymised use of their data at the aggregate level for research purposes.
Transparency: The lead author affirms that the manuscript is an honest, accurate, and transparent account of the study being reported; that no important aspects of the study have been omitted; and that any discrepancies from the study as planned (and, if relevant, registered) have been explained.

Data sharing: No additional data available.

This is an Open Access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial.
Women in Ireland and Northern Ireland acquiring medical abortion pills through online telemedicine report successful terminations with low rates of adverse effects, according to new research published in The BMJ by Princeton University, the University of Texas at Austin and Women on Web. The researchers examined self-reported outcomes following medical abortions conducted outside the formal healthcare setting through Women on Web (WoW), a nonprofit organization that provides access to medications used to induce abortion. The results show that 95 percent of self-sourced and self-managed medical abortions were successful. Less than one percent of the women required a blood transfusion, and three percent received antibiotics. Women were able to identify the symptoms of potentially serious complications, and almost all reported seeking in-person medical attention when advised. "Our results show that telemedicine abortions provided by Women on Web are safe and effective," said co-author James Trussell, the Charles and Marie Robertson Professor of Public and International Affairs, Emeritus, and professor of economics and public affairs, emeritus, at Princeton University's Woodrow Wilson School of Public and International Affairs. Abortion laws in Ireland and Northern Ireland are among the most restrictive in the world, with abortion criminalized in most circumstances. However, online telemedicine has dramatically changed abortion access. "Irish and Northern Irish people who access or help others to access this pathway are choosing an option that has similar effectiveness rates to medication abortion performed in a clinic and has lower rates of complications than continuing a pregnancy to delivery," said lead author Abigail Aiken, assistant professor at the University of Texas's Lyndon B. Johnson School of Public Affairs. 
"This study shows that medication abortion self-sourced and self-managed outside the formal healthcare setting can be a safe and effective option for those who rely on or prefer it." Using data from WoW, researchers conducted a population-based analysis of 1,000 women in the Republic of Ireland and Northern Ireland who self-sourced medical abortion during a three-year period. Aiken and Trussell, along with co-authors Rebecca Gomperts and Irena Digol from WoW, examined self-reported outcomes and complications, and assessed women's ability to self-screen for the symptoms of potentially serious complications, as well as their propensity to seek medical attention. The findings also have implications for other parts of the world where abortion is difficult to access. "Following waves of restrictive legislation in the United States, the parallels between women seeking abortion in certain parts of the U.S. and Ireland and Northern Ireland are striking," Aiken said. "Women in Ireland and Northern Ireland have three options when faced with a pregnancy they do not want or feel they cannot continue: travel long distances to access in-clinic abortion care, remain pregnant, or self-source their own abortion outside the formal healthcare setting. In the case of the United States, we already know women are self-sourcing, so there is a public health duty to help make it as safe and supported as possible." The researchers acknowledge the limitations of self-reporting, but also emphasize that in situations where women self-source their own abortions outside the formal healthcare setting, self-report is the only possible method of follow-up. The study's other strengths include its large sample size and a follow-up rate comparable with, or in some cases better than, that of studies of abortion within the formal healthcare setting.
The study, "Self-Reported Outcomes and Adverse Events Following Medical Abortion via Online Telemedicine: A Population-based Study in Ireland and Northern Ireland," was published online May 16.
www.bmj.com/content/357/bmj.j2011